Hacker News

Ask HN: Whatever happened to dedicated sound cards?

During the '90s and the early '00s, dedicated soundcards were in-demand components in much the same way GPUs are today. From what I know, Creative won, on-board sound became good enough sometime between Windows XP and Windows 7, and the audio enthusiasts moved on to external DACs and $2000 headphones. Today Creative still sells soundcards, but none of them appear to be substantial improvements over previous models.

So what other reasons could have caused the decline in interest? Was there nothing that could be improved upon? Were there improvements on the software side that made hardware redundant and/or useless? Is there any other company besides Creative, however large or small, still holding the torch for innovating in this space?


speeder

The main reason for their death, in my opinion, is the DRM-driven changes to Windows driver rules (although MS claims it wasn't because of DRM).

When DVDs and HDMI were becoming popular and Windows Vista was launched, a lot of restrictions were put on drivers. I saw many people defending them, claiming it was for better stability, avoiding blue screens and so on.

But a major thing the restrictions did was restrain several sound card features, most notably their 3D audio calculations, which were then just starting to take off. People were making 3D audio APIs that intentionally mirrored the 3D graphics APIs, with the idea that you would have both a GPU and a 3D audio processor, and you would have games where the audio was calculated with reflections, refractions and diffractions...

After that, the only use of sound cards became what the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.

Gone are the days of 3D audio chips, or of sound cards full of synthesizers that could create new audio on the fly.

Yamaha still manufactures sound card chips, and their current ones have way less features than the ones that they made during the sound card era.

EDIT: I also forgot to point out that the same restrictions kinda killed analog video too. For example, before the restrictions nothing prevented people from sending arbitrary data to analog monitors, so you could have monitors with non-standard resolutions, non-square pixels, unusual bit depths (for example, SGI made some monitors that happily accepted 48 bits of color) or no pixels at all (think Vectrex), and so on. All this died, and in a sense it also affected video development: some features that video cards were getting at the time were removed, and hardware design moved down a narrower path, more compatible with MS rules.

As for what the restrictions have to do with DRM: the point was to not let people intercept audio and video over analog signals at perfect quality, since this would be an easy way to get around the DRM built into HDMI.

joe91

This is nonsense. The main reason behind the demise of dedicated sound cards: motherboard sound chipsets got "good enough". The value add wasn't adding enough value any more because you can get decent sound quality just by using the default sound output provided by your motherboard.

3D sound and other processing got baked into middleware for games because it became trivial to do all of the processing in software - and the processing became more advanced than anything that the sound card vendors were offering (and they didn't move quickly enough anyway).

Pro audio vastly progressed past anything that is possible to provide in fixed silicon. For input, dedicated USB (and ethernet) audio interfaces progressed to the point where it would be ridiculous to provide such functionality on a general "sound card".

It's just evolution - there just isn't a compelling enough niche for a dedicated sound card any more.

ideashower

This is the answer. The only people buying dedicated sound cards these days are those doing audio engineering or production work, needing access to dedicated inputs and interfaces. Motherboard sound chipsets cover nearly every other use case.

retrac

Correct. Same thing has happened with GPUs. The vast majority of general purpose computers sold today come with integrated graphics. Only those who have unusually heavy 3D graphics needs, like CAD or the latest games at full quality, still buy a discrete video card.

utucuro

To add to this - I have a dedicated sound card on my desktop - it lives inside the tiny USB dongle of my gaming headset and makes it emulate surround sound a little bit better. My two tiny, tinny speakers are connected to the onboard audio output. Anything I watch, I watch on the TV, or via a bluetooth headset on the phone or tablet. Anything I listen to, I listen to on the phone via the aforementioned bluetooth headset, or the nice big non-mobile bluetooth speaker.

I USED to have two powerful and rather higher quality speakers attached to a creative card back in the day when I did all that with the PC though.

rocket_surgeron

>Gone are the days of 3D audio chips, or having sound cards full of synthesizers that could create new audio on the fly.

Modern CPUs can either do or emulate this, probably using less power than a sound card.

Very, very few people have their PCs connected to an AV receiver or multichannel speakers, but positional audio is still widely supported in Windows applications using XAudio2.

The reason sound cards went away is that the use cases went away:

1. People who want high quality recording shifted to firewire and later high-speed USB external audio interfaces. No matter how hard you try an external metal box with multiple inputs and outputs will always be better than a PCI/PCIe card inside a PC for recording. Rare use case in the recording world for sound cards.

2. Gamers who want 3d/positional audio either use headphones, find the 5.1 integrated outputs to be adequate, or like me, run a digital audio cable to a surround sound receiver. Rare use case in the gaming world for sound cards.

Dolby Atmos is awesome for positional audio in games but there are multiple less expensive and more accessible methods for surround audio nowadays. Decent positional audio can be experienced using a laptop and headphones-- no sound card required.

https://www.pcgamingwiki.com/wiki/Glossary:Surround_sound

Back in the sound card days you had to squint at the back of the box and ask "is this Creative 3D? Aureal?" Nowadays you just plug 5.1 into your PC's onboard audio, tell Windows you have 5.1, and it works (mostly).

rkagerer

> No matter how hard you try an external metal box with multiple inputs and outputs will always be better than a PCI/PCIe card inside a PC for recording

USB can't offer as low latency as a piece of well-designed hardware plugged directly into your PCI bus, at least in my own limited experience. This comes into play when doing music keyboard recording.

eg. I found it difficult to find a USB MIDI adapter that didn't introduce unacceptable latency (when trying to record new tracks synced in real time to existing ones). Edirol was recommended to me but even after tweaking settings for hours it fell short. I wound up buying a second-hand Creative X-Fi Elite Pro PCIe card and love it.

cogman10

The latency for USB3 is ~30 μs.

I don't think it's a USB protocol problem but rather a driver/manufacturer problem.
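To put numbers on that: the bus hop is tiny compared to the driver's audio buffer, which is where perceptible latency actually comes from. A back-of-envelope sketch (buffer sizes and sample rate are illustrative, not measured from any specific interface):

```python
# Audio latency is usually dominated by buffer size, not bus transit time.

def buffer_latency_ms(frames: int, sample_rate_hz: int) -> float:
    """Time to fill (or drain) one audio buffer, in milliseconds."""
    return frames / sample_rate_hz * 1000.0

USB3_TRANSIT_MS = 0.03  # ~30 us, the figure quoted above

for frames in (64, 256, 1024):
    total = buffer_latency_ms(frames, 48_000) + USB3_TRANSIT_MS
    print(f"{frames:>4}-frame buffer @ 48 kHz: {total:.2f} ms")
```

Even a modest 256-frame buffer contributes over 5 ms, about 180 times the quoted bus transit, which is why driver and buffer configuration dominates in practice.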

cassianoleal

> USB can't offer as low latency as a piece of well-designed hardware plugged directly into your PCI bus, at least in my own limited experience.

This may be true but I've never had latency issues with USB soundcards. Right now I have a Line6 Helix Floor unit that I use to play guitar with. I can route the audio through the Helix effects, into Logic and back to Helix for more post-processing and have no latency problems.

I have had other brands and models and none introduced perceivable latency.

I don't use MIDI but I doubt it requires less latency than live guitar playing.

I had a PCIe soundcard a few years ago that made it almost impossible to get rid of ground hum though.

denkmoon

USB _is_ well-designed hardware plugged directly into your PCI bus.

xhevahir

I don't know. I didn't think a mere audio interface plus soundfonts was an adequate replacement for a really good soundcard like the Yamaha SW1000XG: https://www.musicradar.com/news/blast-from-the-past-yamaha-s... .

Then there's Korg's Oasys PCI, which was so powerful that for a long time people kept using Windows 98 after Korg stopped making drivers: https://www.soundonsound.com/reviews/korg-oasys-pci

squeaky-clean

For playing older games that relied on that hardware, sure soundfonts aren't a great replacement. But modern games moved away from needing to use a soundcard audio engine and are just able to completely do it on the CPU, and the only real benefit of the soundcard at that point is the latency/dac/amp

Arcanum-XIII

I have sub-5ms latency with my RME sound card, and could probably go lower. What will hurt after a while is the bandwidth at 192 kHz plus some other protocol on top (clock syncing, MIDI…). But we're speaking of more than 64 channels in AND out.

So USB is pretty good, and for most sound cards USB2 is enough. Otherwise you can go Thunderbolt, which offers an experience on par with PCIe.

What a consumer sound card offers nowadays is a better DAC compared to your motherboard's, or a better output for headsets.

copperx

64 channels of I/O at 192 kHz over USB2? That's insane. Isn't USB2's bandwidth 60 MB/sec?

Ok, I just checked, and a 192kHz 24-bit WAV file is only 0.56 MB per sec. Nice.
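Extending that arithmetic to the 64-channel figure mentioned upthread (assuming packed 24-bit samples and decimal megabytes):

```python
# Back-of-envelope: 64 channels of 24-bit / 192 kHz PCM vs. USB2's ~60 MB/s.
channels = 64
sample_rate = 192_000   # Hz
bytes_per_sample = 3    # 24-bit, packed

per_channel = sample_rate * bytes_per_sample    # bytes per second
total = channels * per_channel                  # one direction only

print(f"one channel : {per_channel / 1e6:.2f} MB/s")   # 0.58 MB/s
print(f"64 channels : {total / 1e6:.1f} MB/s")         # 36.9 MB/s
```

One direction fits under USB2's theoretical 60 MB/s, but 64 in AND 64 out (roughly 74 MB/s combined) does not, which matches the point above about bandwidth being the eventual limit.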

fsckboy

> Very, very few people have their PCs connected to an AV receiver or multichannel speakers

...in part because there's no way to do that, and if you do it by using the headphone jack, in addition to low quality you're also going to get all system sounds

genmud

Maybe I am self-selecting, but I don't think I have seen a desktop computer or motherboard in the last 15 years without S/PDIF over TOSLINK or RCA. Hell, for that matter, a bunch of laptops, and even Apple until recently, included mini-TOSLINK optical out via the headphone jack.

FpUser

The motherboard on my PC has an optical output, and it is connected to an external amp that drives 2 audio monitors and the sub.

brokenmachine

HDMI or spdif

0xfaded

This might be due to computers and headphones becoming portable. When I was a kid my PC had a soundblaster connected to a hacky 7.1 setup in my room, and counterstrike supported it.

majormajor

This seems very off base from my recollection and the state of tech availability at the time. The "analog hole" was around for long after the release of Vista, with Vista maintaining support for direct multi-channel analog audio out as well as VGA/component video out at HD resolutions. But that was not a hugely mainstream thing, because going analog meant by definition non-perfect results: decoding a digital stream, sending analog, then re-encoding on capture. They started laying the groundwork, but 2007 PCs and laptops didn't commonly have digital output, so playing a DVD over VGA, for instance, was still extremely common and allowed.

And beyond that, this is the first I'm seeing a claim that "doing 3d audio calculations" was restricted and that this had anything to do with intercepting pre-encoded multi-channel DVD/digital media streams. They seem completely separate from each other as far as technical pipelines go.

Sound cards as a general consumer product were dead long before Vista. The last hurrah I remember was the SB Live!/Turtle Beach Santa Cruz era, the 1998-2001 stuff; Vista didn't come out until 2007 (Longhorn was famously botched, etc...).

CPUs just got fast enough that all of that, including 3d calcs, could be done better on common CPU by the mid-2000s. Do it on a sound card, you have to buy a new sound card to get improvements. Do it directly in the OS or in-game, and you can benefit from improvements from the OS or library or game devs immediately.

MAGZine

I had a decent gaming machine in 2007-2008, and, in particular I remember that Battlefield 2 sounded a LOT better with a soundcard. The difference was night and day.

In particular, EAX (Environmental Audio Extensions), which was a feature of the X-Fi cards, was definitely EOL'd due to Vista's changes around the DirectSound3D APIs.

https://en.wikipedia.org/wiki/Environmental_Audio_Extensions


isleyaardvark

My recollection as well, and supported by the chart on this page showing a huge drop in sales from 2001-2003:

https://www.tomshardware.com/reviews/future-3d-graphics,2560...

(It’s not a great chart but shows a general trend.)

Sakos

> CPUs just got fast enough that all of that, including 3d calcs, could be done better on common CPU by the mid-2000s. Do it on a sound card, you have to buy a new sound card to get improvements. Do it directly in the OS or in-game, and you can benefit from improvements from the OS or library or game devs immediately.

A dedicated chip is often better than a general purpose CPU (hello GPUs?). Game audio made a huge step back with Vista and beyond. Since audio cards could no longer do what they used to because of the limited driver model and developers/studios were more focused on graphical fidelity and physics calculations, nobody was going to waste precious CPU cycles on audio, at least not any more than the bare minimum.

jasonwatkinspdx

It had nothing to do with DRM.

3D audio on the PC was deliberately killed by Creative.

They sued Aureal into bankruptcy, bought it in the court auction, and the day the sale closed they nuked the support website and took the drivers offline.

They used similar scummy tactics to decapitate any other competitors. Then they considered their reverb based spatial audio solution sufficient, and promptly sat on their heels doing zero innovation while collecting a rent.

And then, as chip technology improved, a basic "Sound Blaster 64" chip became so cheap that motherboard manufacturers started bundling it in as a selling point (which made a ton of sense for non-gaming PC users, btw). Additionally, MS stepped in and provided some software spatial functionality within DirectX, as processors had improved to the point where dedicated hardware for it wasn't necessary.

Back then I worked in gamedev, and I briefly considered going into competition with Miles et al. with a 3D audio library after the Aureal fiasco, after I stumbled on some interesting papers doing Fresnel zone tracing variations as low-overhead spatial audio, but ultimately I wasn't serious about it vs other options at the time.

dleslie

FWIW, nowadays those libraries are quite mature and free to use, with deep integration into game engines available.

E.g., Steam Audio: https://valvesoftware.github.io/steam-audio/

Dracophoenix

> Back then I worked in gamedev, and I briefly considered going into competition with Miles et al. with a 3D audio library after the Aureal fiasco, after I stumbled on some interesting papers doing Fresnel zone tracing variations as low-overhead spatial audio, but ultimately I wasn't serious about it vs other options at the time.

Which papers were they, if you recall?

Sharlin

> or having sound cards full of synthesizers that could create new audio on the fly.

To be fair, realtime synthesis just became obsolete for most purposes once CD quality digitized audio became cheap enough to store (and later, to stream). And for musicians, once CPUs became fast enough, SW synthesis with its limitless possibilities took over from HW synthesis.
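To illustrate "limitless possibilities": in software, a synth voice is just code emitting samples, so a new effect is a few lines rather than a new chip. A minimal sketch (all parameters arbitrary):

```python
import math

def render(freq_hz: float, seconds: float, sample_rate: int = 44_100,
           vibrato_hz: float = 5.0, vibrato_depth: float = 0.01):
    """Render a sine tone with vibrato as a list of float samples in [-1, 1]."""
    samples = []
    phase = 0.0
    for n in range(int(seconds * sample_rate)):
        t = n / sample_rate
        # Modulate the instantaneous frequency to produce vibrato
        f = freq_hz * (1.0 + vibrato_depth * math.sin(2 * math.pi * vibrato_hz * t))
        phase += 2 * math.pi * f / sample_rate
        samples.append(math.sin(phase))
    return samples

tone = render(440.0, 0.5)   # half a second of A4
```

On a fixed-function FM or wavetable chip, each such behavior had to exist in silicon; in software it's one parameter away.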

gnramires

Something like (fragment?) shaders for audio would be amazing though. Or maybe just an embedded low power CPU (running realtime). I think there's a lot of room for generative audio still (or various degrees of "rendering audio"), or applying distortions like doppler, various reverbs, or just generating things on the fly via synthesis, you can do things like make each effect unique and have various custom parameters (material pairs, impact velocity, room conditions, etc.).

I think full 3d audio is a different problem though because it, at least, requires a version of the rendering problem (for waves). It's harder in some ways than light rendering (because phase/coherence matters sometimes, the wave equation is harder to solve), but easier in others (no need as much detail as light, wavelength is large), or just plain weird (nonlinear effects from rattling and such).

EnigmaCurry

Generating sound on the GPU via shaders is definitely a thing. There's a bunch on shadertoy that do just that : https://www.shadertoy.com/results?query=&sort=popular&filter...

Arcanum-XIII

There are still sound cards with programmable DSPs, quite often used to replicate high-end effects. They cost a lot - and every plug-in is specialized to a brand. Still quite useful, because their quality is very high and they don't impact the recording process.

Or there are still some "generic" boxes (I mean ones you can program yourself) like the Symbolic Sound Kyma Capybara. They're quite niche though, like modular synths.

bick_nyers

Sounds kinda like ray tracing and physically based rendering to me

Tangurena2

Also, disk space became cheap enough that games could store audio files (such as MP3s) instead of having the sound card generate the audio on the fly. I remember that in Age of Empires (released 1997) the music was MIDI files, and the "instruments" would be changed by the game's code to make the music sound much better. EverQuest (1999) also started with MIDI files, but later expansions replaced the music with MP3 files.
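Some rough numbers showing why storage was the gating factor (the MIDI size is a ballpark guess; the PCM and MP3 sizes follow directly from the formats):

```python
# Approximate storage for a 3-minute track in each era's format.
seconds = 180

midi_bytes = 30 * 1024                      # a typical game MIDI file: tens of KB
cd_pcm_bytes = seconds * 44_100 * 2 * 2     # 16-bit stereo PCM, CD quality
mp3_bytes = seconds * 128_000 // 8          # 128 kbps MP3

print(f"MIDI : {midi_bytes / 1024:.0f} KB")
print(f"CD   : {cd_pcm_bytes / 1e6:.1f} MB")   # ~31.8 MB
print(f"MP3  : {mp3_bytes / 1e6:.1f} MB")      # ~2.9 MB
```

On a mid-'90s hard drive measured in hundreds of megabytes, a full soundtrack in PCM was out of the question; MIDI plus a synth chip was the only option until MP3 and cheap disks arrived.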

happymellon

Surely this is like the raytracing scenario for GPUs.

There are always slightly harder, and better ways of doing something which the accelerators are better at. Audio acceleration I guess peaked too early or they just couldn't get the tech demos as impressive as graphics.

I remember reading about a demo, which I believe was from Matrox. They managed to get an audio 3D environment working over a pair of stereo headphones which was good enough that you could play Doom headless. Just close your eyes and you could tell where people were.

A lot of what I read about here is that prerecorded samples are good enough, in the same way that raster lighting is good enough and ray tracing is a waste of time.

Sharlin

Sure, that's what I meant by "once CD quality digitized audio became cheap enough to store". Both the fact that once games started to be shipped on CD, they could literally play audio tracks straight from the CD, and the fact that faster CPUs and better codecs made it feasible to ship compressed audio and decode it in realtime.

com2kid

> The main reason for their death, in my opinion, is the DRM-driven changes to Windows driver rules (although MS claims it wasn't because of DRM)

Creative drivers were a double-digit % of all Windows BSODs. Microsoft gave Creative plenty of time to fix their drivers; Creative never did, so sound drivers got booted from the kernel.


PaulKeeble

The best for competitive sound now is the Sennheiser GSX, which is an external USB DAC/amp. It has a good 7.1-to-headphones mode that gets you about the best surround sound on headphones for games/movies you can get; it impacts the tone the least and has one of the best HRTFs I have heard in years. But it pales in comparison to the cards we had 20 years ago. I miss my Aureal A3D.

Terr_

Ditto. I can't say how much of it is pure nostalgia, but I feel like Counter-Strike 1.x on my old Turtle Beach Montego II gave better positional sound than any game/hardware does nowadays.

scovetta

The differences may be all in my head, but I've been very happy with a USB Dragonfly DAC and a pair of quality headphones, along with high/master quality input.

wellthisisgreat

Is it better than Dolby Atmos you think?

PaulKeeble

In Overwatch it was, when I tried that a few years ago. While Atmos gave you some sense of vertical positioning, generally it wasn't correct and I struggled with positioning. Theoretically Atmos ought to be miles better; it's object-based like the sound cards of the early 2000s, but in practice they have got something wrong in the headphones implementation and positioning is hard to pick out. It's better than just stereo, but the positioning is a lot better on the Sennheiser device.

Whether Dolby Atmos has improved since then, or other games have implemented it better, I don't know. I feel like we probably need an open source middleware implementation for object sound to headphones/surround speakers to really fix the situation.

justsomehnguy

> I saw many people defending them claiming it was for better stability, avoiding blue screens and so on.

If you've never seen a system BSODing from the sound drivers - I'm glad for you. I've seen enough sound card driver crashes to tell you that it WAS a problem. Along with network cards, video cards, TV-tuner cards and almost anything that needed a driver.

> After that, the only use of sound cards became what the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.

Discrete sound cards became pointless because by 2001 almost every consumer motherboard had an AC'97-compatible audio codec on board.

So if you didn't need a super-extra-fidelity 5.1235435 sound system AND didn't want to shell out an additional ~$100 (SB Live! in 1999) or $200-300 (SB Audigy 2 in various variants, 2003) - you could just use the onboard one.

> having sound cards full of synthesizers that could create new audio on the fly.

NO THANKS: https://youtu.be/3AZI07_qts8?t=9

And this is a Creative card! I had my share of good synthesized music (because computers couldn't yet do proper digitized sound), but the tech should have died, and it did.

hakfoo

The sound market was always a mess.

Games ended up congealing around the Sound Blaster standard, which put Creative at the centre of the sound universe. Everyone else was always just "SB compatible", which meant they were playing for the "$10 placeholder sound card" market in the Pentium days. By the time we were getting onboard audio (I think my first board with it was Socket A), it was all hidden behind DirectX, and the market became the "90-cent placeholder chip soldered onto the mobo" market; then they were all undercut by Realtek.

Unfortunately, Creative is a mediocre steward of the premium-sound landscape. Their product matrix is complicated, support is all over the place, and the drivers are sketchy. I have an Audigy RX that I pulled out of circulation because it could crash two different boards (B550 and X570) and the general consensus was just "they're not that compatible."

I suppose that market technically didn't crush everyone else - there was still the pro audio market - but that had entirely different needs than a typical home enthusiast's. If you're building a studio PC to run specific studio software, you can put up with a narrow compatibility list and finickiness.

But it feels like there's a reasonable niche there for the "eager to throw money around" audiophile crowd. Cards full of high-markup capacitors and filters, so you can claim to offer cleaner power and a lower noise floor, seem tailor-made for unlocking those wallets. Where is that card? Although I suspect that for that audience, they just pipe stuff out via optical to an external DAC, because the inside of the PC must be full of RFI.

syntheweave

I think part of the issue is that once you get super picky about audio quality - whether from a producer or listener perspective - you also want a physical experience. You want a box with knobs and buttons on it, lots of I/O, wireless capabilities and whatever other features. The classic PC sound card wasn't that; it did make the PC play and record stuff, but it was positioned as a way for consumers to play games and for professionals to record demos (before taking it to a real recording studio). The professional digital recording systems were sold as whole systems, of which a PC could be one part, but always had a proprietary hardware element as well. [0]

For the masses, the high end today is mostly encompassed by a USB headphone DAC. Headphones get you high quality in a small form factor, and a headphone DAC doesn't need a lot of power or I/O. Once you go bigger, again, physical experience takes hold. People want their vinyl collections and so forth in their listening room, and thus where there's demand for digital, it's usually outside of the classic PC form factor too - it could be an iPhone and a Bluetooth speaker, or a dedicated receiver for the home theater setup. Going this route means it can (if built carefully) avoid crashes and updates interrupting the experience.

[0] e.g. early versions of Pro Tools https://www.pro-tools-expert.com/home-page/2018/3/27/a-brief...

pjlegato

Soundcard-like devices called "audio interfaces"[1] -- now usually USB breakout boxes -- are alive and well in the professional audio segment, targeted at musicians, recording studios, video editing shops, and similar applications.

They're not necessary for consumer applications. Consumer audio got "good enough" with mass-produced built-in motherboard "soundcard on a chip" parts that basically replicated the function of the old soundcards at a much lower price point.

If you want to, say, connect 16 microphones at once and record to 16 separate tracks, or you plan to apply a bunch of digital effects and therefore want a much higher sample rate than what your consumer audio chip can do, you can buy an audio interface.

[1] https://www.sweetwater.com/shop/studio-recording/audio-inter...

gonehome

Also required for fancy remote work from home microphone setups.

If you want the Shure SM7B you need an audio interface (and probably also a Cloudlifter or a Dynamite to bump up the gain).

Lots of streamers, podcasters, youtube people use them.

hunter2_

Actually, just an analog mic preamp (for the SM7B or other very-low-output mics: one with super low noise, or a Cloudlifter in between) or a compact mixer will get you up to line level, and then you're good to go with a motherboard sound interface if it has a line-input jack.

A better ADC (like in virtually any outboard audio interface) certainly doesn't hurt, and would definitely be advisable for musicians, but it's far from necessary for a YouTuber / home office use case.
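The gain-staging arithmetic behind this, with ballpark levels rather than datasheet values (a quiet dynamic mic on speech sits far below line level, and the preamp, or preamp plus in-line booster, must close that gap):

```python
# Roughly how much gain brings a quiet dynamic mic up to line level?
# All figures are approximate, illustrative levels.

mic_level_dbv = -59.0        # SM7B-class dynamic mic on speech, approx.
line_level_dbv = -10.0       # consumer line level
booster_gain_db = 25.0       # typical Cloudlifter-style in-line boost, approx.

needed_db = line_level_dbv - mic_level_dbv
print(f"total gain needed : {needed_db:.0f} dB")
print(f"left for the preamp with a booster : {needed_db - booster_gain_db:.0f} dB")
```

Around 50 dB of clean gain is a lot to ask of a cheap preamp, which is why the booster gets recommended; once the signal is at line level, even a motherboard line input is workable.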

lukego

I like the way you say "just" :)

My experience is that a high-end device like the MixPre II is fine with a dynamic mic. A Focusrite or Tascam US needs a Cloudlifter to get enough (clean) gain. Random stuff like the Blue Yeti is hopeless, with crackling noise.

So maybe you don't need a great ADC but you have to choose your audio interface very carefully (for its preamp.)

TremendousJudge

Another use is rendering virtual instruments with low latency, so you can play with a keyboard and render sounds live. Built in sound chips usually have unacceptable levels of latency, even with custom drivers.

rektide

It's been a while since I broke out and tuned JACK latency, but I think 2 periods of 3 ms worked fine on most devices I tried, even without basics like a preemptible kernel, on boring old ancient Intel chipsets. I'd expect most apps have no trouble getting under 16 ms on basically any x86 hardware.

I'd be shocked to find that the majority of these aftermarket devices do any better at all. Many are USB, which, even if you do have a fancy isochronous device, I'd still expect to be significantly slower than in-chipset or PCIe.

I could be totally off! But I don't think there are really specific chipset capabilities that make a big difference here (other than isochronous transfer, which only helps to counter the downgrade of using USB). Windows has ASIO for low latency, but is that a hardware capability thing? I think it's just drivers - modern tech like PipeWire gets many of the benefits for free by more closely mapping hardware resources to where apps can use them. I thought ASIO was mostly a product segmentation thing. It'd be neat to take a software virtual adapter like ASIO4ALL and see what kind of buffering really is required there, and see what latency that brings consumer gear down to.

I do also remember Android fighting to get its latency down to reasonable levels, which, counter to my point, suggests latency in general is somewhat hard. There are much fewer system resources there, and not missing a tick is harder to ensure, but IIRC the bigger issue was just that the modems and the regular audio subsystems had really funky audio driver paths that had been slapped together over a really long time, and some modernization was desperately needed. This was like... 7+ years ago, maybe?

squeaky-clean

> Windows has ASIO for low latency, but is that a hardware capability thing

Yeah, there are replacement ASIO drivers for the onboard sound, such as ASIO4ALL or FlexASIO. But I can agree with TremendousJudge that I've never gotten reasonably low latency with them, no matter my buffer size.

CoreAudio on Mac works just fine, however.

TremendousJudge

I can only speak for my own experience, but the cheapest Focusrite outperformed my built-in soundcard with ASIO4ALL drivers in both latency and quality. On the built-in, getting the latency lower than 12 ms resulted in noticeable glitches.

jamal-kumar

I came here to say this - USB sound cards are the kind of thing you can lease or rent out from music shops these days, which is pretty cool if you just need to DJ your cousin's wedding or something for a day on your laptop.

endorphine

Another use case is decent vinyl rips.

Godel_unicode

Digitally mastered audio laboriously printed to analog vinyl so it can be shipped to you and then ripped into digital format (and probably listened to on EarPods). Delightful.

Jedd

I get the point you're making, but you're assuming a great deal.

Vinyl records came out in the late 1940s. The first digital mastering was in 1979 -- and it was quite a while before that became the default. So there's a huge window of exclusively analog mastering.

And in both cases the likelihood that a random consumer would have access to the original (digital) master is negligible.

endorphine

So you're saying that, if a recording is digitally mastered, that negates any reason to press it to vinyl? If so, I would be interested in hearing more (I'm not very familiar with this stuff).

Besides that, I would love to have access to some music that was only released on vinyl and is too expensive for me. So if someone with the vinyl could do a decent rip and upload it somewhere, I'd love that. That's the point I'm trying to make.

jamal-kumar

People who spin music on discs professionally could A: travel with their expensive vinyl in a case to whatever locations there may be, including gigs in places like festivals in the jungle, where the heat will melt those expensive things, or B: be happy this is a solution :)

Back in the 90's in places like Goa the solution was using stuff like Sony's DAT format, or even minidiscs.

goolz

Some people prefer spinning plain vinyl rather than using a DVS + control plate.

nsxwolf

Pure digital audio and huge hard drives killed them. Back then we were chasing better and better synthesis - FM, wavetable... - because playing back CD-quality digital audio wasn't possible: you were lucky to have 23 kHz mono, or nothing at all; your hard drive was tiny and MP3 wasn't a thing yet. Every sound card upgrade was literally music to your ears.

Now every computer has a little chip that plays back at least CD quality audio from an infinite pool of storage and RAM. Nobody wants to hear MIDI in their games anymore. I'm not even sure what a better sound card could even do for me - reduce line noise, drive high impedance headphones or something. Boring!

Tangurena2

I had a girlfriend who was a musician and back in the late 90s, home recording was rare (but starting to become a thing). We found that the cheapest sound card from a music store had lower noise and better signal-to-noise ratio than even the best (or most expensive) sound card available at computer stores.

For games today, I use the audio interface that came with the computer. For dealing with my synthesizers, I use a Scarlett by Focusrite [0].

[0] - https://focusrite.com/en/scarlett

jasonwatkinspdx

Yeah, back in that era there were products from Ensoniq and such that weren't popular with PC gamers, but were totally solid hardware for music.

Spooky23

I worked at a CompUSA in high school, and remember a specific white box sound card was frequently sought after by musicians. IIRC it was $15-20

Sometimes they would ask to open the box to see the chips on the board.

I was all about 3D acceleration and didn’t have the money or interest to do anything with audio so I never learned much more about it.

Sharlin

I guess for a few years there might have been a market for hardware MP3 decoders. I faintly remember back when playing MP3s did take a sizeable chunk of CPU time. But after a Moore cycle or two the cost was negligible, and today Spotify probably spends much more CPU on drawing its interface than decoding the audio stream…

toast0

Not really on PCs, IMHO. With well optimized software, you could decode in realtime on an i486@100Mhz, a pentium 75 could use winamp and mIRC simultaneously, IIRC. If you didn't have enough CPU to decode mp3, it probably made more sense to upgrade that rather than a mp3 decoder card (if they existed). Mpeg-1/2 (video) decoder cards were more necessary.

dougmwne

Thanks for the flashback. I too remember when playing an MP3 took a good chunk of processor utilization, maybe 40%, and doing too many other things at the same time would cause it to stutter. I also remember when 1080p, and later 4K, video were each taxing in their day.

nsxwolf

And anything I can think of that I'd want a sound card to do these days is being handled by GPUs - like that NVidia AI noise cancellation, where your kids can scream 5 feet away and no one can hear them on Zoom.

therealplato

The only computer I've ever had with what I consider an acceptable level of line noise was a MacBook Pro.

Even a $15 USB-to-dual-3.5mm adapter sounded significantly better on other machines.

I experienced this annoying line noise on ASUS, MSI and Dell motherboards.

samatman

Audio production, I would claim, is the only niche the Macintosh has dominated continuously since the first Steve Jobs era.

Even desktop publishing took a turn toward Windows in the mid-to-late 90s; whether it stayed 'dominant' is a fuzzy question, but it was clearly losing market share.

But you won't find a professional music studio without a Mac, this has been true since the late 80s. This is not to say it can't be done with other equipment, just that as a matter of practice it isn't.

bentcorner

I've heard that having your analog audio lines in the same box as a bunch of high-frequency power circuitry is usually a recipe for line noise, which is why external DACs can typically provide a better experience.

I have a Sound Blaster something-or-other in my PC but it's not connected to anything these days. I use a tiny Apple headphone dongle that was ~$10, has 0 noise and sounds great.

gxqoz

My friend had a great sound card while I didn't. I remember being jealous of how good midis from vgmusic.com sounded on his machine compared to mine. Night and day.

nsxwolf

I used to call the Sierra 800 number and press a key to speak to a representative, just because the hold music consisted of selections of their in-game music played on the Roland MT-32. It sounded amazing compared to my Sound Blaster.

an_aparallel

Hi HN, long-time reader - first post I felt super compelled to respond to.

This is a massive bugaboo in the audio industry in my opinion.

I have always been a PCI soundcard user - and still am to this day - but industry trends are working against this. I think a big part of it is laptops/iPads and the like becoming more popular, as well as usability: companies optimise for successful adoption into a user's system rather than for technical specifications.

I started my DAW with a Terratec soundcard with midi + stereo audio ins and outs roughly 20 years ago.

Fast-forward 7 years - I bought an early USB interface, the NI Audio Kontrol 1, to use with a laptop. I could run everything on it - take it out and about - cool!!

Fast-forward another few years, and I got more serious about audio and bought a Lynx PCIe AES card (now without MIDI) to use with an Apogee Rosetta 800 (8 in / 8 out). Now we're getting there. But not an all-in-one solution.

In 2022 - surprisingly - the only (?) companies doing full PCIe audio solutions are Lynx and RME. In a fresh session in FL Studio or Ableton, with a sample buffer size of 64 (the lowest), I enjoy latency of 0.72m/s. This can't be beaten by USB. However, that's not a deal breaker for most people, sadly.

It greatly saddens me that audio in general is a second-class citizen with regard to tech advancement. It still blows my mind that the Atari STE, with MIDI built onto its board, still beats a brand-new fully specced blazing machine for MIDI timing tightness. We need more development of real-time OSes in the MIDI world.

akx

0.72m/s? I hope you mean 0.72 milliseconds, which sounds about right for 88.2 kHz and 64 samples.

Anyway, have you actually measured the _true_ latency from when your computer thinks it's sending out a signal to when it comes out of the speakers, and compared it between a (good) USB interface and your preferred PCIe solution? After all, I have an old Focusrite Scarlett 2i2 gen1 and I too can technically crank the buffer size down to 64 in FL Studio and post about it on the internet...

an_aparallel

I can't get 0.72 ms from my laptop, for example, and I haven't tested it in any official way - I have never gotten this type of performance from any other solution before this.

I've not tried Thunderbolt before - indeed it may be a good stand-in for PCI, as the comments below mention.

akx

I did my math again: a latency of 0.72 ms (0.00072 s) with a buffer size of 64 samples implies a sampling rate of about 89 kHz (64 / 0.00072). Are you sure you're not running at 88.2 kHz rather than the usual 44.1 kHz, where 64 samples would be about 1.45 ms?
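[Ed.: the buffer-latency arithmetic being debated in this subthread is just buffer size divided by sample rate. A quick back-of-the-envelope sketch in plain Python, using standard sampling rates:]

```python
# Latency of one audio buffer = samples in the buffer / samples per second.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return buffer_samples / sample_rate_hz * 1000.0

# A 64-sample buffer at common sampling rates:
print(buffer_latency_ms(64, 44100))  # ~1.45 ms
print(buffer_latency_ms(64, 88200))  # ~0.73 ms
print(buffer_latency_ms(64, 96000))  # ~0.67 ms
```

So a reported 0.72 ms at 64 samples is consistent with running the interface at 88.2 kHz.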

timc3

I moved over to Thunderbolt plus audio-over-Ethernet. This is where the growth is in the high end (Dante etc.). I think I have 60 channels of I/O and am thinking of adding another 40.

USB is good enough for a lot of people - though I am not a fan - so that covers the average prosumer.

MIDI is a totally different subject, but I can run MIDI clock from audio which is as tight as it gets these days. See USAMO from Expert Sleepers.

an_aparallel

The USAMO doesn't help with something like finger drumming/keys. I wish there was a reversed version of it - I believe it was talked about at some point - converting output into audio, decoded in the PC, for tighter timing.

te_chris

Thunderbolt is the way, the truth and the light - with M1. I have a PreSonus Quantum 2626 connected to my M1 Pro Mac over Thunderbolt and can work at a 32-sample buffer size in Ableton.

h2odragon

The onboard sound chips became good enough, and for those for whom they weren't good enough the noise reduction bonus of an external DAC was worth it anyway. Computers are generally bad for analog signals within a few inches of the case.

I think another factor is MP3 players and phone audio; people stopped using their computer as the (interface to) media source when other things took that function over for them.

hedgehog

There was a point of "good enough", and not long after that (2005ish?) essentially all motherboards started including onboard audio. I would imagine the market for separate audio cards shrank below sustainability at that point. Most everything else in the case (GPUs, RAM) is headed there too; it's just a matter of incremental development on the existing trajectory.

modeless

There are lots of complex reasons people are postulating here, but I think it's pretty simple. The CPU can do good enough audio rendering without hardware acceleration. The CPU can't do good enough graphics rendering without hardware acceleration. So video accelerators stayed, and sound accelerators died.

Add-on sound devices still exist, but they are simple because they don't include extensive hardware acceleration anything like what a GPU has. In fact, if you want hardware acceleration for audio processing algorithms today, like really fancy 3D sound propagation or something, GPUs would actually be great at that, and they support digital audio output too.

saltcured

Yes, I think this is about right. I see a lot of threads focusing on APIs and ignoring another thing that happened right around the late 1990s to early 2000s: MMX introduced SIMD to the PC platform. Suddenly real-time DSP algorithms were feasible for playback and synthesis on the host CPU instead of requiring a peripheral with an embedded coprocessor. This allowed soundcards to be refactored as "just" hardware IO channels, with other signal processing effects happening in the application.

At roughly the same time, there were more peripheral buses like USB and Firewire being introduced, which meant that an add-on peripheral did not need to be an internal ISA/PCI card in order to have sufficient bandwidth for rich audio streams. These external devices could also be built with lower noise/interference compared to the boards inside a computer.

And of course, silicon integration always increased so that the bundled onboard IO chip became good enough for many users. So, add-on peripherals had to move up market or into niche settings. That is a bit like how the iGPU in Intel CPUs got rid of the market for basic VGA/XGA/etc. graphics cards for office machines.
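[Ed.: the SIMD point above is easy to illustrate - software mixing of PCM streams is just vectorized arithmetic. A toy sketch, with NumPy standing in for hand-written MMX/SSE; the function and signal names are made up for illustration:]

```python
import numpy as np

def mix_tracks(tracks, gains):
    """Software-mix several int16 PCM tracks into one, with per-track gain.
    NumPy's vectorized arithmetic runs on the CPU's SIMD units, which is
    essentially what made host-side mixing cheap after MMX/SSE."""
    acc = np.zeros_like(tracks[0], dtype=np.float32)
    for track, gain in zip(tracks, gains):
        acc += track.astype(np.float32) * gain
    # Clip back into the int16 range before handing off to the DAC.
    return np.clip(acc, -32768, 32767).astype(np.int16)

# Two one-second 44.1 kHz test tones, mixed at half gain each.
t = np.arange(44100) / 44100.0
tone_a = (np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
tone_b = (np.sin(2 * np.pi * 660 * t) * 32767).astype(np.int16)
mixed = mix_tracks([tone_a, tone_b], [0.5, 0.5])
```

On a modern CPU this kind of per-buffer loop is effectively free, which is why the dedicated mixing hardware disappeared.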

kllrnohj

That's definitely missing some reasons. Back in the day, onboard audio was also just terrible: it hissed at all volumes and had similarly bad DACs, bad amps, and bad shielding. Sound cards mostly fixed all of that.

These days basic "audio correctness" is readily available from onboard audio, though. Motherboards have gotten much, much better at noise isolating the audio area, and dacs & amps have generally improved.

magicalhippo

This, coupled with noise. For those who don't care too much about noise, a cheap onboard DAC is perfectly fine.

For those that care about noise, you don't want the analog audio anywhere inside the case since it's a horribly noisy place. So you get an external DAC.

theevilsharpie

A couple of reasons:

- During the MS-DOS era, there wasn't really a standard API for sound, so using a cheap, off-brand sound chip (including anything that might be integrated) often meant compatibility problems. Even though it might not necessarily have offered the highest quality sound, Creative's Sound Blaster line was the gold standard for compatibility during this time. Standardized sound APIs have largely eliminated this issue.

- Throughout the '90s, music for games (and a number of other applications) was distributed as MIDI (or MIDI-like) instructions to be rendered by a synthesizer, and the quality of the music was very much dependent on the synthesizer used. The Roland Sound Canvas series was the gold standard at the time (in part due to its quality, and in part because that's what the composers themselves used), but it was very expensive and out of reach for the mass market. Software synthesizers were either too slow, or the quality sucked. That gave sound card manufacturers like Creative an opportunity to offer higher-quality hardware synthesizers on their cards than what cheap/integrated hardware could do. These days, most audio is PCM, and CPUs are perfectly capable of high-quality software sound synthesis, so hardware synthesis has become a non-issue and modern consumer sound hardware doesn't even have synthesis capabilities anymore.

- During the '00s, sound cards began to offer accelerated environmental and positional audio (e.g., Aureal3D, Creative EAX), which games quickly adopted to improve the sense of immersion. However, changes in the Windows audio architecture introduced with Windows Vista broke this functionality without a replacement. Advances in CPU hardware have since allowed this type of processing to be done on the CPU (e.g., XAudio 3D, OpenAL Soft) with acceptable performance.

In the current era, we do have dedicated soundcards, although not in the form of PCIe add-in boards. External DACs (either dedicated USB, or integrated into a display or AV receiver) are popular, as are the DACs used by wireless/USB headphones. Also, there has been some work done to utilize the computational capability of GPUs for real-time audio ray tracing.
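[Ed.: as a rough illustration of how trivial software synthesis became for a modern CPU, here is a minimal sketch - a bare sine "instrument" driven by MIDI note numbers. Real softsynths layer wavetables, envelopes and filters on top; the function names here are made up:]

```python
import numpy as np

def midi_note_to_hz(note: int) -> float:
    # Equal temperament: A4 (MIDI note 69) is 440 Hz.
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def render_note(note: int, dur_s: float = 0.5, rate: int = 44100) -> np.ndarray:
    """Render one MIDI note as a bare sine tone - the crudest possible
    software synthesizer."""
    t = np.arange(int(dur_s * rate)) / rate
    return np.sin(2.0 * np.pi * midi_note_to_hz(note) * t)

print(round(midi_note_to_hz(69)))     # 440 (A4)
print(round(midi_note_to_hz(60), 1))  # 261.6 (middle C)
```

Rendering a whole MIDI arrangement this way costs a modern CPU a negligible fraction of one core, which is exactly why the hardware synth chips vanished.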

ksec

On-Board was good enough and cheap enough. It was as simple as that. A lot of the Audio processing moved to the CPU. Dedicated Sound Processor Effect requires Gaming support.

There was also Aureal. Both Creative and Aureal had their own specific APIs to try to create a moat similar to Glide from 3dfx, but failed. And then Realtek took over.

Creative could have competed in onboard audio as well, but they were too worried about losing their Sound Blaster revenue, so they diverged into other things like GPUs (3DLabs), MP3 players, speakers, etc. And every single one of those failed.

If you are looking for modern Audio Engineering, you could look at PS5. But powerful DSP isn't exactly rocket science anymore. A lot of the improvement has to do with software.

Creative used to be the pride of Singapore. It is sad the company was badly managed and never made the leap to the next stage.

tenebrisalietum

AC'97 (1997) was the first blow - Intel's improvement on the de facto SB16 interface (and not compatible with it) - and it arrived around the time audio started being integrated into motherboards.

This is also around the time it started to be common for pre-built systems to integrate functionality into the motherboard, such as VGA, audio, USB, and in some cases even AGP video all as part of a chipset.

The peak of PC audio probably matches the peak of the "HTPC" wave that happened in the first half of the 2000's - PCs designed to be put under your TV and replace your stereo.

But also, laptops started getting cheaper and more popular as the late 90's turned into the 2000's and beyond - where integration of components was even more valued. Then smartphones started to take over in the 2010's.

The culture is different now. These days young people don't have stereos anymore; they might at best have a TV soundbar, some really good wireless speakers, or a couple of Bluetooth speakers, and the phone is the centerpiece of the personal audio experience now.

Hi-Fi that's not dedicated to making your car rattle or being blasted at 500 W-per-channel volume over a bar/club PA speaker is dead.

Desktop PCs are for businesses which need only good enough audio for business purposes, and gamers who probably want to spend money on a GPU over audio.

rickdeckard

Intel came out with AC'97 as a "good enough" onboard solution for audio, with standard drivers and all mainstream capabilities. No MIDI port, no fancy spatial audio, just good-enough stereo out and mic/line-in.

It forced the dedicated soundcard vendors to justify the add-on price by pushing features like multichannel, Surround sound codecs, hardware controls etc, but none of those features were of mainstream interest.

Total sales volume for dedicated soundcards dropped, economies of scale dropped, prices had to increase, pushing the products even further into a niche...

__david__

This is correct, but there's one other part - most of the cards used to have built-in MIDI synthesizers, and those became more or less obsolete once storage got past a certain point. Games on CD-ROMs could just ship Red Book-quality audio, which is infinitely more flexible than canned standard MIDI sounds. CPUs got fast enough that mixing multiple audio channels, and even running effects on them, wasn't taxing enough to warrant hardware acceleration, so the AC'97 standard of just a plain stereo DAC was really all anyone needed at that point.
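[Ed.: for scale, the Red Book point is easy to quantify, assuming the standard CD-DA parameters:]

```python
# Red Book CD audio: 44,100 samples/s per channel, 16-bit, stereo.
SAMPLE_RATE = 44100
BYTES_PER_SAMPLE = 2
CHANNELS = 2

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS  # 176,400 B/s
mb_per_minute = bytes_per_second * 60 / 1_000_000             # ~10.6 MB/min

print(bytes_per_second)         # 176400
print(round(mb_per_minute, 1))  # 10.6
```

About 10 MB per minute was hopeless for early-90s hard drives but trivial for a CD-ROM, which is why streamed audio displaced MIDI soundtracks.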

rickdeckard

Yeah, that's part of the "good enough" onboard audio.

But I doubt that gaming was the key driver at that point; it surely was the demand for work PCs to play the new .wav sound effects of Win95, which led to mediocre onboard audio with software MIDI synths. I vividly remember VIA onboard audio everywhere, with gamers still putting Sound Blasters in their PCs (fighting the BIOS to free up IRQs and DMA channels) for the better quality, and games still being developed for the "Sound Blaster 16".

Good times :)

echelon

> no fancy spatial audio

That's disappointing. Where can we go if we want spatial audio?

It feels like dedicated spatial audio hardware would be a huge boon to games, instead of calculating and modeling sound directionality on the CPU.

rickdeckard

Spatial Audio came back later (late 90s) in a standardized way with Microsoft DirectSound.

Before that, most spatial audio was either some very specific codec of the soundcard vendor a game had to explicitly support, or some psychoacoustic post-processing which added little benefit and was mostly there for the wow-effect in bundled demo apps...

I vaguely remember CreativeLabs pushing a custom 3D Sound API with "EAX" to justify their dedicated HW, with dedicated logo and everything, and several games supporting it.

It worked well for a while but gaming was a much smaller market back then so it probably wasn't sustainable to target a niche of an already small market...

It's the typical story of an industry where serving the mainstream funded development of the exceptional, and then someone came along to undercut it by providing only the mainstream needs.

rickdeckard

> That's disappointing. Where can we go if we want spatial audio?

You could build a Dolby Atmos PC setup for those handful of games which support it (history repeating).

Or you get yourself a PS5...

mmastrac

USB killed it. Keep your signal digital until you hit the speakers (or a short audio cable). No interference, 44.1kHz from end-to-end (or more).

If you don't like the DAC in the headphones, you can also find a high-quality USB DAC and use the audio cable from there.

tomnipotent

> you can also find a high-quality USB DAC

For anyone curious, check out Schiit Audio. I have the Magnius/Modius paired with Sennheiser HD660s headphones and I couldn't be happier. JDS Labs Atom is also a good choice.

jgauth

> No interference

This has not been my experience at all. For my 5" powered studio monitors, the _only_ way to get an interference-free signal from my desktop computer was with an optical cable to an external DAC.

squarefoot

As others replied, you may have grounding problems, that is, either the lack of grounding or too much of it (ground loop). An effective way to solve the problem is to isolate the signal by putting an audio transformer in between the outputs and the amplifier (or amplified speakers), one for each output. I've done this for desktops and laptops and it brought the noises to zero. Just make sure the transformers are of decent quality and suitable for audio.

wheels

That's probably a grounding problem, and going optical just happened to break the loop, since optical cables don't carry electrical signals and hence can't be part of ground loops.

jimmaswell

Can't go wrong with optical cables for this reason - they're foolproof against interference. That's why I'll always use them when possible.

linedash

You're giving me flashbacks to the stupid amount of time it took for me to identify a coil whine issue on my motherboard that gets worse when the CPU is in power saving mode.

I 'worked around' the issue for the longest time by leaving covid based processing going on folding@home, which would force my CPU into turbo mode.

Eventually I found out that if I disabled cstates, it mostly went away.

orangepurple

Hot glue on the coil should shut it up

dheshhdhshd

Can you please tell me what optical cable you used? I've struggled with this issue for years, and if an optical cable somewhere in the chain can fix my problem I'd love to buy one.

sigstoat

there's basically only one kind of optical cable used in consumer audio (S/PDIF over TOSLINK, or however you want to refer to it). the trick is arranging for both ends to use it.

vladvasiliu

GP probably has a sound card with optical output and speakers with optical input. This is usually called TOSLINK or S/PDIF (the latter may also refer to a coax link, which is wired and so won't help with ground loops).

The fiber is standard and you don't need any fancy Monster fiber costing $100 a foot.

orangepurple

You may have a ground loop and that needs to be fixed

anigbrowl

USB

If you're serious about audio you just plug a cable into your breakout box and have your interfaces, converters, and preamps there. Your sound hardware can be anything from pure I/O to an elaborate instrument under computer control. You can do audio synthesis and compositing on the CPU, the GPU (not so different from a DSP), or external hardware.

Soundcards are only 'gone' in the sense that PCI cards are less important because many people use laptops and the audio built into motherboards is more than Good Enough for everyday purposes.

timc3

Probably going to sound pedantic, but a breakout box is different from an external audio interface or convertor. Breakout box typically only converts from one socket to another socket, like this one: https://www.tascam.eu/en/bo-16dxout

External audio interface (sometimes wrongly called a sound card): https://www.rme-audio.de/fireface-ufx.html

Convertor: https://www.rme-audio.de/m-32-m-16-ad.html

Actually a sound card: https://www.rme-audio.de/hdspe-aio-pro.html which can utilise a breakout cable or external convertor
