
I figured it had something to do with the application of digital filters. There are only two samples per cycle of a source at half the sampling rate, so it should be possible to represent a frequency at exactly half the sampling rate, but I can't see how it could accurately represent a frequency just a few Hz below half the sampling rate. Notice I'm using the word frequency: a sine wave with a frequency of 22,050 Hz encoded at 16-bit 44.1 kHz is not going to look anything like a sine wave.


Right, 22,050 Hz looks exactly like a triangle wave on a computer screen. The thing is that a triangle wave is composed of a fundamental sine wave (22 kHz) and a series of ascending odd harmonics above it. So after the filters nix everything above 22 kHz, it looks exactly like a sine wave on a scope.
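For what it's worth, this is easy to see numerically. A quick numpy sketch (the phase offset is mine, chosen purely for illustration): a tone at exactly the Nyquist frequency yields samples that simply alternate in sign, which a screen drawing straight lines between samples renders as a triangle wave.

```python
import numpy as np

fs = 44100            # sampling rate
f = fs / 2            # tone at exactly the Nyquist frequency (22050 Hz)
n = np.arange(8)      # eight consecutive sample indices
x = np.sin(2 * np.pi * f * n / fs + np.pi / 4)  # 45-degree phase, illustrative

# sin(n*pi + pi/4) just alternates between +sin(pi/4) and -sin(pi/4);
# drawn with straight segments those points trace a triangle wave,
# but the only band-limited signal through them is a pure sine
print(np.round(x, 4))  # [ 0.7071 -0.7071  0.7071 -0.7071 ...]
```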

So you're right that you lose information as you go higher in frequency, but there is also commensurately less need for information to recreate it precisely, because the filters remove the detail anyway (and if not the filters, then the human ear).
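And the "detail" below Nyquist really is recoverable, ugly samples or not. A sketch of ideal band-limited (Whittaker-Shannon sinc) reconstruction, with a tone and truncation window I picked arbitrarily: the raw samples of a 21 kHz tone look ragged, but the interpolation through them reproduces the sine almost exactly.

```python
import numpy as np

fs = 44100.0
f = 21000.0                   # just below the 22050 Hz Nyquist limit
n = np.arange(-2000, 2000)    # finite window of sample indices
x = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(t - n),
# evaluated halfway between two samples (t in sample units)
t = 0.5
y = np.sum(x * np.sinc(t - n))
exact = np.sin(2 * np.pi * f * t / fs)
print(abs(y - exact))         # tiny: only truncation error, no lost detail
```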


Sure, 22,050 Hz and 11,025 Hz get smoothed out into perfect sine waves, and the human ear can't hear 22 kHz anyway. But the Nyquist frequency isn't some magical threshold that you cross and suddenly everything is perfectly preserved. It's a folding frequency that determines where aliasing is going to occur, or rather where it's not going to occur. A 44.1 kHz sampling rate is based roughly on western tuning (440 Hz A) and makes no attempt to accurately capture sounds and frequencies that are not tuned to western music. As you move toward the folding frequency, there are frequencies in the human audible range that cannot be represented, so they're discarded or attenuated by anti-aliasing filters. As far as I'm concerned, CD audio is outdated tech that most of the world just doesn't care enough to drop. It's ridiculous in a world of 5K retina screens that people can't see the value in higher resolution audio.
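On the folding behavior specifically, it's easy to demonstrate why the anti-aliasing filter has to be there: a tone above Nyquist produces exactly the same samples as its mirror image below Nyquist. A sketch (frequencies chosen arbitrarily):

```python
import numpy as np

fs = 44100.0
f_hi = 30000.0             # above the 22050 Hz Nyquist frequency
f_fold = fs - f_hi         # its mirror image below Nyquist: 14100 Hz
n = np.arange(16)

a = np.cos(2 * np.pi * f_hi * n / fs)
b = np.cos(2 * np.pi * f_fold * n / fs)
print(np.allclose(a, b))   # True: the sampler cannot tell the two tones apart
```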


>It's ridiculous in a world of 5K retina screens that people can't see the value in higher resolution audio.

Not if they can't hear the value in higher resolution audio. For many people the only difference in HD audio over 16-bit 44.1 kHz is that the files are bigger. If someone can't hear the difference, it's no surprise that they don't care to move to a new format.

The screen analogy isn't perfect as most people can still readily tell the difference between an HD image and a significantly lower resolution one. (Though yeah, we're getting closer to pixel densities surpassing people's ability to resolve pixels as well, provided they're not putting their nose to the screen. It won't be too long now.)


There's a difference between not being able to hear and not knowing what to listen for. Listen to the highs on a well tuned hi-fi system and you can hear the difference between CD and SACD. Listen for the sound of a singer taking a breath or the slow ring of a cymbal. If you've never heard these things in person to begin with then you're at a disadvantage trying to hear how badly they are represented in recording technology from the 1980s.

Don't argue from a position of ignorance. Make friends with a recording engineer and have them play you a 32-bit mix followed by a 16-bit mixdown.


Yes, and also remember that as you approach the Nyquist frequency, the ability to encode phase is lost. People always talk about sampling capturing different frequencies, but they forget to think about phase.
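This one is easy to check numerically. At exactly the Nyquist frequency every sample is sin(n·π + φ) = ±sin(φ), so amplitude and phase collapse into a single number. A sketch (the particular amplitudes and phases are mine, for illustration):

```python
import numpy as np

fs = 44100
nyq = fs / 2
n = np.arange(8)

# full-amplitude tone at 30 degrees vs half-amplitude tone at 90 degrees
a = 1.0 * np.sin(2 * np.pi * nyq * n / fs + np.pi / 6)
b = 0.5 * np.sin(2 * np.pi * nyq * n / fs + np.pi / 2)
print(np.allclose(a, b))  # True: amplitude and phase are conflated at Nyquist
```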


Which gets back to your original point about both sample rate and bit depth contributing to the dynamic range. I'm very curious: can you give me some search keywords that would get me to the math behind that? Also, I've never heard the claim that the 44.1kHz rate biases toward 440Hz tuning, is there somewhere I can read more about that?


pulse code modulation, pulse density modulation...

44.1 kHz / 100 = 441 Hz. But that's nonsense in the same way that saying a signal at the Nyquist frequency can be accurately encoded is nonsense. @diroussle pointed out that you lose phase, but there's another consideration: sync. If your signal at Nyquist is not in sync with the sampling frequency, then it's going to be represented as a signal offset, an out-of-phase line.
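The worst case of that sync problem is easy to show: a Nyquist-frequency tone whose zero crossings happen to line up with the sample instants produces nothing but zeros. A sketch:

```python
import numpy as np

fs = 44100
n = np.arange(8)
# Nyquist tone with its zero crossings landing exactly on the samples
x = np.sin(2 * np.pi * (fs / 2) * n / fs)
print(np.abs(x).max() < 1e-12)  # True: the tone is captured as pure silence
```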


I understand PCM, 1-bit DACs, et al. What I'd love to see are some equations relating the three quantities of bit depth, sampling frequency, and distortion. It turns out to be very hard to Google.
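For the bit-depth half of that, one standard relation (not from this thread, just the textbook result for an ideally dithered, full-scale sine quantized to N bits) is SNR ≈ 6.02·N + 1.76 dB; useful search keywords are "quantization noise" and "6 dB per bit". A sketch:

```python
# textbook quantization-noise figure for an N-bit full-scale sine wave:
#   SNR_dB ~= 6.02 * N + 1.76
# (search keywords: "quantization noise", "6 dB per bit rule")
def snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(round(snr_db(16), 2))  # 98.08  -> CD audio's theoretical dynamic range
print(round(snr_db(24), 2))  # 146.24 -> 24-bit
```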

In any case, thanks for your patience, I'm glad to have cause to reconsider my position on this topic.



