Lossy codecs try to find an approximate signal that sounds as close as possible to the original, but is easier to compress. They drop parts of the signal entirely, or reduce their resolution, using models of human hearing to identify the parts least likely to be noticed if they're missing. E.g. not all frequencies can be heard equally well, so quality is reduced on the ones that are heard less clearly anyway. And a loud signal at one frequency can make signals at another frequency, or following quickly after it, harder to perceive.
Most such audio codecs are based to some degree on variants of Fourier transforms, so this modification is done by dropping, or reducing the resolution of, parts of the transform output.
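A toy sketch of that idea in plain Python, using a naive DFT. This is nothing like a real codec's filterbank or psychoacoustic model, and the 5% cutoff is invented for illustration; it just shows "transform, discard the quiet bins, transform back":

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform, O(N^2) -- fine for a demo.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def lossy_roundtrip(x, cutoff=0.05):
    # Zero any bin quieter than `cutoff` times the loudest bin.
    # A real codec would quantize bins more or less coarsely based on a
    # hearing model, rather than using a single hard threshold like this.
    X = dft(x)
    loudest = max(abs(c) for c in X)
    X_pruned = [c if abs(c) >= cutoff * loudest else 0 for c in X]
    return idft(X_pruned)

# A loud low tone plus a faint high tone: the faint one gets discarded,
# and the reconstruction is only off by the faint tone's amplitude (~0.01).
n = 64
signal = [math.sin(2 * math.pi * 4 * t / n) + 0.01 * math.sin(2 * math.pi * 20 * t / n)
          for t in range(n)]
approx = lossy_roundtrip(signal)
```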
When you listen to music there's a lot of "fine detail" that you can't really hear that gets buried by louder sounds.
CD audio is perfectly lossless - it encodes the signal that you put in by measuring a voltage 44100 times per second and recording that exactly. When you play it back you get exactly the same signal back out. The only problem is, this takes up a lot of space, roughly 10MB per minute for stereo audio.
MP3 audio is lossy in that rather than storing the exact values of a waveform, it stores a description of how short segments of the waveform change. The higher the bitrate, the better the description, and the more detailed the reproduction. A low bitrate MP3 is like trying to redraw the original waveform from a vague description with a paint roller; a high bitrate MP3 is like drawing it with a mapping pen from a really detailed description.
FLAC audio is lossless because it takes the precise values of the audio, and uses a technique similar to zip files to find similar-looking blocks of data. Think in terms of having a one-second silence recorded as "Zero, then 44099 more of 'em" rather than "zero zero zero zero zero..." and so on 44100 times.
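A minimal run-length sketch of that silence example. To be clear, FLAC actually uses linear prediction plus Rice coding rather than plain RLE, but the idea of describing repetition instead of storing it is the same:

```python
def rle_encode(samples):
    # Collapse repeats into (value, count) pairs.
    runs = []
    for s in samples:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1
        else:
            runs.append([s, 1])
    return runs

def rle_decode(runs):
    return [value for value, count in runs for _ in range(count)]

# One second of silence: one pair instead of 44,100 zeros.
one_second_of_silence = [0] * 44_100
encoded = rle_encode(one_second_of_silence)
```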
> CD audio is perfectly lossless - it encodes the signal that you put in by measuring a voltage 44100 times per second and recording that exactly. When you play it back you get exactly the same signal back out.
Not at all! CD audio amplitude is quantized to 16 bits, and temporally sampled at 44100 Hz as you say. Certainly very high quality, but there's absolutely a loss (that nobody can really hear).
I know that when I was about 10yo, I could hear a tiny suggestion of something audible when I tested my hearing at around 23-24kHz. I guess that if you had some really loud content around these frequencies and had equipment that could reproduce it perfectly, it would not be impossible for it to influence my listening experience those many years ago :)
Although there's also a difference between being able to hear a single tone, and being able to reliably perceive a difference in some more complex bit of noise.
Testing with https://www.audiocheck.net/blindtests_frequency.php, my hearing limit in the white noise case for example is at least 1 to 2 kHz lower than my hearing limit for a single-frequency sine tone.
Oh absolutely, hence the words stressed in italics. It isn't going to make any meaningful difference even for a 10yo with perfect hearing - for listening, a 44100Hz sample rate is more than enough, and 48000Hz provides enough headroom to let any semi-reasonable "audiophile" sleep soundly.
I used to be able to locate the computer section of WH Smiths or John Menzies in the 1980s and go and play on the ZX Spectrums and Commodore 64s (and other oddities - the one in Arbroath had a couple of Memotech MTX512s!) by hearing the 15kHz scan line whistle from the CRT tellies they used for displays.
Lossless to all practical purposes, then. With 16 bits of amplitude quantisation the smallest bit is below the noise floor of all but the quietest possible amplification, and while 44.1kHz isn't a lot it still places the corner frequency of your antialiasing filter comfortably above human hearing.
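The numbers behind that: 16-bit quantization gives about 6.02 dB of dynamic range per bit, and the Nyquist frequency sits at half the sample rate, above the ~20 kHz ceiling of human hearing:

```python
import math

# Dynamic range of 16-bit quantization: 20*log10(2^16) ~= 96.3 dB,
# which puts the smallest bit well below realistic listening noise floors.
dynamic_range_db = 20 * math.log10(2 ** 16)

# Nyquist: the highest frequency 44.1 kHz sampling can represent.
nyquist_hz = 44_100 / 2   # 22,050 Hz, above the ~20 kHz limit of hearing
```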
It's got way more audio bandwidth than most of the analogue masters that everyone raves about.
No, there's nothing magically audible happening with phase shifts near the steep cutoff of the antialiasing filter that isn't happening with the gentle rolloff of tape, either. Not that you could hear anyway, and even though my hearing is better than most 48-year-old industrial music enthusiasts, not that I could hear either.
For cats, even your very best equipment with perfect reproduction sounds like AM radio because their hearing tops out at 80kHz.
Just like any general-purpose compression: everything. FLAC could be used just like zip/zlib/gzip as a general-purpose compressor - it just wouldn't compress as well on most data that isn't audio.
> dropped when "LOSS" occurs?
Lossy compression generally employs some type of perceptual coding, where the signal data is reorganized and sorted according to its perceptual importance. This partly involves removing or reducing the density of signal in the higher frequencies, but it also exploits masking, both temporal - our inability to perceive a quieter signal of similar frequency occurring close in time to a louder one - and frequency masking - our inability to hear quieter signals that are close in frequency to a louder one.
The key point is masking: if a sound at a given frequency is loud enough, you are less likely to perceive weaker sounds at other frequencies. So there is little point in wasting bits on those frequencies during the time intervals where they are being swamped.
The exact frequency/amplitude relationships where masking effects come into play were studied by the telecoms early on (meaning in the 1950s-1960s era), and are still a key part of most lossy encoding models these days.
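A crude simultaneous-masking sketch of that idea. The 10 dB-per-bin spread and the 0.1 factor are invented for illustration; real encoders use measured spreading functions on a Bark-frequency scale:

```python
def masking_thresholds(amplitudes, spread_db_per_bin=10):
    # Each bin casts a masking threshold over its neighbors that decays
    # by `spread_db_per_bin` per bin of distance. These constants are
    # made up; only the shape of the idea matches real psychoacoustic models.
    n = len(amplitudes)
    thresholds = [0.0] * n
    for i, a in enumerate(amplitudes):
        for j in range(n):
            decay = 10 ** (-spread_db_per_bin * abs(i - j) / 20)
            thresholds[j] = max(thresholds[j], 0.1 * a * decay)
    return thresholds

def audible(amplitudes):
    # A bin is worth spending bits on only if it rises above the
    # masking threshold cast by its (louder) neighbors.
    t = masking_thresholds(amplitudes)
    return [a > th for a, th in zip(amplitudes, t)]

amps = [0.0] * 10
amps[3] = 1.0    # loud tone
amps[4] = 0.02   # quiet and right next to it: masked
amps[8] = 0.02   # same level, far away in frequency: still audible
flags = audible(amps)
```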
From my understanding (I took two relevant undergrad courses, definitely not an expert), lossy compression can involve either losing certain frequencies entirely (like those above/below human hearing) or losing accuracy in the reproduction (e.g. a given frequency component might come out the tiniest bit louder/quieter or higher/lower than it was originally).