arcnets
Hi all,
when you store music as a WAV file (CD quality), it takes ~10 MB per minute. If you compress it to MP3, even at the highest quality setting, you only get ~1 MB per minute.
I know that the file size is determined by amplitude resolution * sample rate * number of channels * duration. If you use 16 bit, 44.1 kHz, and stereo, that works out to ~10 MB per minute as I said.
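The arithmetic above can be checked directly (this is just the uncompressed PCM data rate; the small WAV header is ignored):

```python
# Back-of-envelope check of the CD-quality data rate (no compression).
sample_rate = 44_100       # samples per second, per channel
bits_per_sample = 16       # amplitude resolution
channels = 2               # stereo
seconds = 60               # one minute

total_bits = sample_rate * bits_per_sample * channels * seconds
megabytes = total_bits / 8 / 1_000_000
print(f"{megabytes:.2f} MB per minute")  # prints "10.58 MB per minute"
```

So "~10 MB per minute" is right on the money.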
The Shannon sampling theorem says that, in order to digitize an analog signal with full fidelity, the sample rate must be at least twice the highest frequency present. So you cannot throw away any high-frequency Fourier components, or the sound will be degraded.
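A quick way to see why the factor of two matters: a tone above half the sample rate produces exactly the same samples (up to sign) as a lower-frequency tone, so the two are indistinguishable after sampling. The numbers below are made up for illustration:

```python
import math

# Aliasing demo: at sample rate fs, a 5000 Hz tone is above the
# Nyquist frequency fs/2 = 4000 Hz, so it "folds" onto fs - 5000 = 3000 Hz.
fs = 8000              # hypothetical sample rate (Hz)
f_high = 5000          # above Nyquist
f_alias = fs - f_high  # 3000 Hz, the folded frequency

for n in range(10):
    t = n / fs
    s_high = math.sin(2 * math.pi * f_high * t)
    s_alias = math.sin(2 * math.pi * f_alias * t)
    # The sample streams are identical up to sign:
    # sin(2*pi*5000*t) = -sin(2*pi*3000*t) at every sample instant.
    assert abs(s_high + s_alias) < 1e-9
```

That is why CD audio uses 44.1 kHz: it comfortably covers the ~20 kHz upper limit of human hearing.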
My question is: How can MP3 compression produce sound almost as good as the original while throwing away ~90% of the information?
Does anyone know a simple explanation?
Thanks...