Digital Drive Upsamplers, DACs, jitter, shakes and analogue withdrawals, this is it.
Original Message
RE: Mastered for iTunes Critic Used Flawed Test to "Say BS"
Posted by Tony Lauck on March 14, 2012 at 20:40:31:
"Hypothesis #2. To get an objective measure of correlation, I mixed aligned versions of all WAV/AAC pairs (one in-phase, the other phase-inverted) and calculated the RMS volume of the resulting difference wave. That is, on average, how audible the difference is between the WAV file and AAC file tested. Results below."
The use of RMS differences to evaluate the errors produced by complex encoders based on psycho-acoustic principles is invalid. It is akin to comparing distortion in amplifiers by a single "THD" number.
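For reference, the test quoted above can be sketched in a few lines. This is a hypothetical reconstruction, not the critic's actual code: mix a time-aligned WAV/AAC pair with one side phase-inverted (i.e. subtract them) and take the RMS level of the residual. The function names and the synthetic signals standing in for decoded files are illustrative.

```python
import numpy as np

def rms_dbfs(x: np.ndarray) -> float:
    """RMS level of a signal in dB relative to full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def difference_rms(wav: np.ndarray, aac_decoded: np.ndarray) -> float:
    """RMS of (wav - aac), i.e. the WAV mixed with the phase-inverted AAC.
    Assumes both arrays are time-aligned floats in [-1, 1]."""
    n = min(len(wav), len(aac_decoded))          # trim to the shorter file
    return rms_dbfs(wav[:n] - aac_decoded[:n])

# Synthetic stand-ins for a decoded WAV/AAC pair: a 1 kHz tone,
# and the same tone with low-level noise playing the role of codec error.
t = np.arange(44100) / 44100.0
original = 0.5 * np.sin(2 * np.pi * 1000 * t)
lossy = original + 0.001 * np.random.default_rng(0).standard_normal(len(t))
print(difference_rms(original, lossy))           # residual level in dBFS
```

Note what this measures and what it does not: the number comes out around -60 dBFS here, but because the codec shapes its error to hide under the signal spectrally and temporally, an equal-RMS residual can be anywhere from inaudible to obvious. That is precisely the objection above.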
There are two main suggestions in Apple's recommendation: keep levels safely below maximum, and feed the encoder 24 bit input. Both have been known for some time, at least by those engineers who would rather avoid having anything to do with AAC or MP3 in the first place. The errors caused by overloaded encoders and decoders can be gross. The improvements gained by encoding from 24 bit audio are subtle, more akin to the difference between 24 bit and 16 bit uncompressed PCM at 44.1 kHz. Even if both suggestions are followed the results will still be rotten; it's just a question of the degree of stench. If Apple were really interested in sound quality they would be offering lossless downloads in the iTunes store.