Audio Asylum Thread Printer
In Reply to: RE: Boston Audio Society Strikes Again! posted by Charles Hansen on September 11, 2007 at 09:44:35
>Brad Meyer and David Moran of the Boston Audio Society have (once
>again) proven that we are all deaf.
In my September "As We See It" essay, reprinted this morning at
www.stereophile.com/asweseeit/907awsi, I report the results
of blind tests performed at McGill University that came to a different
conclusion from the Meyer/Moran findings: "To achieve a higher degree of
fidelity to the live analog reference, we need to convert audio using a
high sampling rate even when we do not use microphones and loudspeakers
having bandwidth extended far beyond 20kHz. Listeners judge high
sampling conversion as sounding more like the analog reference when
listening to standard audio bandwidth."
The tests were performed by Wieslaw Woszczyk and John Usher of McGill
University, Jan Engel of the Centre for Quantitative Methods, Ronald
Aarts and Derk Reefman of Philips. Their conclusions were contained in a
paper presented at the 31st International AES Conference, held in
London, in June 2007, "Which of Two Digital Audio Systems Meets Best
with the Analog System?", and reprinted in the Proceedings of the
Conference.
John Atkinson
Editor, Stereophile
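For context on the two rates being compared (general background, not taken from the paper or this thread): a PCM system can represent audio only up to half its sampling rate, the Nyquist limit, which is why 44.1kHz corresponds to "standard audio bandwidth" while 352.8kHz leaves room far beyond 20kHz. A minimal sketch of that arithmetic:

# Nyquist limit: usable audio bandwidth is at most half the sampling rate
for fs_hz in (44_100, 352_800):
    print(f"{fs_hz / 1000:.1f} kHz sampling -> up to {fs_hz / 2000:.2f} kHz audio bandwidth")

# 44.1 kHz sampling -> up to 22.05 kHz audio bandwidth
# 352.8 kHz sampling -> up to 176.40 kHz audio bandwidth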
Follow Ups:
John: I've read the non-refereed preprint you refer to. Though many of their methods are interesting and they went to a lot of trouble, I don't believe the data support the authors' conclusions. They asked their subjects to tell them which sounded more like a multichannel analog feed: a band-limited codec with 44.1kHz sampling, or another with 352.8kHz. By a small (and statistically insignificant) margin, the subjects thought the 44.1k codec was truer to the source. There were also separate tests with two transmission channels (two sets of microphones and speakers), one with response to 100kHz and the other band-limited to 20kHz. Again, through unsurprising random statistical variance, the subjects chose the 352.8k codec *less* often, and the 44.1k codec more often, when there was >20kHz audio in the source.
The authors assume (and say so in the paper) that there has to be an audible difference between the codecs, which leads them into a tortured and illogical explanation of how this could have happened. They posit that the ultrasonic material somehow sounded bad, so subjects chose the 44.1k codec when it was present -- but subjects were asked only to say which was more like the (high-bandwidth) source, not which one they liked. So the whole argument kind of collapses in a heap at that point. -- Brad
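To illustrate what a "statistically insignificant margin" means in a two-alternative test like this, here is a small exact binomial test against chance (50/50) responding. The trial counts below are hypothetical, purely for illustration; the paper's actual numbers are not reproduced in this thread, and the helper function is written just for this sketch.

from math import comb

def two_sided_binomial_p(k, n, p=0.5):
    """Exact two-sided binomial test: total probability of outcomes
    at least as unlikely as observing k successes in n trials."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    observed = probs[k]
    return sum(pr for pr in probs if pr <= observed + 1e-12)

# Hypothetical counts: listeners pick the 44.1k codec as "more like the
# analog source" in 54 of 100 trials. That margin over 50/50 is nowhere
# near significance at the usual 0.05 level.
print(f"p = {two_sided_binomial_p(54, 100):.2f}")  # ~0.48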
Don't bother - Stereophile doesn't pay any attention to facts when it comes to these issues. They're ad-based, not reality-based.
Truthseeker:
Thanks for the advice. Whatever the magazine *per se* may do, I have had at least a nodding acquaintance with John A. over the years, and wanted to give him, or anyone else reading over our shoulders, my take on the paper, which was intriguing but has to be read carefully. For example, they invented a sound source for the test, which was a series of amazing contraptions (photos are included) creating a quasi-repetitive noise of complex character, in order to exercise the > 20k bandwidth of their system while removing the usual musical syntax from the subjects' experience. I'd love to hear a sample of that source. Brian Eno probably would too. -- E. Brad
Hard to believe there is even a debate about whether recording engineers or other people who deal with recorded music on a daily basis embrace high rez over low. Absolutely unbelievable. But I guess our non-experiential objectivist friends will believe anything they read, as long as it fits their agenda.