Audio Asylum Thread Printer: Get a view of an entire thread on one page
In Reply to: It's called "peer review", Jit posted by E-Stat on December 04, 2002 at 18:00:48:
"You posted a link about a guy testing the audibility of phase who used some mediocre pro speakers as his 'reference'"

Huh? Please refresh my memory on this. The only link coming close is one by a fellow (Scot's Guide) who discovered a flaw in another person's test, where they used a different load impedance to change the power level instead of increasing the amplifier output.
He never ran any test, but rather did a mathematical analysis to show what was measured by the other party.
Follow Ups:
"Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence." --John Adams

I agree with this quote found on another site you referenced, "The Scientific Art of Audio". True, except for when the evidence does not support the alleged "facts". I particularly like the section on "The truth about esoteric power cords". Gee, those pics do offer some compelling "facts"! The phase test is linked from here as well.
http://www.fortunecity.com/skyscraper/motorola/145/
rw
If you have a problem with the work, write to the author/school and see what response you get.
I have far better ways to spend my time than to debate with an inept grad student. Like enjoying music.

Daisuke needs to spend a couple of years listening to music on what are truly state of the art audio reproduction systems before he has any understanding of what he is trying to prove. He is at present light years away from that reality.
rw
Yeah, right.
you have not had the good fortune to hear a system where such claims of inaudibility are rendered moot.

Time to fall back on the "scientific mantle" that cites no sources nor reveals any experience with truly SOTA equipment. Let the untrained ear guide your way. What does Motorola have to do with high end audio anyway? The rigors of family radio reception?
rw
It's truly amazing that you think anecdotal evidence is credible.
Not to mention subjective opinion.
Since no component is perfect, one necessarily has to make choices based on preference. My preferences have themselves evolved over a long period of time. The fact that differences exist is not the question for experienced listeners using high resolution systems. The question is which differences are most important to any one listener. Is image specificity more important than harmonic integrity? Is stellar bass response more important than dynamic accuracy? There is certainly no one answer.

So when one speaks of the clearly audible differences among components (using musical content, not static test tones) where the numbers are not supported by the observational data, which conclusion do you draw?
1. The consistently perceived differences among a large sample of folks (thousands of high end component customers) are completely attributable to merely psychological factors.
OR
2. The testing methodology does not mimic the musical reproduction process closely enough to be relevant. You pointed out the challenge requiring powerful supercomputers to handle testing using complex musical content (which, by the way, is how audio equipment is used by the public). Likewise, you have identified the other challenges of conducting DBT tests offered by jj. Your solution? Dumb down the tests to make them more convenient. The result of the simplification process is to make the tests largely meaningless.
Perhaps there are some valid DBT tests that do not contain the flagrant flaws evidenced in the links posted thus far. We're all waiting to see them!
rw
"Your solution? Dumb down the tests to make them more convenient."

I certainly don't know where you get this. There is never any convenient method of handling the human factor in these types of tests.
As for the link, the phase response of the speakers would have been nice to know, but every speaker has some phase/group delay anomaly. As long as the test was conducted with the exact same set of speakers in the exact same environment, then the "flaws" are consistent throughout the testing and therefore have a minimal impact. Had either the testing environment or the speakers changed, even swapping them from left to right, then you would have introduced a variable that would have to be separately accounted for.
As to the quality of the speakers, that is merely a subjective opinion on your part, not to mention your comments regarding what the author should be doing with his time. Whether you want to believe it or not, those "mediocre" speakers have very likely been used in many, many studios to make mixdown choices.
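For readers unfamiliar with the term, group delay is simply the negative derivative of a system's phase response with respect to angular frequency. A minimal numeric sketch, using a textbook first-order allpass filter rather than any speaker discussed in this thread (the coefficient a = 0.5 is an arbitrary illustration), shows how a component can pass every frequency at equal level while still delaying different frequencies by different amounts:

```python
import cmath
import math

def group_delay(h, w, dw=1e-6):
    """Numerical group delay at frequency w: -d(phase)/d(omega),
    estimated by a centered finite difference on the phase of h."""
    dp = cmath.phase(h(w + dw)) - cmath.phase(h(w - dw))
    # Unwrap the phase step if it crossed the +/- pi branch cut
    while dp > math.pi:
        dp -= 2 * math.pi
    while dp < -math.pi:
        dp += 2 * math.pi
    return -dp / (2 * dw)

# First-order allpass H(z) = (a + z^-1) / (1 + a*z^-1):
# unity magnitude everywhere, but frequency-dependent delay.
a = 0.5
def allpass(w):
    z = cmath.exp(1j * w)
    return (a + 1 / z) / (1 + a / z)

for w in (0.1, 1.0, 2.0):
    print(f"omega={w:.1f}  |H|={abs(allpass(w)):.3f}  "
          f"delay={group_delay(allpass, w):.3f} samples")
```

The magnitude stays at 1.000 for every frequency while the delay varies, which is exactly the kind of "anomaly" at issue: holding the same speakers in the same room across all trials keeps that frequency-dependent delay as a fixed, shared characteristic rather than a per-trial variable.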
Here are five such examples quoted from your posts.

DUMB DOWN # 1 - Make the tests easier to conduct, not more accurate.
When asked about running multiple subjects simultaneously, you said:
You'll never get a satisfactory statistical result without enough participants. You'll never be able to run enough participants in your lifetime without some other methodology.
So let's not bother with considering the problems with this approach, shall we? We'll go with quantity instead of quality.
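As an aside, the sample-size point both posters are circling can be made concrete. Under the standard null hypothesis that a listener is simply guessing, an ABX score is judged against a binomial distribution with p = 0.5 per trial. A short sketch (the trial counts below are hypothetical, not taken from any test cited in this thread):

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided p-value for an ABX run: the probability of scoring
    at least `correct` out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(round(abx_p_value(5, 6), 3))    # 0.109 (not significant at 0.05)
print(round(abx_p_value(12, 16), 4))  # 0.0384 (significant at 0.05)
```

With only a handful of trials, even a listener who genuinely hears a difference will often fail to reach significance, and a guesser's occasional lucky run is hard to rule out; that is why both the number of trials per subject and the number of participants matter to either side of this argument.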
DUMB DOWN # 2 - Utilize simplistic static tones as test criteria
You also have to understand that the simpler a test the better... It is quite possible to make a test signal so complex that a room full of Cray computers can't properly analyse it.
Fine. Those tests are entirely relevant for those souls who spend their leisure time listening to test tones. Does that fully characterize the performance differences when reproducing musical content on high resolution gear? I think not.
DUMB DOWN # 3 - Use testers unable to discern fine differences
This one is simply amazing:
All the tests I've participated in did not require training.
I guess it all depends upon what you are trying to prove. If you are trying to determine the audible characteristics of a family radio with the general populace, then this probably works fine. If, on the other hand, you are trying to establish the ultimate audible effects of the highest resolution audio equipment, then you must have trained listeners.
Do you know how the suspension designs of exotic performance cars are finalized? While the initial modeling is done by computer they then create prototypes and perform exhaustive experiments using professional test drivers. I can imagine the laughter by Ferrari or Porsche, etc. if you were to suggest that these changes be dialed in by some guy off the street.
DUMB DOWN # 4 - Use completely unfamiliar musical material as the comparative reference
This one came as a surprise:
I suppose you also think that it is okay for the participant to be familiar with the test material,
Absolutely! Here again, what is your motive? It takes me a couple of weeks to fully grasp the musical content of a new album and explore all the nuances of a fine recording. Flash uncontrolled sections of new material at someone and indeed they are not going to hear a lot of differences.
DUMB DOWN # 5 - Use mediocre equipment that is not "the most accurate audio equipment available"
As long as the test was conducted with the exact same set of speakers in the exact same environment, then the "flaws" are consistent throughout the testing and therefore have a minimal impact.
Those flaws present are capable of completely masking the results. If you try to measure the ultimate cornering capability of a Dodge Viper on a set of $100 Pep Boys tires, you are not going to get an accurate picture of that vehicle's performance envelope.
It is easy to understand why you or your testers believe that there are few audible differences based upon how you've crippled the tests!
You're taking things completely out of context to try and build yourself a case. Sorry, I won't bite. Next time you're at Disney, say hi to Bugs.
While at Epcot, the wife and I stayed for the big 9:00 music and fireworks celebration around the lake. While the fireworks were impressive, the sound was dreadful. You'd think Mickey could afford better.

The response certainly clarified your opinion advanced in previous posts on testing precepts and methodology.
rw
Yes, those pictures do. Most likely that wire is hot-dipped galvanized steel. A far cry from cryo treated OFC copper.....blah, blah, blah.

So apply Occam's razor to the situation and it isn't hard to realize what effect an esoteric power cord is going to have on power transfer into your equipment. Or is that conclusion too hard to derive?
Thanks for the other reminder. When the opportunity presents itself, I'll have to re-read the paper. In general, it would be a mistake to not characterize the equipment being used for the test, but again, I need to see what the context is here that you're complaining about.
Yes, those pictures do.

Do what? They certainly do not prove that the use of esoteric power cords with high resolution music playback systems is inaudible.
Or is that conclusion too hard to derive?
You seem to treat this subject as though it was theoretical astrophysics where experiential proof is impossible. Here on planet earth, it is quite easy to test the efficacy of different power cords on high resolution music playback systems.
I need to see what the context is here that you're complaining about.
Here are some quite amusing excerpts from the study:
"It is paramount in conducting listening tests with the most accurate audio equipment available if the test subject is to hear any audible effects."
Agreed. Ok, so why use a dinky low end "pro" mini monitor as the choice for "the most accurate audio equipment available"? Already the guy has invalidated the study using his own criterion. The title of the study is "Audibility of Phase Distortion in Audio Signals". Did the tester carefully conduct phase tests on the test equipment itself to verify that the results were independent of the equipment used?
"Phase characteristics for this loudspeaker were not investigated...Again, the phase response for the headphones was not investigated."
I sure hope the tests that you rely upon do not possess such obvious flaws. Back to the drawing board Daisuke!
rw
"Yes, those pictures do. Do what? They certainly do not prove that the use of esoteric power cords with high resolution music playback systems is inaudible."
All credible audio reviewers (unlike you) fully state all of the equipment and musical content used for an evaluation. Any trained listener can tell the difference on a high resolution system. We're talking more than a Bose radio here, Bruce.
There are no peer reviewed objective tests anywhere that bear this out. Only subjective ones that fall under the category of anecdotal evidence.