Audio Asylum Thread Printer
In Reply to: And by all means posted by Jitter_by_Coffee on December 05, 2002 at 11:51:38:
I have far better ways to spend my time than to debate with an inept grad student. Like enjoying music. Daisuke needs to spend a couple of years listening to music on what are truly state-of-the-art audio reproduction systems before he has any understanding of what he is trying to prove. He is at present light years away from that reality.
rw
Follow Ups:
Yeah, right.
you have not had the good fortune to hear a system where such claims of inaudibility are rendered moot. Time to fall back on the "scientific mantle" that cites no sources nor reveals any experience with truly SOTA equipment. Let the untrained ear guide your way. What does Motorola have to do with high-end audio anyway? The rigors of family-radio reception?
rw
It's truly amazing that you think anecdotal evidence is credible.
Not to mention subjective opinion.
Since no component is perfect, one necessarily has to make choices based on preference. My preferences have themselves evolved over a long period of time. The fact that differences exist is not the question for experienced listeners using high resolution systems. The question is which differences are most important to any one listener. Is image specificity more important than harmonic integrity? Is stellar bass response more important than dynamic accuracy? There is certainly no one answer. So when one speaks of the clearly audible differences among components (using musical content, not static test tones) where the numbers are not supported by the observational data, then which conclusion do you draw?
1. The consistently perceived differences among a large sample of folks (thousands of high end component customers) are completely attributable to merely psychological factors.
OR
2. The testing methodology does not mimic the musical reproduction process closely enough to be relevant. You pointed out the challenge requiring powerful supercomputers to handle testing using complex musical content (which, by the way, is how audio equipment is used by the public). Likewise, you have identified the other challenges of conducting DBT tests offered by jj. Your solution? Dumb down the tests to make them more convenient. The result of the simplification process is to make the tests largely meaningless.
Perhaps there are some valid DBT tests that do not contain the flagrant flaws evidenced in the links posted thus far. We're all waiting to see them!
rw
"Your solution? Dumb down the tests to make them more convenient."
I certainly don't know where you get this. There is never any convenient method of handling the human factor in these types of tests.
As for the link, the phase response of the speakers would have been nice to know, but every speaker has some phase/group-delay anomaly. As long as the test was conducted with the exact same set of speakers in the exact same environment, then the "flaws" are consistent throughout the testing and therefore have a minimal impact. Had either the testing environment or the speakers changed, even swapping them from left to right, then you would have introduced a variable that would have to be separately accounted for.
As to the quality of the speakers, that is merely a subjective opinion on your part, not to mention your comments regarding what the author should be doing with his time. Whether you want to believe it or not, those "mediocre" speakers have very likely been used in many, many studios to make mixdown choices.
Here are five such examples quoted from your posts.

DUMB DOWN # 1 - Make the tests easier to conduct, not more accurate.
When asked about running multiple subjects simultaneously, you said:
You'll never get a satisfactory statistical result without enough participants. You'll never be able to run enough participants in your lifetime without some other methodology.
So let's not bother with considering the problems with this approach, shall we? We'll go with quantity instead of quality.
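For what it's worth, the statistical point being argued here is easy to sketch. Under the null hypothesis that a listener is simply guessing, the number of correct answers in an ABX-style comparison follows a binomial distribution, so significance depends on trials per listener as much as on headcount. This is a minimal illustration only, not anything from the thread; the listener score used below is made up:

```python
# Sketch (assumption: ABX null hypothesis = random guessing, p = 0.5).
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided probability of scoring `correct` or better by pure chance."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A single trained listener scoring 12/16 already reaches p < 0.05:
print(round(p_value(12, 16), 4))  # → 0.0384
```

The point of the sketch: one careful listener doing more trials can be as statistically informative as a crowd doing a few trials each, which is the quality-versus-quantity trade-off the two posters are disputing.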
DUMB DOWN # 2 - Utilize simplistic static tones as test criteria
You also have to understand that the simpler a test the better... It is quite possible to make a test signal so complex that a room full of Cray computers can't properly analyse it.
Fine. Those tests are entirely relevant for those souls who spend their leisure time listening to test tones. Does that fully characterize the performance differences when reproducing musical content on high-resolution gear? I think not.
DUMB DOWN # 3 - Use testers unable to discern fine differences
This one is simply amazing:
All the tests I've participated in did not require training.
I guess it all depends upon what you are trying to prove. If you are trying to determine the audible characteristics of a family radio with the general populace, then this probably works fine. If, on the other hand, you are trying to establish ultimate audible effects of the highest resolution audio equipment, then you must have trained listeners.
Do you know how the suspension designs of exotic performance cars are finalized? While the initial modeling is done by computer they then create prototypes and perform exhaustive experiments using professional test drivers. I can imagine the laughter by Ferrari or Porsche, etc. if you were to suggest that these changes be dialed in by some guy off the street.
DUMB DOWN # 4 - Use completely unfamiliar musical material as the comparative reference
This one came as a surprise:
I suppose you also think that it is okay for the participant to be familiar with the test material,
Absolutely! Here again, what is your motive? It takes me a couple of weeks to fully grasp the musical content of a new album and explore all the nuances of a fine recording. Flash uncontrolled sections of new material at someone and indeed they are not going to hear a lot of differences.
DUMB DOWN # 5 - Use mediocre equipment that is not "the most accurate audio equipment available"
As long as the test was conducted with the exact same set of speakers in the exact same environment, then the "flaws" are consistent throughout the testing and therefore have a minimal impact.
The flaws present are capable of completely masking the results. If you try to measure the ultimate cornering capability of a Dodge Viper on a set of $100 Pep Boys tires, you are not going to get an accurate picture of that vehicle's performance envelope.
It is easy to understand why you or your testers believe that there are few audible differences based upon how you've crippled the tests!
You're taking things completely out of context to try and build yourself a case. Sorry, I won't bite. Next time you're at Disney, say hi to Bugs.
While at Epcot, the wife and I stayed for the big 9:00 music-and-fireworks celebration around the lake. While the fireworks were impressive, the sound was dreadful. You'd think Mickey could afford better.

The response certainly clarified your opinion advanced in previous posts on testing precepts and methodology.
rw