Audio Asylum Thread Printer
I have seen a few posts saying that DBT or ABX in general cannot reveal the differences between various components. In a nutshell (or not :-)!) why is this so? I personally have used an ABX comparator on my PC, which I know is not very scientific, but I could hear the differences between certain things, such as level mismatches, sonic differences in mp3 codecs, and distortion. I also noticed that I could not hear certain things that were supposed to be easily distinguishable by ear.
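The ABX procedure the poster describes can be sketched in a few lines. This is an illustrative simulation of the scoring logic only, not the implementation of any particular PC comparator; the listener models are hypothetical stand-ins.

```python
import random

def run_abx_trials(n_trials, listener_identifies_x):
    """Simulate scoring an ABX session.

    Each trial hides either A or B behind X at random (the double-blind
    assignment); `listener_identifies_x` is a callable that returns True
    when the listener correctly names the hidden stimulus.
    """
    correct = 0
    for _ in range(n_trials):
        x_is_a = random.choice([True, False])  # hidden assignment
        if listener_identifies_x(x_is_a):
            correct += 1
    return correct

# A listener who cannot hear the difference is reduced to guessing (~50%).
guesser = lambda x_is_a: random.choice([True, False])
# A listener who reliably hears it (e.g. a gross level mismatch) scores 100%.
reliable = lambda x_is_a: True

print(run_abx_trials(16, reliable))  # 16
```

The interesting cases in the thread are the ones in between: a small, real difference shows up as a score consistently above chance but below perfect.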
Follow Ups:
Why would you find a difference that doesn't exist? The great fear of high end manufacturers and pompous "reviewers" is that DBTs will reveal that competent inexpensive amps etc. sound just like the expensive stuff. I don't really believe this. Actually, the inexpensive stuff often sounds better.
If you check out the literature in JASA and psychometrics journals, you'll find no doubt on the matter. Now, a PROPER test is involved; this means training, listener control over pace, switching (at the LISTENER's will), and so on. There is a nearly endless list of things that have to be considered; it's long enough that I won't even try to write it.
This also means 1 subject at a time, etc, etc. Somebody recently suggested that it was ok to run multiple subjects with one switch.
NO!
Now, watch. I've been seriously libeled before by people here for telling the truth.
But that's it. Done right, ABX and DBT work.
Now, need we say that not all DBT's or ABX tests are good? I hope not.
JJ - Philalethist and Annoyer of Bullies
"This also means 1 subject at a time, etc, etc. Somebody recently suggested that it was ok to run multiple subjects with one switch. NO!"
Excuse me, but the method we use works. You're criticizing something you don't understand. It wasn't developed overnight and most certainly was not done in a haphazard manner. A company our size has neither the time nor money to waste on something that can't produce reliable and repeatable results.
Sorry if it is counter to what you have experienced, but you have no grounds to say it is improper.
Has a long history of problems. Ditto having one person control switching, or having sequential testing.
Both are documented in the literature. If you want to argue otherwise, you're going to have to go get your own tests documented in the literature, etc, and show that they have neither cross-influence nor desensitization problems.
So go ahead, do it.
JJ - Philalethist and Annoyer of Bullies
See, now you're stuck in the very box that needs to be crawled out of.

You'll never get a satisfactory statistical result without enough participants. You'll never be able to run enough participants in your lifetime without some other methodology.
So, you go ahead and do it too.....
While we're at it, I suppose you also think that it is okay for the participant to be familiar with the test material, that it won't introduce another bias.....
First, unless you're running a self-training test, which again is something you do only for one subject at a time, you must in fact be familiar with the material as well as the test method, etc, via training if nothing else.

You claim this is "introducing a bias"; well, then, why don't you submit a paper to JASA or a good psychometrics journal and have it reviewed and see what people say? Why don't you show your results that demonstrate sensitivity? Evidence, man, let's see it.
As to the number of subjects, just how many trials do you run per subject?
Oh, and no, not everyone runs multiple subjects per test. Do not presume to tell others how they work.
You're willing to anonymously introduce all of these claims, but it's time to cough up. Show me the papers, the evidence, and the test results that show that you can run a sensitive test when the subjects are not familiar with the material and you run multiple subjects at once. Show the evidence that the subjects don't cross-influence each other, as well.
You've made the claims, they run strongly counter to both my experience and the state of the art as expressed in the literature, so let's see the evidence.
JJ - Philalethist and Annoyer of Bullies
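The trials-per-subject question above comes down to binomial arithmetic: how many correct answers out of N forced-choice trials are needed before chance becomes an implausible explanation. A minimal sketch (the 16-trial and 0.05 figures are illustrative, not from the thread):

```python
from math import comb

def binom_p(correct, trials):
    """One-tailed p-value: probability of scoring `correct` or more
    out of `trials` by pure guessing (50% per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def min_correct_for_significance(trials, alpha=0.05):
    """Smallest score whose chance probability is at or below alpha."""
    for k in range(trials + 1):
        if binom_p(k, trials) <= alpha:
            return k

# With 16 trials, 12 or more correct is significant at the 0.05 level.
print(min_correct_for_significance(16))  # 12
```

Note how quickly the requirement tightens: a short session leaves little room between "guessing" and "perfect", which is one reason the number of trials per subject matters so much.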
I've already told you it's proprietary to the company I work for, and I am not at liberty to divulge how it's actually accomplished. I'm just pointing out that it has been done successfully. Maybe it does go counter to most of what's published and your own experience, but it has been done successfully, and maybe by more than one party. As with the company I work for, it is unpublished because the methods are proprietary and considered business critical.

"You claim this is "introducing a bias", well, then, why don't you submit a paper to JASA"
It's rather common sense that you run the risk of the subject suddenly getting lost in the performance rather than conducting the test as expected; that's bias, and it hardly takes a paper to realize it. Again, I'm just pointing out that one must get outside their little boxes of comfort and look at everything differently.
I haven't seen the problem with trained subjects. Period.
So what problem were you trying to address? You keep saying "look outside the box" but I haven't got any evidence that you can show a good detection threshold.
JJ - Philalethist and Annoyer of Bullies
"I haven't seen the problem with trained subjects."

That doesn't mean it doesn't happen, or is always detected, does it.
And if you think that a lack of performance on the part of a subject can't be detected, well...It's time for some specifics from you, thank you.
JJ - Philalethist and Annoyer of Bullies
What about you? Seems to me the few published listening tests have been picked apart by several people for various reasons, and yet you persist in doing the same thing over again, which will just be picked apart again....
"Seems to me the few published listening tests have been picked apart by several people for various reasons..."

Isn't that what the scientific method is all about? Provide the details by which a given test can be evaluated? You posted a link about a guy testing the audibility of phase who used some mediocre pro speakers as his "reference". Further, he stated that he assumed for the purposes of the test that they were phase linear. Interestingly, he stated that the audibility varied greatly with the listener. Duh.
Bogus tests prove nothing.
"You posted a link about a guy testing the audibility of phase who used some mediocre pro speakers as his 'reference'"

Huh? Please refresh my memory on this. The only link coming close is one by a fellow (Scot's Guide) who discovered a flaw in another person's test, where they used a different load impedance to change the power level, instead of increasing the amplifier output.
He never ran any test, but rather did a mathematical analysis to show what was measured by the other party.
"Facts are stubborn things; and whatever may be our wishes, our inclinations, or the dictates of our passions, they cannot alter the state of facts and evidence." --John Adams

I agree with this quote found on another site you referenced, "The Scientific Art of Audio". True, except for when the evidence does not support the alleged "facts". I particularly like the section on "The truth about esoteric power cords". Gee, those pics do offer some compelling "facts"! The phase test is linked from here as well.
http://www.fortunecity.com/skyscraper/motorola/145/
rw
If you have a problem with the work, write to the author/school and see what response you get.
I have far better ways to spend my time than to debate with an inept grad student. Like enjoying music.

Daisuke needs to spend a couple of years listening to music on what are truly state of the art audio reproduction systems before he has any understanding of what he is trying to prove. He is at present light years away from that reality.
rw
Yeah, right.
"you have not had the good fortune to hear a system where such claims of inaudibility are rendered moot."

Time to fall back on the "scientific mantle" that cites no sources nor reveals any experience with truly SOTA equipment. Let the untrained ear guide your way. What does Motorola have to do with high end audio anyway? The rigors of family radio reception?
rw
It's truly amazing that you think anecdotal evidence is credible.
Not to mention subjective opinion.
Since no component is perfect, one necessarily has to make choices based on preference. My preferences have themselves evolved over a long period of time. The fact that differences exist is not the question for experienced listeners using high resolution systems. The question is which differences are most important to any one listener. Is image specificity more important than harmonic integrity? Is stellar bass response more important than dynamic accuracy? There is certainly no one answer.

So when one speaks of the clearly audible differences among components (using musical content, not static test tones) where the numbers are not supported by the observational data, then which conclusion do you draw?
1. The consistently perceived differences among a large sample of folks (thousands of high end component customers) are completely attributable to merely psychological factors.
OR
2. The testing methodology does not mimic the musical reproduction process closely enough to be relevant. You pointed out the challenge requiring powerful supercomputers to handle testing using complex musical content (which, by the way, is how audio equipment is used by the public). Likewise, you have identified the other challenges of conducting DBT tests offered by jj. Your solution? Dumb down the tests to make them more convenient. The result of the simplification process is to make the tests largely meaningless.
Perhaps there are some valid DBT tests that do not contain the flagrant flaws evidenced in the links posted thus far. We're all waiting to see them!
rw
"Your solution? Dumb down the tests to make them more convenient."

I certainly don't know where you get this. There is never any convenient method of handling the human factor in these types of tests.
As for the link, the phase response of the speakers would have been nice to know, but every speaker has some phase/group delay anomaly. As long as the test was conducted with the exact same set of speakers in the exact same environment, then the "flaws" are consistent throughout the testing and therefore have a minimal impact. Had either the testing environment or the speakers changed, even swapping them from left to right, then you would have introduced a variable that would have to be separately accounted for.
As to the quality of the speakers, that is merely a subjective opinion on your part, not to mention your comments regarding what the author should be doing with his time. Whether you want to believe it or not, those "mediocre" speakers have very likely been used in many, many studios to make mixdown choices.
Here are five such examples quoted from your posts.

DUMB DOWN # 1 - Make the tests easier to conduct, not more accurate.
When asked about running multiple subjects simultaneously, you said:
You'll never get a satisfactory statistical result without enough participants. You'll never be able to run enough participants in your lifetime without some other methodology.
So let's not bother with considering the problems with this approach, shall we? We'll go with quantity instead of quality.
DUMB DOWN # 2 - Utilize simplistic static tones as test criteria
You also have to understand that the simpler a test the better... It is quite possible to make a test signal so complex that a room full of Cray computers can't properly analyse it.
Fine. Those tests are entirely relevant for those souls who spend their leisure time listening to test tones. Does that fully characterize the performance differences when reproducing musical content on high resolution gear? I think not.
DUMB DOWN # 3 - Use testers unable to discern fine differences
This one is simply amazing:
All the tests I've participated in did not require training.
I guess it all depends upon what you are trying to prove. If you are trying to determine the audible characteristics of a family radio with the general populace, then this probably works fine. If, on the other hand, you are trying to establish ultimate audible effects of the highest resolution audio equipment, then you must have trained listeners.
Do you know how the suspension designs of exotic performance cars are finalized? While the initial modeling is done by computer they then create prototypes and perform exhaustive experiments using professional test drivers. I can imagine the laughter by Ferrari or Porsche, etc. if you were to suggest that these changes be dialed in by some guy off the street.
DUMB DOWN # 4 - Use completely unfamiliar musical material as the comparative reference
This one came as a surprise:
I suppose you also think that it is okay for the participant to be familiar with the test material,
Absolutely! Here again, what is your motive? It takes me a couple of weeks to fully grasp the musical content of a new album and explore all the nuances of a fine recording. Flash uncontrolled sections of new material at someone and indeed they are not going to hear a lot of differences.
DUMB DOWN # 5 - Use mediocre equipment that is not "the most accurate audio equipment available"
As long as the test was conducted with the exact same set of speakers in the exact same environment, then the "flaws" are consistent throught the testing and therefore have a minimal impact.
Those flaws present are capable of completely masking the results. If you try to measure the ultimate cornering capability of a Dodge Viper on a set of $100 Pep Boys tires, you are not going to get an accurate picture of that vehicle's performance envelope.
It is easy to understand why you or your testers believe that there are few audible differences based upon how you've crippled the tests!
You're taking things completely out of context to try and build yourself a case. Sorry, I won't bite. Next time you're at Disney, say hi to Bugs.
While at Epcot, the wife and I stayed for the big 9:00 music and fireworks celebration around the lake. While the fireworks were impressive, the sound was dreadful. You'd think Mickey could afford better.

The response certainly clarified your opinion advanced in previous posts on testing precepts and methodology.
rw
Yes, those pictures do. Most likely that wire is hot-dipped galvanized steel. A far cry from cryo-treated OFC copper.....blah, blah, blah. So applying Occam's Razor to the situation, it isn't hard to realize what effect an esoteric power cord is going to have on power transfer into your equipment. Or is that conclusion too hard to derive?
Thanks for the other reminder. When the opportunity presents itself, I'll have to re-read the paper. In general, it would be a mistake to not characterize the equipment being used for the test, but again, I need to see what the context is here that you're complaining about.
"Yes, those pictures do."

Do what? They certainly do not prove that the use of esoteric power cords with high resolution music playback systems is inaudible.
"Or is that conclusion too hard to derive?"
You seem to treat this subject as though it was theoretical astrophysics where experiential proof is impossible. Here on planet earth, it is quite easy to test the efficacy of different power cords on high resolution music playback systems.
I need to see what the context is here that you're complaining about.
Here are some quite amusing excerpts from the study:
"It is paramount in conducting listening tests with the most accurate audio equipment available if the test subject is to hear any audible effects."
Agreed. Ok, so why use a dinky low end "pro" mini monitor as the choice for "the most accurate audio equipment available"? Already the guy has invalidated the study using his own criterion. The title of the study is "Audibility of Phase Distortion in Audio Signals". Did the tester carefully conduct phase tests on the test equipment itself to verify that the results were independent of the equipment used?
"Phase characteristics for this loudspeaker were not investigated...Again, the phase response for the headphones was not investigated."
I sure hope the tests that you rely upon do not possess such obvious flaws. Back to the drawing board Daisuke!
rw
"Yes, those pictures do. Do what? They certainly do not prove that the use of esoteric power cords with high resolution music playback systems is inaudible."
All credible audio reviewers (unlike you) fully state all of the equipment and musical content used for an evaluation. Any trained listener can tell the difference on a high resolution system. We're talking more than a Bose radio here Bruce.
There are no peer reviewed objective tests anywhere that bear this out. Only subjective ones that fall under the category of anecdotal evidence.
But you won't provide evidence. You have come up with "problems" in the "normal" way that people run things, but you won't provide evidence for that, either.
What's more, you seem unable to understand how one can evaluate the performance of a subject relative to known thresholds.
Why don't you submit a paper to the next US AES convention?
JJ - Philalethist and Annoyer of Bullies
Ya know, you're beginning to sound like some of the others around here with all the guessing you're doing.

Starting with "my methods" - never, ever said that; I made it clear numerous times. Start over.
You're putting forth the test, and without providing a single detail, dismissing everything everyone says to you. So you've taken on responsibility personally for these ideas, and you can live with it.

Now, you have yet to offer any evidence about subjects having trouble with familiar material. On the converse, there are reams of data showing that unfamiliarity with the entire setup, music, etc, desensitizes a test.
You claim that there is no problem with running multiple subjects. On the converse, there are reams of data showing:
1) That multiple subjects affect each others' scoring.
2) That an inability to do a clean, fast, listener-controlled switch desensitizes a test.
3) That primary auditory memory starts to fade inside of 200 milliseconds.

How, then, can you run fixed-switched tests that do not allow time-proximate comparison (where "proximate" means 200 milliseconds) and still have sensitivity?
You have made some very strong claims, and you won't provide any evidence, yet you whine incessantly when others point this out.
If you want to convince anyone that this test you talk about does anything beyond run a desensitized test that will not get to already-measured JND's you have to do better.
Lots better.
Like I said, submit a paper to the 115th AES on this.
JJ - Philalethist and Annoyer of Bullies
"and you can live with it."

I'm not the one with the problem, so it's just fine by me. And furthermore, my employment is worth far more than a speaker cable test.
"You have made some very strong claims, and you won't provide any evidence, yet you whine incessantly when others point this out."
Oh bullshit. You likewise have made some strong claims and haven't backed up anything you propose either. You haven't even answered the one question of bias I asked.
"If you want to convince anyone that this test you talk about does anything beyond run a desensitized test that will not get to already-measured JND's you have to do better."
So far, you have no hard evidence either; the published cable listening tests have all been criticized in one way or another, including for not having enough statistical evidence. (Even if I don't agree with their arguments against the results.) Yet you have just persisted in touting that we do the same thing. Some say that's the definition of insanity: repeating the same thing and expecting different results.
Been nice jerking your chain jj. Throughout this, you've shown to be stuck in the same dogmatic rut that the "Annoyer of Bullies" likes to poke at.
Maybe now I know where those stereotypical "ABX'er" comments originated.

I've stated the common understanding. The evidence is visible in the AES, ASA, various psychology journals and texts, psychometry journals, and the like. It's all over the place.
Your claim about statistical size in cable tests begs the actual hypothesis entirely. It only takes one repeat performer to push that performer's p1 and p2 so far down as to be sure they detected something.
I haven't CLAIMED that the tests represent the entire population, and neither have many other people, so making that criticism is a waste of time.
The evidence is clear: Untrained listeners don't hear as much.
The evidence is clear: Tests that are not time-proximate are not as sensitive.
The evidence is clear: Multiple subjects cross-influence and add noise to the test.
Those are well-demonstrated in the literature. You must show that the test you espouse can do as well, despite the long list of demonstrations otherwise, before you have any credibility.
WHERE IS YOUR EVIDENCE?
JJ - Philalethist and Annoyer of Bullies
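jj's repeat-performer point can be illustrated with elementary binomial arithmetic. The scores below are hypothetical, chosen only to show the shape of the effect: a result that is merely marginal in one session becomes very hard to attribute to chance when the same listener repeats it.

```python
from math import comb

def binom_p(correct, trials):
    """One-tailed chance probability of scoring `correct`/`trials`
    or better by guessing (50% per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# One session at 12/16: barely significant (p ~ 0.038).
one_session = binom_p(12, 16)
# The same listener repeating that performance, 24/32 overall:
# far beyond chance (p ~ 0.0035).
two_sessions = binom_p(24, 32)

print(f"one session p = {one_session:.4f}, two sessions p = {two_sessions:.4f}")
```

This is why a single consistent performer matters more than the size of the panel: the cumulative p-value for that individual keeps falling with every repeated above-chance run, regardless of how the rest of the population scores.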
Still won't admit that no current test is perfect, will you?
As you and everyone else knows, you have materially and unquestionably misrepresented my position. Your statement constitutes a full and explicit accusation of professional misconduct on my part.
I require a full and complete retraction of the false position you have laid at my door.
Your trolling is starting to look quite familiar, and your willingness to create false positions speaks ill of you.
JJ - Philalethist and Annoyer of Bullies
Oh bullshit. Like you haven't done the same to me. Get over yourself.
Your claims go very strongly against many people's experience.

In particular, the length of auditory memory simply destroys any chance of having a good result without proximate switching that is listener controlled.
And, of course, the "mob aspect" will rule as well.
If you want credibility, put up a paper for review.
You're starting to look like a straw man working for the anti-DBT camp in my mind.
JJ - Philalethist and Annoyer of Bullies
"Your claims go very strongly against many people's experience."

Sorry, it's fact. Probably been done other places too, but again, because it is proprietary you will find no publicly available documentation.
"And, of course, the "mob aspect" will rule as well."
Mob? Whose mob? Yours, for not wanting to explore the possibility that in order to satisfy all the DBT critics, you'll have to do better than what you are proposing now? The fact that there may be labs which HAVE overcome the difficulties with the published data? The only fact here is that your mind is closed to the possibility that some of the published difficulties have been overcome and you have no interest in pursuing it.
"In particular the length of auditory memory simply destroys any chance of having a good result without proximate switching that is listener controlled."
Okay, so you can EXACTLY remember a complex auditory input for 10 minutes or more?
"If you want credibility, put up a paper for review."
Are you really that dense? One, I have made it abundantly clear multiple times that this is NOT my doing. Two, it is PROPRIETARY; I have also made this abundantly clear multiple times. Does someone need to knock it into your head with a hammer and chisel?
"You're starting to look like a straw man working for the anti-DBT camp in my mind."
"Mob? Whose mob? Yours for not wanting to explore the possibility that in order to satisfy all the DBT critics, you'll have to do better than what you are proposing now? The fact that there may be labs which HAVE overcome the difficulties with the published data? The only fact here is that your mind is closed to the possibility that some of the published difficulties have been overcome and you have no interest in pursuing it."

Pursuing what? That's a lie. You haven't offered anything TO pursue, only a bunch of unsubstantiated, extremely extraordinary claims. You have contradicted some of the most strongly demonstrated principles of auditory testing, but you won't even offer anecdotal evidence, let alone any extraordinary proof of your extraordinary claims.
First, it's you who claims that the test you report on, but won't take responsibility for (even here), is perfect. I'm not making novel claims, you are. You claim an advance, but you won't provide any evidence to evaluate.

Second, the mob aspect of multiple subjects is not a conjecture, it's a known, done deal. If you're going to use multiple subjects you're going to have to show some major evidence that you've come up with a way to completely avoid that.

Then you deceptively state: "Okay, so you can EXACTLY remember a complex auditory input for 10 minutes or more?" As you are well and truly aware, unless you don't even read what I write, I am the one asserting that 200 milliseconds is the far limit for comparisons of small acoustic differences. I have no idea where your "10 minutes" comes from, nor why you have deceptively implied that I claim any such thing. No, you can't recall that long. That's why any test that does not allow each and every listener to switch AT WILL is extremely suspect. You've indicted your own test. It's your test that has a delay between the similar parts of the same presentation, not mine. Couldn't you at least get your story straight?

Finally, you ARE making claims here, and extraordinary ones. Claiming that it's not your test, that it's proprietary, and the like, are lame, weak excuses. You've described the test, so it's not proprietary OR a trade secret. You've made claims about its sensitivity, so you've obligated yourself to support those claims, so: Submit a paper, and we'll see what comes of it.
You made the claims, now deliver the evidence.
You've made extraordinary claims, and used multiple instances of extremely deceptive "logic" in your defenses. To wit: You imply, completely without justification, that somehow 10 minutes of detailed partial loudness memory are required for time-proximate testing, when that is just the opposite of the truth. You state baldly that I am claiming that "tests are perfect" when in fact you are the only one here implying that, and for your test. You won't say when or where these tests happen, so we can't confirm your claims that way. You won't even use your real name, and deny yourself even that bit of credibility.

You may know something, but your presentation suggests only that you're talking (if your own description is correct) about a seriously insensitive test. You will put up no evidence, but you repeat the same empty claims over and over again.

You appear to have nothing to offer. Write the paper, and get it peer-reviewed. Until you do, I'm afraid that I'll have to regard your claims as specious.
Frankly, you read like a straw-man instantiation of the subjectivist's claims about people who do DBT's.
JJ - Philalethist and Annoyer of Bullies
Maybe you should just go revisit some recent history:

http://www.stereophile.com/showarchives.cgi?141
And then maybe you'll come to realize that the same old same old ain't gonna cut it. That's all I've been trying to do JJ, is get you to realize this, but you are so damned entrenched you can't see it.
Put up or ...
JJ - Philalethist and Annoyer of Bullies
as to Jit's source. Moreover, I'm really interested in the motivation of a "scientific test" using untrained ears (from a previous post) on unfamiliar material. It's not difficult to dumb down any test to prove whatever you desire.
by some bogus switch box that is "assumed" to be perfect!
Having said that, have you any evidence that any well-designed 'switchbox' causes any degradation?

While I can imagine problems with some switching systems, I've also seen some designed clearly to open both ends of a cable, etc, including the ground conductors.
I've no idea if there was one made for commercial use.
As always, I do not suggest that it's easy to run a good DBT. It's not.
JJ - Philalethist and Annoyer of Bullies
"...have you any evidence that any well-designed 'switchbox' causes any degradation?"

Do you have any evidence that all evaluators use a "well designed switchbox"?
I would agree with Adi's approach.
And why would you ask such a ridiculous question? Take it case by case, and do it yourself. Homework isn't such a bad thing.
JJ - Philalethist and Annoyer of Bullies
Overall, I think we agree on this topic. I was merely suggesting that theory and practice are frequently not shared. What is a "properly designed" switchbox anyway?
> > What is a "properly designed" switchbox anyway?

According to mtry at AR, an amp, preamp, receiver or CD player can be designed to distort the signal and sound different. Therefore, by inference, everything else is properly designed and transparent when "operated within the specs".
> > Having said that, have you any evidence that any well-designed 'switchbox' causes any degradation?

JJ,
Did anyone conduct a reliable DBT with the ABX box in the signal path and without--basically a DBT for the ABX box itself?
Use a variety of conditions that you know are at threshold. See what happens.
JJ - Philalethist and Annoyer of Bullies
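One concrete reading of jj's suggestion: feed the whole rig, switchbox included, a pair of stimuli whose difference is already known to be near the audible threshold, and confirm the test still resolves it. A minimal sketch of generating such a control pair; the 0.5 dB level offset and 1 kHz tone are illustrative assumptions, not values from the thread.

```python
import math

def tone(freq_hz, seconds, rate=44100, gain_db=0.0):
    """Generate a sine tone as a list of float samples, scaled by gain_db."""
    amp = 10 ** (gain_db / 20.0)
    n = int(seconds * rate)
    return [amp * math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

# Positive-control pair: identical 1 kHz tones differing only by 0.5 dB.
ref = tone(1000, 0.5)
lowered = tone(1000, 0.5, gain_db=-0.5)

# If listeners who reliably detect a 0.5 dB step under known-good
# conditions fail to detect it through this setup, the setup itself
# (switchbox, protocol, pacing) is desensitizing the test.
```

The same idea extends to other threshold conditions: small delays, mild distortion, gentle EQ tilts. Any of them can serve as a calibration signal for the test method rather than for the device under test.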
> > Use a variety of conditions that you know are at threshold. See what happens.

Are you suggesting,
1. since the ABX box is designed to be outside the human auditory thresholds, it needs to be tested?
and/or
2. "control conditions" (for DBTs) is better than ABX methodology?
And see what happens.
JJ - Philalethist and Annoyer of Bullies
JJ,

Although I haven't done an ABX myself, I am fairly aware of its procedures.
I am a little lost about the precise message you are trying to convey.
jj:

Are you aware of any ABX tests that you would consider valid which yielded non-null results for different audio cables?
While it's not my favorite test, Greenhill's test of speaker cables certainly showed differences. While others have, albeit in not exactly standard circumstances (meaning the cables are used in circumstances beyond the usual), shown differences, those are not published.
But even tests that are not the best have shown differences, Phil.
This does not make some people happy, because what those differences showed were precisely in line with expected results from basic psychoacoustic experiments.
JJ - Philalethist and Annoyer of Bullies
jj:

As I recall, the Greenhill test involved speaker cables of significantly different lengths and gauge, and so was not particularly applicable to real life situations.
The next question, then, in my mind, becomes whether a reasonable number of serious, valid tests have been run on cables to afford a reasonable opportunity to validate the claim that cables of similar length and gauge can produce audible differences.
My own dilemma in all this (one incidentally I’m unaware of ever losing a second of sleep over) is that I consider myself a rational person, who believes that perceived physical phenomena should, at least theoretically, be subject to scientific validation if they are to be accepted as real phenomena. For example, I’m painfully aware that eye witness testimony can be highly unreliable, even though the witness believes beyond doubt the validity of his own perceptions.
Yet, in the past, I have said that my own perceptions of differences between many cables (based, admittedly, solely on sighted auditions) seem as “real” to me as my perception of my kitchen table. I assume no one would suggest that I need to conduct scientific tests to determine if my kitchen table really exists. But my perceptions of cable differences based on sighted auditions, I believe should require scientific validation before I can claim that these perceptions are truly the result of audible differences.
At this point, my own personal conclusion is that the testing in this area has been so sparse that the claims of the subjectivists haven’t really been given a fair chance. On the other hand, one must ask, I believe, as to why the industry (and here I include the entire high end industry, including makers of components) has not demonstrated any willingness to attempt to demonstrate that actual sonic differences between different cables and different components really exist.
I assume that the response of many in the industry and many audiophiles would be that the differences are so “obvious”, there is no need for control testing verification. If that is the response, my rational side is left feeling (to the extent a rational side can feel) very unsatisfied.
In addition, I think it is fair to ask those who are attempting to measure electrical differences between cables whether the differences they measure are of a magnitude that, based on all that is currently known about the threshold of human hearing, could actually be heard. Again, I would assume that the response would be that the differences they are hearing are so “obvious” that current research into the threshold of human hearing must be inadequate. Again, if this is the reaction, I’m left feeling unsatisfied.
First, your perceptions, no matter how they come about, are real to you. They may not be testable, or falsifiable, etc., but they are certainly real to you. I'm not sure if you've noticed, but I've been known to tee off very seriously on people who insist that perceptions, based on "real" things or on purely human effects (like the human tendency to overdetect, that is to say, detect a difference when none exists, instead of missing "real" differences), are "hallucinations" or other such stuff.

Now, the overdetection issue may in fact come in here.
It's commonly thought that there are 3 levels of "memory", starting with the near-periphery, or what I call "loudness" memory. This is a very detailed, but very fleeting memory. This loudness memory is reduced into a set of features, or middle memory (the psychologists have a fancy name for it) where much information is lost. Finally, feature memory is reduced to conceptual memory, where a great deal more information is lost. (I'm talking auditory here, but visual processing seems to be similar in some respects, of course the details are enormously different.)
In this process, both the middle and higher stages can be guided by expectation, random (or nonrandom) thoughts, attention, etc, and as a result one will remember DIFFERENT THINGS from the same audio stimulus.
By the way, even the partial loudness perception can be somewhat guided by conscious thought.
This is a simple consequence of how people work.
This, of course, is one of the reasons that learning is important, but it's also a reason that one MUST have a falsifiable hypothesis in testing auditory stimuli.
It's also a reason that DBTs may be MORE rather than LESS sensitive than sighted tests, because sighted tests introduce expectations and noise that can INTERFERE with the subjects' concentration.
Now, the Greenhill test was, if I recall correctly, within reasonable bounds as far as things that people would or might use in a system. Some of the wires were small, some large, but in fact such things were and still are sold for use.
I would hesitate to call all of them audiophile uses, perhaps.
As to the speculation about some of the reports, I'm simply not going to comment. I've been called enough names already.
The sensitivity of the ear at low levels, for instance, is remarkably close to the atmospheric (molecular) noise level. Those kinds of observations have been confirmed by DBT. The masking performance of the ear is likewise estimatable by knowledge of neural firing rates, etc, calculated from entirely different information, and it is confirmed quite well in DBT. And so on and so on.
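The dB SPL arithmetic behind that first observation is easy to sketch. Below is a minimal, illustrative Python snippet (the function name and structure are mine, not from the thread); it only encodes the standard definition that dB SPL is 20·log10 of the RMS pressure relative to 20 µPa, the conventional reference close to the threshold of hearing at 1 kHz.

```python
import math

P_REF = 20e-6  # reference pressure for dB SPL: 20 micropascals


def spl_db(pressure_pa: float) -> float:
    """Convert an RMS sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / P_REF)


# The nominal threshold of hearing at 1 kHz is about 20 uPa, i.e. 0 dB SPL:
print(spl_db(20e-6))  # -> 0.0
# A pressure ten times smaller sits 20 dB lower:
print(spl_db(2e-6))   # -> -20.0
```

The point JJ is making is that measured hearing thresholds land within a few dB of the pressure fluctuations due to air's own molecular noise, so there is very little room left for "undiscovered" sensitivity at low levels; the snippet only does the unit conversion, the noise comparison itself comes from the literature.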
Now, do we know "everything"? Obviously not. However, it is still quite possible, in the absence of full information, to reject hypotheses that can be shown false, or for which the evidence is overwhelmingly negative.
That's all I'll say, sorry, I'm tired of having some of the people here claim "JJ is no scientist", I'm tired of them making accusations of professional misconduct on my part, and so on, so from this point forward (No, Phil, it's not you I'm annoyed with, nor is it John Esc...), I'm simply going to point to the literature.
There's no point in my saying something that is mainstream, pretty hard to question, and that has a great deal of evidence behind it, only to see an obviously coordinated campaign of vilification rain down on me.
JJ - Philalethist and Annoyer of Bullies
"That's all I'll say, sorry, I'm tired of having some of the people here claim "JJ is no scientist", I'm tired of them making accusations of professional misconduct on my part, and so on, so from this point forward (No, Phil, it's not you I'm annoyed with, nor is it John Esc...), I'm simply going to point to the literature."

I fully sympathize with you. I've been attacked by both sides, to the point of having my competency and ethical standards as a lawyer attacked. This is, after all, just a hobby.
I don't know if you would be willing to answer the following question, but I'll try. You say:
The sensitivity of the ear at low levels, for instance, is remarkably close to the atmospheric (molecular) noise level. Those kinds of observations have been confirmed by DBT. The masking performance of the ear is likewise estimatable by knowledge of neural firing rates, etc, calculated from entirely different information, and it is confirmed quite well in DBT. And so on and so on.
Does what is known about the sensitivity of human hearing suggest that distortion measured at 110dB below 50mV could possibly be audible?
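For the arithmetic in that question: "110 dB below" a level is a fixed voltage ratio, 10^(−110/20) ≈ 3.2 × 10⁻⁶. A small illustrative Python sketch (the helper name is mine, not from the thread):

```python
def db_below(signal_volts: float, db_down: float) -> float:
    """Voltage of a component the given number of dB below a signal level."""
    return signal_volts * 10 ** (-db_down / 20)


# Distortion 110 dB below a 50 mV signal:
distortion = db_below(50e-3, 110)
print(f"{distortion * 1e9:.0f} nV")  # -> 158 nV
print(f"{10 ** (-110 / 20):.2e}")    # voltage ratio -> 3.16e-06
```

So 110 dB below 50 mV is roughly 0.16 µV. The snippet only does the ratio arithmetic; whether a component that small is audible still has to be judged against hearing-threshold and masking data.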
My dilemma remains that my personal experience tells me that speaker cables, interconnects, power cables and power line conditioners all can have a significant effect on sonic performance. With one or two minor changes in the combination of cables and power line conditioners I use, I can make a recorded performance go from a living, breathing event to a dull, lifeless thud. I realize that to the extent I believe that this experience is due to actual audible differences, that puts me at odds with most of the science in this area. But I continue to examine my own “expectations” and other psychological factors that could be affecting my experience and simply can find no correlation between those factors and the results of the different combinations I have tried.
For me, and I suspect for many audiophiles, it’s not merely a question of which wire sounds better. It is achieving that extremely rare and elusive (regardless of how much money one spends) combination of wires and components when the music suddenly comes alive and the magic arrives – when the goose bumps pop out, the feet start to tap, the spirit soars with the music and one is no longer listening to equipment but is fully engrossed in the music. Moreover, I find when this magic has been achieved, it is repeated over and over with countless well-known and new recordings. It is no mere fluke limited to a specific recording, a particular day, a particular time of day, or a particular mood.
Either the placebo effect is so pervasive that it has my mind permanently in its grip, or science simply hasn’t devised or conducted the right tests or discovered the valid theory to confirm and explain such an experience, but I cannot imagine ever accepting that the magnitude of difference between the magic that comes from the proper combination of components and wires and the ordinary performance of a very expensive, but emotionally uninvolving, system is all in my imagination.
What about properly working 16-bit-and-better CD players, or properly working solid-state amplifiers? Cables... let's get the bigger-difference stuff out of the way: if amps all sound alike, then why bother going to cables? I know that depending on the gauge and depending on the length there should be some differences... or the wire maker does something strange to the wire.
Ultimately the tests are not practical for the regular Joe Shmoe... unless of course they've tested what you want to buy. It would also be nice if Joe Shmoe was the one tested, not Joline Doe III.
Of course I often wonder why companies like APEX and RCA don't conduct these tests.
If I'm president and CEO of RCA I can stand up and say..."see nobody could tell the difference between our $59.00 portable cd player and the Linn CD12...you the buyer can have the same level of audible performance as a $20,000.00 cd player for $59.00."
"...if amps all sound alike then why bother going to cables?"

They don't.
...you the buyer can have the same level of audible performance as a $20,000.00 cd player for $59.00.
Same answer as jj.
rw
"...you the buyer can have the same level of audible performance as a $20,000.00 cd player for $59.00.Same answer as jj."
OK... how about we increase the RCA CD player to a certain $89.00 model and decrease the 20k CD player to "some 1k CD players." Surely the Sensible Sound has no reason to lie about their findings (well, to sell magazines for their targeted readers, but besides that).
How bout RCA pick on Sony... the masses don't know Linn anyway. So let's take Sony's SCD1 (in CD mode). Too expensive? Then we'll take one of the lower ES models around 1k.
Then RCA can change their slogan slightly by noting the SPECIFIC model of the SONY - or ANY OTHER known 1k cd player.
Makes great advertising for the cheapo companies, helps them sell more of their product... yet none of these companies will put on a professionally done, no-holds-barred test to help the consumer not get sucked in by audio jewelry.
Hmm, let's say I own Pioneer Electronics Corp. and I have introduced my 201 receiver for $199.00. My sales have been down lately (like Pioneer is going through right now).
So let's see, what would be a great way for me to INCREASE sales? I know... I will take the top-of-the-line Krell, bring in 200 listeners (separately, of course), and conduct DBT testing between the two amps to the highest standards ever, making sure that I use 95 dB-sensitive or better speakers at 8 ohms with no wild impedance swings.
After the people FAIL to significantly distinguish a difference (and they likely will fail; geez, Pioneer already had a model like this, no?), I can then bring out my advertising campaign:
"Pioneer's NEW REVOLUTIONARY design has just determined that our new $199.00 receiver is just as good AUDIBLY as the most expensive amplifier in the world, the Krell __________. Pioneer has made this revolution to bring high-end audio to those consumers that have not, UNTIL NOW, had the opportunity to get KRELL sound for a mere pittance. Our new 201, naturally, isn't made of exotic materials, but our engineers have bypassed that need and made an amplifier that trained listeners (which is part of a good/real DBT) (what the heck, make all the listeners composers and musicians and golden ears) could not distinguish.
Yes, folks, that's right, our new 201 has equalled the best amplifier in the world for sonic purity, as far as it is detectable by the human ear, and instead of $70,000.00 you pay $199.00 (if you buy our $399.00 model (the 501) then you can drive 2-4 ohm speakers as well).
That's right NO ONE(except perhaps those with ultra sonic hearing) will be able to tell the difference.
Hey I should run either or all of Pioneer, Yamaha, Sony, Denon, Technics, YORX, Sanyo, RCA, Electrohome, LG, AIWA, Marantz, Phillips, Onkyo, Cambridge Audio, Bryston, NAD, Arcam etc.
Geez I wonder why none of them has made this statement(Heck especially Bryston). Look what they have to GAIN from that advertising. Lots of folks would like a 70K amp for $199.00(ok $399.00 if you have tough speakers).
I wonder why NONE of them NOR any other company does such a thing? Surely I'm not a marketing genius and the only person to think of this, right? I mean, they would have the JAES to back them up on any technical aspect of the science, so they SURELY are not afraid of being sued, right? There would have to be a reason for being sued, right? AHHH!! Now we know. There is a reason to be afraid of being sued, isn't there?
Bugger!!! Maybe you really do have to spend more to get more. Damn It!!!
Read your post. I was hoping for better from this forum. Sounds to me like you are equating good sound with expensive equipment. This is, to my mind, as bad a position as stating that everything sounds the same. It would be more interesting if you posted some kind of reasoning instead of declaring yourself some kind of marketing genius. I don't think you will be running any of the companies you mentioned any time soon.

Maybe you, like so many others have done, should take a stab at dressing up run-of-the-mill Belden wire and selling it at incredible prices based on unsubstantiated claims, and be satisfied that the insane audio press has taken such complete control of audiophiles' minds that it is a widely held belief that no objective testing and no amount of measurements can ever prove or disprove anything. Pity the fool who spends mega dollars on hype-shrouded pseudo-audio mysteries when there are so many good recordings to be bought.
"Geez I wonder why none of them has made this statement..."

Easy. It is not true.
Ohh, but that would mean SS amps sound different (or make an audible difference when running a speaker), no? Or perhaps the Sensible Sound was incorrect in their RCA test (geez, I wish they would have listed the 1k CD players that were indistinguishable from the $89.00 RCA). Of course, I've always wondered why they didn't... I mean, they could not be sued... errr, or maybe they could... Does RCA own the Sensible Sound? Hmmm.
"Ohh but that would mean SS amps sound different (or make an audible difference when running a speaker), no?"

Yes.
But I'm too polite to suggest the answer there.
JJ - Philalethist and Annoyer of Bullies
My take is that the human mind does poorly in making a decision like this. I also found that ABX testing was useless for me to tell the differences between similar products or designs, yet my friends and I could hear the differences in more open testing, such as a blind A-B test. I mentioned this in an LTE in 1979 to Dr. Lipshitz and the other avid ABXers, but I have never gotten a satisfactory response. They just don't believe me. Once Dr. Lipshitz wrote me an LTE where he said, roughly: "Your math is OK, your measurements are OK, BUT it is impossible for you to hear differences between caps. ... We have tried it with ABX measurements, and always got a negative result." These are not the exact words, but that was the point. What can I say? Walt Jung sent Dr. Lipshitz some of the worst-sounding tantalum caps he had ever found. Guess what the ABX test found? NOTHING!
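Whatever one makes of that exchange, the statistics of a "NOTHING" result are worth keeping in view: a forced-choice ABX run only supports a conclusion relative to its trial count. Here is a hedged sketch of the standard exact binomial arithmetic (the function name is mine), assuming independent trials and a 50% chance of guessing right on each one:

```python
from math import comb


def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact p-value: probability of getting at least `correct`
    answers right in `trials` forced-choice ABX trials by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials


# 12 of 16 correct is conventionally "significant" at the 5% level:
print(round(abx_p_value(12, 16), 4))  # -> 0.0384
# 9 of 16 is statistically indistinguishable from guessing:
print(round(abx_p_value(9, 16), 4))   # -> 0.4018
```

A null result at small N says little either way: with only 16 trials, even a listener who genuinely hears the difference some of the time can easily fail to reach the 5% level, which is one reason trial counts and listener training matter so much in these debates.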
Did they drop them into a speaker crossover, or into an amp? How did they switch them? Wouldn't the audibility be usage/circuit-specific? Also, is it more audible with different types of signals, such as when they test compression codecs? Some codecs do better with some signal types than others.
I assume that you are talking about tantalum caps in this case. You have to ask Dr. Lipshitz about the test particulars. I am not here to discuss ABX details, but they can be found in issues of 'The Audio Amateur' in the late '70s.
nt
I think that it is possible, to some degree, to reveal the differences between various components using proper ABX/DBT, but what would not be possible is to determine which is better than the other!! Why SS?? Why SE?? Why CD?? Why LP?? Why live music?? Why this wine and not the other (more alcohol, of course)? Why your wife and not her sister??? Too much taste involved to determine what is better than the other!!
Maybe 'cause snippets of music are not enough to get one's imagination going?
Another problem is that if no difference is found, this does not say that no difference was there!! It might as well be the listener, or the test itself, or some other issue! As a guitar player I have tried different types of strings, all with different results! But these results have not always been as obvious to others (if at all ;o) not used to the instrument as I am! Therefore I think small differences often show up better in your own setup!

I think it is quite basic brain knowledge that "stuff" has to be stored for some time before entering some more permanent storage (about 5 minutes was the last I heard). Therefore, as you say, small snippets of music won't work that well. A musician playing the same instrument for years, or one of us loonies listening to the same setup for hours (without tweaking), will have a much better background to evaluate any changes than one listening to bits and pieces on an unknown setup.
My take two ;o) Have fun!