Audio Asylum Thread Printer
In Reply to: High end audio = a field where people can claim to have exceptional skills and never have to prove it ! That's rare. posted by Richard BassNut Greene on April 26, 2006 at 09:07:57:
I don't think anyone is claiming to have exceptional skills, Mr.
Greene. But as I said in another posting, I have taken part in over
100 formal blind listening tests. Many of those tests produced null
results. But my results also have shown that I could distinguish a)
absolute polarity, b) different capacitor dielectrics, c) an
interconnect with the ground-shield connection made at one end from
the other, d) a solid-state amp from a tube amp, e) many different
phono cartridges, f) many different speakers, including correctly
identifying a speaker model under blind conditions from my previous
experience of it. And so on.
So why do I have to do more tests? As a result of my considerable
experience of blind testing, I am content that the differences
discussed in magazines, while often small, are still real and not
unimportant. You obviously disagree, in which case I respectfully
suggest you either simply stop reading Stereophile or start your own
magazine to review components the way _you_ think it should be done.
I note, BTW, that you have still to offer any actual argument in
rebuttal of my "As We See It." :-( And the question is begged: how
many formal blind tests have _you_ taken part in to be so sure of
their efficacy?
John Atkinson
Editor, Stereophile
Follow Ups:
Self-proclaimed (are there any other types?) Golden Ears:
Please take your blood pressure pills and other medications now!

I'm challenging your hearing ability beliefs again ... and now JA is 'on my side' ... although he never intended to be!
JA claims to have taken part in "over 100 formal blind listening tests".
Ignoring speakers and cartridges, where differences are audible even to objective audiophiles, he has a mere four positive blind test results out of 100 to support the "Audio Myth" that all components sound different!
However, if ANY audiophile guessed at random during 100 blind tests, he would be expected to have 4 positive test results (assuming a positive test result is 12 right answers out of 16 trials) ... and that seems similar to JA's "batting average" (for components where audible differences were in doubt).
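The arithmetic behind this claim can be checked with a short binomial calculation. This is a sketch under the poster's stated assumptions (16 trials per test, 12 correct counted as a positive, 50% chance per trial), not anything from the original thread:

```python
from math import comb

# Probability of scoring 12 or more correct out of 16 trials by pure guessing
n_trials, k_needed = 16, 12
p_pass = sum(comb(n_trials, k) for k in range(k_needed, n_trials + 1)) / 2**n_trials
print(f"Chance of a 'positive' result by guessing: {p_pass:.3f}")  # ~0.038

# Expected number of chance positives across 100 independent tests
print(f"Expected positives in 100 tests: {100 * p_pass:.1f}")      # ~3.8
```

So a pure guesser would indeed be expected to "pass" roughly 4 of 100 such tests, matching the figure quoted above.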
I feel sorry for JA now, because his own test results suggest he has "normal hearing ability", and that's not good enough to support the "Audio Myth".
Based on sighted auditions where component differences are almost always heard (or imagined), often in a few minutes, an "Audio Myth" developed that real audiophiles should never say they can't hear a difference between two components.
"Can't hear a difference" is equivalent to saying "I don't know" in response to a question -- something unfortunately viewed in this country as evidence of ignorance, as if a fast wrong answer, or a political non-answer, is evidence of intelligence!
The "Audio Myth" that all components sound different has never been proven -- not one audiophile has even come close when the components were playing music at the same SPL, and the brand names were hidden.
I can't imagine any audiophile who would not want to prove he could hear component differences almost all the time ... if he could do so.
Audiophiles want to hear differences, and they don't walk out of blind tests claiming the test design was no good.
Only audiophile "armchair quarterbacks" claim blind test participant bias against hearing differences, and poor test methodology ... without even showing up to observe the test!
A lot of money is spent based on the Audio Myth, and those spending the money are very reluctant to test their beliefs ... or show respect to any audiophiles who report their own test results that reflect less than "golden ears".
Over three decades, many audiophiles have heard differences in transducers (speakers and cartridges) while listening to music, but have had great difficulty hearing differences among electronics, and have never heard differences among wires of typical lengths intended for audio use.
Blind comparisons where listeners are allowed to believe there are two components in use, when there is really one component, have similar results: Lots of "differences" reported when comparing "A" with the imaginary "B" (usually differences reported in 50% to 75% of the trials when a component is compared with itself).
The most obvious conclusion is that we audiophiles are so strongly biased toward hearing component differences that we claim to hear them even when differences are impossible (A compared with A).
Some of the A-B component differences are likely to be small SPL differences thought to be meaningful sound quality differences worth paying money for.
Other "differences" are most likely imagined, probably because they were expected, or caused by the general (male?) reluctance to say "I don't know".
To JA:
Your editorial "throwing mud" at blind test designs implies some of the blind tests you were asked to participate in were flawed.

So please tell us exactly what percentage of the blind tests you joined had poor experiment designs?
(Note: a "?" signifies my question to you)

I assume you would have walked out of any blind test after observing evidence of poor test design (why waste hours of your time?) ... although it's strange you never mention how many tests you walked out of beyond the 100 where you stayed.
Or are you implying that you were invited to 100 blind tests and EVERY ONE OF THEM was properly designed? If so, this would suggest your editorial about poor test design was not supported by your own blind test experiences ... but was mere speculation about (throwing mud on) OTHER blind tests done by OTHER audiophiles, when you were not even observing the test methodology.
Your "100 formal blind listening tests" offer us more evidence the Audio Myth is really a myth, as if we needed more evidence, from a person who claims to know the difference between a test conducted properly, and poor test design.
I've added a few valuable comments to what is already very weak proof, although apparently the best you can offer, that all audio components sound different:
YOU WROTE:
"I have taken part in over 100 formal blind listening tests. Many of those tests produced null results."
RG comments:
For now I'll assume your results were real (not the expected 4 out of 100 positives from lucky guessing):
- Interesting that component differences are heard in virtually all sighted auditions ... yet you had "many" null results. Of course, in plain English, "null results" simply means you were unable to support the Audio Myth with your own ears in a well-designed test (and you claim to know the difference), which casts great suspicion on how easily component differences are "heard" in virtually every sighted audition.

YOU WROTE:
"But my results also have shown that I could distinguish a)
absolute polarity,"
RG:
This has been done before, although using carefully selected recordings, speakers and rooms (or headphones). Was this the case for you?
You don't mention your score, but then this type of test has nothing to do with the typical component tests in Stereophile.

YOU WROTE:
"b) different capacitor dielectrics"
RG:
I find it hard to believe this test was done with music, because others have failed to hear differences among cap brands in speaker crossovers, assuming they were identical filters, while listening to music. Once again you didn't mention your score, and once again this doesn't apply to the typical Stereophile component tests.

YOU WROTE:
"c) an interconnect with the ground-shield connection made at one end from the other,"
RG:
Right versus wrong grounding connections for a specific brand/model interconnect has no apparent correlation with an ability to hear sound quality differences among interconnects while listening to music, as has been reported in Stereophile.

YOU WROTE:
"d) a solid-state amp from a tube amp,"
RG:
Small differences are sometimes heard even by objective audiophiles, most likely due to the high output impedance effect on speaker frequency response. No apparent correlation to an ability to hear differences among solid-state amplifiers, as has been reported in Stereophile.

YOU WROTE:
" e) many different phono cartridges, e) many different speakers,"
RG:
You must be trying to raise your blind test "batting average" by including blind tests of components that have been previously proven to have sound quality differences by objective audiophiles!
These results are expected and not proof of superior hearing ability!

YOU WROTE:
" including correctly identifying a speaker model under blind conditions from my previous experience of it."
RG:
I hate to tell you that many speakers have memorable colorations ... and this result will not impress anyone!

YOU WROTE:
"And so on"
RG:
If you have already presented your best evidence of being able to hear differences among all audio components, then I'm afraid "and so on" will not be worth reading!

I have participated in four DBTs and over 12 SBTs.
My experiences reflect too many people claiming to hear differences in sighted warm-up auditions and coaching others ... who are not able to hear differences minutes later under blind conditions.

As a result of these tests, I no longer make claims of superior hearing ability that I might have made in the prior 20 years (before any blind test experience) as a (self-appointed) "golden ear".
You must enjoy wasting your time in blind tests, or are you implying that the blind tests you participate in use the "proper methodology" and are valid?

But then you reverse yourself by bragging about positive results in some blind tests. So now are you implying that when you have positive results, the tests were done properly and you deserve accolades ... while the null results were from poorly managed tests ... or don't matter?
So I guess your conclusion is blind tests can't be trusted ... unless you are there to see they are done correctly ... and you have positive results.
Not that your positive results mean much in relation to the products tested in Stereophile: We already knew loudspeakers and cartridges are likely to have audible differences ... and sometimes tube amps may have small audible differences. That's been done before. And your claimed results were no surprise. Of course we really don't know if your test results were very decisive ... or just better than random guessing -- we only know you claimed to "pass the tests", but for some reason you didn't mention your scores.
Now what about all those null results you gloss over?
Are you going to claim they are meaningless?
In fact they are important as a psychological test to demonstrate a listener bias toward claiming to hear differences among components.

How often were you confident you heard differences in the sighted warm-up auditions, only to find out minutes later that you really heard no differences when the brand names were hidden?
I'll answer the question for you because I assume you'll ignore it: -- In the sighted warm-up auditions most blind test participants thought they heard differences (as in almost all sighted auditions) ... so the fact that they could not hear differences minutes later under blind conditions strongly suggests the "audible" differences were nothing but their imaginations (expectations).
The "imagined differences" hypothesis is easily tested by allowing audiophiles to compare a component with itself under "blind conditions", while allowing them assume there are two components.
The result will typically be "differences heard" in 50% to 75% of the "A-A" comparisons.
Knowing the brand of component being heard adds no value to a description of the sound quality. That knowledge can only bias the listener.
Comparing two components playing at different SPLs cannot add value to a description of sound quality differences ... not to mention comparing two components heard in different rooms.
I know you or your reviewers will never risk demonstrating to witnesses the claimed ability to hear differences among all audio components ... even though doing this would boost subscriptions and make you world famous as a real "proven" golden ear.
Of course I should mention it's quite rare for people to be known for having exceptional skills, yet they have never proven or demonstrated their skills to others. High end audio is very unusual in that regard. Most public "experts" want to prove their skills or knowledge to others. If not, can we have confidence they are anything but self-proclaimed experts?
I do enjoy your "verbal tapdancing" to explain why blind auditions are no good ... while you participate in over 100 of them ... and brag about the few "positives".

The staff of Stereophile claims to have excellent hearing ability.
No one can prove it, or will try (at least not in "public").
Even those who do try will dismiss most of their results as "nulls" ... yet will never discuss why sighted comparisons almost never have "nulls", even the sighted warm-up auditions minutes before a blind test!
Why do all stereo components sound different?
Stereophile answer: Because we say so ... and we could prove it ... but no test is good enough ... unless we participate and have positive results ... which happened a few times, mainly for speakers and cartridges that we already knew were likely to sound different!
Very well put.
> I guess your conclusion is blind tests can't be trusted...
No, all I have said is that a blind test being blind is not in itself
a sufficient reason for the test being either rigorous or scientific.
And if it is neither rigorous nor scientific, then the test results
are meaningless.
> unless you are there to see they are done correctly ... and you have
> positive results.
I have not said this. Please note that a significant proportion of
the blind tests to which I was referring were organized by others.
I took part as a listener and was neither involved in the test
design nor the statistical analysis of the test results.
> The staff of Stereophile claims to have excellent hearing ability.
No-one on the magazine's staff has claimed this. If you think we have,
please give a reference. But you should note that the long series of
blind tests Stereophile published on speakers throughout the 1990s
allowed me to analyze my reviewers' test results individually. That
analysis certainly persuaded me that the Stereophile reviewers who
took part in those tests are careful, skilled, internally consistent
listeners.
I pay you the respect, Mr. Greene, of answering your questions, though
you don't appear to like or comprehend those answers, given your
referring to what I say as "tap dancing." Now please answer the
question I have repeatedly put to _you_: how many _formal_ blind
listening tests have you taken part in, either as listener or as the
organizer, that you are so sure of the tests' efficacy?
Almost all the tests to which I have been referring have been
published. You can read how the tests were performed and how the
results were analyzed. By contrast, you are very quick to call people
names but curiously quiet when it comes to describing your own
experience. And if you have _no_ experience of formal blind tests,
then you are actually expressing your _beliefs_, not facts, which is
curious indeed for someone who appears to be claiming the scientific
high ground. :-)
John Atkinson
Editor, Stereophile
I respond to your question marks FAR more often than you respond to mine. And I don't need a blind test to prove that.
.
.
.
> I guess your conclusion is blind tests can't be trusted...
YOU WROTE:
"No, all I have said is that a blind test being blind is not in itself
a sufficient reason for the test being either rigorous or scientific.
And if it is neither rigorous nor scientific, then the test results
are meaningless."
RG:
That applies to Stereophile test reports too -- if humans are involved, mistakes can be made. That's all your editorial says!

If YOU are a test participant and have a positive result, does that mean the blind test was "rigorous" and "scientific"?
How about your many null results?
How rigorous and scientific are Stereophile tests where the reviewers know the brand/model in use (potential bias) and can say anything they feel like saying without ever proving they can really hear a difference versus their own component?
.
.
.
> unless you are there to see they are done correctly ... and you have
> positive results.
YOU WROTE:
"I have not said this. Please note that a significant proportion of
the blind tests to which I was referring were organized by others.
I took part as a listener and was was neither involved in the test
design nor the statistical analysis of the test results."
RG:
Your editorial throws mud at blind tests without presenting any supporting data. In a prior post, I asked you what percentage of the "over 100" blind tests YOU were invited to had improper test design. I'm asking again because you did not respond to my question. If you have no supporting data, then you were merely "throwing mud" at blind tests in general, hoping some will stick.
.
.
.
> The staff of Stereophile claims to have excellent hearing ability.
YOU WROTE:
"No-one on the magazine's staff has claimed this. If you think we have, please give a reference."
RG
The magazine is based on an unproven belief that all components sound different, those differences are easily heard by the reviewers, and the reviewers are fully qualified to describe a component's sound quality in great detail.

If that's not claiming excellent hearing ability, then I'll eat my hat. Your "give a reference" challenge is an insult to the intelligence of audiophiles visiting this forum.
.
.
.
YOU WROTE:
"But you should note that the long series of blind tests Stereophile published on speakers throughout the 1990s allowed me to analyze my reviewers' test results individually. That analysis certainly persuaded me that the Stereophile reviewers who took part in those tests are careful, skilled, internally consistent
listeners."
RG:
You should do a lot more of these comparisons. I auditioned a speaker that did well in a Stereophile blind test in 1990, did well in an Audio magazine sighted test in 1994, and I later bought a pair.
I still enjoy my EPOS ES11 speakers today. Stereophile reviewers may be great listeners and very qualified to judge loudspeakers. However, there is no evidence they are qualified to judge other types of components, where differences have been MUCH more difficult to hear under controlled conditions. No evidence is required to prove they are qualified to judge wires, for one example. That is the problem.
.
.
.
YOU WROTE:
"I pay you the respect, Mr. Greene, of answering your questions, though you don't appear to like or comprehend those answers, given your referring to what I say as "tap dancing." Now please answer the
question I have repeatedly put to _you_: how many _formal_ blind
listening tests have you taken part in, either as listener or as the
organizer, that you are so sure of the tests' efficacy?"
RG
Your posts in response to me don't seem very respectful.
You ignore questions you don't feel like answering and answer some questions only with another question or a challenge! I get as much respect from you ... as Rodney Dangerfield got from his wife.

Many of my question marks have not led to any responses.
Of course you don't owe me any responses.
However I did respond to your "how many tests?" question in a May 1 post at this forum. I'm not on-line everyday and may not answer instantly.
.
.
.
YOU WROTE:
"By contrast, you are very quick to call people names but curiously quiet when it comes to describing your own experience."
RG
A "tap dancer" is someone who ignores specific questions that they don't care to answer, and throws back other questions hoping to get out of answering the original question. You are skilled at this. So are many politicians and CEOs. If "tap dancer" is the worst "name" anyone ever calls you in your life, then you should be proud.
.
.
.
YOU WROTE:
"And if you have _no_ experience of formal blind tests, then you are actually expressing your _beliefs_, not facts, which is curious indeed for someone who appears to be claiming the scientific high ground."
RG
My experience with blind tests is not relevant.
Yours is relevant because you have made it relevant by presenting blind test positives here (actually partial blind test results -- no data on the components tested - and no actual test scores) to brag about your hearing ability. "Many nulls" were mentioned once and quickly forgotten, as if they were meaningless.
I make no claims of superior hearing ability, nor do I influence people to buy specific components, as you do. Therefore I have no superior hearing ability claims to PROVE by presenting MY blind test results.

You cast doubt upon the experiment designs used by others for their blind tests in your editorial -- tests you did not observe ... while saying absolutely nothing negative about the experiment designs of "over 100" blind tests YOU participated in. How many of them suffered from poor experiment design?
Your editorial is equivalent to saying nothing more than: "If humans were involved, mistakes could have been made". But doesn't that conclusion apply just as strongly to the sighted test reports in Stereophile?
> I answered your question in my May 1 post & answered every other
> question you asked in this post.
But not until after 2 days of silence, and after I asked the question
three times over a period of almost a week! And now you're complaining
that I haven't yet answered your questions a day after you posted them?
Patience, please, Mr. Greene, I will respond fully when I have the
time to address your points. In the meantime, you mentioned that you
have taken part in a small number of blind tests and implied that they
gave null results.
As I explained to you, you cannot conclude from null results that no
difference exists, only that if one exists, it was not detected under
the specific conditions of the test. You have two competing hypotheses that
explain your results: 1) there was no real audible difference, 2) the
test was inadequately sensitive to detect a small but real difference.
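JA's second hypothesis, an insensitive test, can be illustrated with a quick statistical power sketch. The numbers here (16 trials, a 12-correct criterion, a listener who is genuinely right 70% of the time) are illustrative assumptions, not figures from the thread:

```python
from math import comb

def power(n_trials: int, k_criterion: int, p_true: float) -> float:
    """Probability that a listener who is truly correct with probability
    p_true scores at least k_criterion out of n_trials."""
    return sum(comb(n_trials, k) * p_true**k * (1 - p_true)**(n_trials - k)
               for k in range(k_criterion, n_trials + 1))

# A listener who genuinely hears a small difference 70% of the time
# still fails a 12-of-16 criterion more often than not:
print(f"Power at p_true = 0.7: {power(16, 12, 0.7):.2f}")  # ~0.45
```

With only ~45% power under these assumed numbers, a null result from such a test says little on its own, which is exactly the distinction being drawn between "no difference exists" and "no difference was detected".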
What further work did you do to investigate which of these hypotheses
was more correct? As I said, almost all the blind tests in which I
have taken part have been published, with a lot of detail given
concerning the test procedures and the statistical analysis. You need
to do the same, offer that information, if you wish to be taken
seriously. After all, you appear to be the one claiming the
scientific and moral high ground here.
John Atkinson
Editor, Stereophile
Below are direct quotations from my prior post -- these are my questions that you ignored, and I suspect you will never answer ... although you did find time to criticize ME for taking a few days to respond to YOUR questions, simply because I'm not on-line every day of my life!

These questions in my prior post were ignored by you:
"If YOU are a test participant and have a positive result, does that mean a the blind test was "rigorous" and "scientific"?"
"How about your many null results?"
"How rigorous and scientific are Stereophile tests where the reviewers know the brand/model in use (potential bias) and can say anything they feel like saying without ever proving they can really hear a difference versus their own component?"
"Your editorial throws mud at blind tests without presenting any supporting data. In a prior post, I asked you what percentage of the "over 100" blind tests YOU were invited to had improper test design. I'm asking again because you did not respond to my question."
"You cast doubt upon the experiment designs used by others for their blind tests in your editorial -- tests you did not observe ... while saying absolutely nothing negative about the experiment designs of "over 100" blind tests YOU participated in. How many of them suffered from poor experiment design?
"Your editorial is equivalent to saying nothing more than: "If humans were involved, mistakes could have been made". But doesn't that conclusion apply just as strongly to the sighted test reports in Stereophile?"
TODAY'S CONCLUSION:
You "tap dance" around my questions that you can't, or won't, answer ... yet have the nerve to complain that I took a few days to answer your question!Your editorial throwing mud at blind tests is a meaningless editorial.
You point out the very obvious fact that when humans are involved, mistakes can be made.
That message doesn't require an editorial.
You imply that you would know the difference between a good and bad test design.
But you failed to tell us about the test design errors you discovered in your "over 100" blind tests.
Were all the tests you participated in designed perfectly? (another question to ignore!)
If not, please tell us the percentage where you discovered test design errors?
That data would support your editorial conclusion that any test is subject to human error.
> Waiting for YOUR answers to MY questions does require patience ---
> Infinite Patience is required (you don't respond)
My point was, Mr. Greene, that you don't see anything wrong in taking
up to a week to respond to my questions, yet you start complaining
that I am ignoring you just _one day_ after you ask me a question.
You seem to have impatience issues, or perhaps you just like to see
others dance at your command.
As I said, I will respond when I have the necessary time.
Still waiting for you to provide the necessary details of the blind
tests you claim to have taken.
John Atkinson
Editor, Stereophile
You've just wasted the first two minutes of my infinite patience!

You did ask a question in the last line of an April 26 post.
I was not on-line April 27.
I read your post and responded on April 28.
I was not on-line April 29 and April 30.
On May 1, I responded a second time to your April 26 post after realizing I had overlooked your question in my first response on April 28. Sorry I missed your question in my first response. But let's not forget you ignore some of my questions, or respond with non-answers (words that fail to directly address the question).
The details of my blind tests would lead only to a conclusion that I should not make claims of having superior hearing ability (every component sounds different), along with others in the double-blind tests.
The limited details you provided from your blind tests suggests the same conclusion. That conclusion is not an insult, but is a description of normal human hearing ability that differs greatly from the "Audio Myth" (every component sounds different) every time it has been tested over three decades.
Other than transducers, it seems knowing the brand of electronic component or wire in use has a huge effect on a listener's ability to differentiate its sound quality from other brands. The most logical explanation of this effect is that many component differences "heard" in sighted auditions are imagined, or the result of small meaningless SPL differences. The results of your blind tests, and mine, do not refute this logical explanation of why every component is thought to sound different, even when compared with itself and no SPL differences are possible.
> The staff of Stereophile claims to have excellent hearing ability.

"No-one on the magazine's staff has claimed this. If you think we have, please give a reference."
Sorry John, but claiming to hear the differences in capacitor dielectrics in a component is one such claim.
Well at least when zealots like yourself go on about the power of bias one would have to admit you'd know a thing or two about the subject!

Whether that knowledge is conscious knowledge is another question, but you guys really aren't that interesting so ... who cares!
Just shrivel up already, ya Canuck.
> claiming to hear the differences in capacitor dielectrics in a
> component is one such claim.
That's what the statistical analysis of the results suggested, but admittedly it was only to the 95% criterion. However, that was enough
to satisfy my curiosity on the matter.
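For readers unfamiliar with the "95% criterion" mentioned here: it means the observed score had at most a 1-in-20 probability of arising by guessing. A minimal sketch, assuming a 16-trial test at 50% chance per trial (illustrative numbers, not the actual capacitor-test protocol):

```python
from math import comb

def p_value(n_trials: int, k_correct: int) -> float:
    """One-sided probability of k_correct or more out of n_trials by guessing."""
    return sum(comb(n_trials, k) for k in range(k_correct, n_trials + 1)) / 2**n_trials

# Smallest score out of 16 trials that meets the 95% criterion (p <= 0.05):
for k in range(8, 17):
    if p_value(16, k) <= 0.05:
        print(f"{k} of 16 correct meets the 95% criterion (p = {p_value(16, k):.3f})")
        break
# prints: 12 of 16 correct meets the 95% criterion (p = 0.038)
```

Note that 11 of 16 (p ≈ 0.105) would not qualify, which is why formal tests report the confidence level rather than just "passed".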
I think you and Bassnut need to take a look at the language you use.
It is unnecessarily confrontational when you use emotionally loaded
words like "boast" and "claiming to hear" when I have been quite clear
in describing the results of the tests I have taken over the years.
And I'll ask you the same question I asked Bassnut: how many
_formal_ blind tests have you participated in to be so sure of their
efficacy when it comes to detecting small but real differences? Why
object so strongly when someone like myself, who has taken part in
many such tests over the past 30 years, raises legitimate criticisms?
John Atkinson
Editor, Stereophile
I am not a magazine writer and make no claims of superior hearing. For instance, I don't claim to hear the differences between capacitors. But I have put together a no-excuses system with few components that have been through your doors. I have no Musical Fidelity, Levinson, Krell, JM Labs, Italian or other cool guy French stuff. All trusting my own ears and judgement. I'd bet my trusty old Allen Wright modified 777 Sony would "objectively" better the latest dcs disc player.

Every time an article is written, to me, that article qualifies as a claim because of the way in which they are written. With definitive descriptions.
My reason for now questioning your reviewers is that pretty much everything now sounds really good to you all. Although the more the equipment costs, the more creative the superlatives are going to be.
I have been to many shows and have heard some really crappy equipment that was claimed as excellent. One in particular. It got a rave. It was a 100 watt tube amp by Tim de Paravicini. It was like 28 K USD. It sounded like absolute crap at the Show that year. And it was not a room interaction issue. Sure enough, it had got a rave review a month or two before. I asked and was told that the very amps that I heard were the ones used in the review. I thought that they may have been damaged in shipping or something. A few months later I read that it had some kind of major design flaw in it that limited dynamics. No wonder it sounded so bad when I goosed it above 75dB in that small room. It was as though Tim got a pass for something that was fundamentally wrong. Next time just tell him what is wrong and let him fix it for Rev B. But then again that is said to be against Stereophile practice. Not that changing cables, wires, tubes, tuning the room ... to get the sound just right isn't doing the same as was common practice in the J-10 years.
> I am not a magazine writer and make no claims of superior hearing.
Neither do we claim we have superior hearing, Ozzie. We just have a
lot more experience listening under controlled conditions than the
typical audiophile.
> For instance, I don't claim to hear the differences between
> capacitors.
What tests have you taken part in, where the difference in capacitor
dielectric was the only variable? As I said, I am not "claiming" to
hear capacitor differences, merely stating, correctly, that in a blind
test of capacitors, statistical analysis of the test results indicated
that I could identify the difference to the 95% confidence level. As I
wrote, that satisfied my curiosity on the matter. Why do you have a
problem with that statement?
> What exactly is the 95% criterion?
In a formal blind test with statistical analysis of the results, it is
usual to state the confidence of those results. The normal criterion
for identification is that there be a 1 in 20 chance or less that the
test results were due to chance, ie, a 95% probability that a real
difference was detected. Some workers insist on a 1 in 100 chance that
the result be not due to chance, or an even smaller percentage.

> I have put together a no excuses system, with few components having been
> through your doors. I have no Musical Fidelity, Levinson, Krell, JM
> Labs, Italian or other cool guy French stuff. All trusting my own
> ears and judgement...
I have no problem with any of this, Ozzie. But why do you object so
strenuously to magazine writers exercising exactly the same listening
skills that you describe?
> Every time an article is written, to me, that article qualifies as a
> claim because of the way in that they are written. With definitive
> descriptions.
But with respect, that is still a very different matter from a writer
claiming to have superior hearing, as you described, or "boasting"
that he did, as Richard Bassnut Greene wrote. We simply have more
experience, is all.
John Atkinson
Editor, Stereophile
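As an aside, the "1 in 20" criterion JA describes is just the one-sided binomial tail: the chance of scoring at least that well by pure guessing. A minimal sketch (Python; the 16-trial count is an illustrative assumption, not taken from any of the tests mentioned in the thread):

```python
# Sketch of the "95% criterion": in an ABX-style blind test with n trials,
# how many correct answers are needed before the probability of that score
# arising from pure guessing drops below 1 in 20?
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided probability of scoring `correct` or better by guessing (p=0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With 16 trials, 12 correct clears the bar (p ~ 0.038); 11 does not (p ~ 0.105).
for correct in range(10, 17):
    print(correct, round(p_value(correct, 16), 4))
```

With only 10 trials the criterion works out to 9 or more correct, which is one reason very short tests are hard to pass even for a real difference.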
... while simultaneously implying that Stereophile reviewers DO have superior hearing ability.

Is it possible we continually misinterpret the 'superior hearing ability' implications and claims?
Or is it more likely that when one tries to pin JA down as to what he REALLY means, he backpedals at breathtaking speed, claiming he never 'wrote those exact words' ... but then he never clarifies what his writing implies to the readers (that Stereophile reviewers have superior hearing ability)?
Two examples:
JA WROTE:
"Neither do we claim we have superior hearing, Ozzie. We just have a
lot more experience listening under controlled conditions than the
typical audiophile."

RG Comments:
Most readers would interpret this to be a claim of superior hearing ability (the ability to differentiate among audio components while listening to music).
YOU WROTE:
"As I said, I am not "claiming" to hear capacitor differences, mere stating, correctly, that in a blind test of capacitors, statistical analysis of the test results indicated that I could identify the difference to the 95% confidence level. As I wrote, that satisfied my curiosity on the matter. Why do you have a problem with that
statement?"

RG:
Most readers would interpret this to be a boast of superior hearing ability. Those readers who have read about blind tests would know that no other audiophile in the world has claimed publicly to be able to hear cap differences under double-blind conditions, so these readers would view JA's claim as a boast of having exceptional hearing ability.
JA WROTE:
"Neither do we claim we have superior hearing, Ozzie. We just have a
lot more experience listening under controlled conditions than the
typical audiophile."

RG Comments:
Most readers would interpret this to be a claim of superior hearing ability (the ability to differentiate among audio components while listening to music).

So from this I have to conclude that the only controlled conditions are those where equipment is under test, not a person's ability to differentiate the claimed differences? Not to mention that all of this experience should make DBTs much easier for reviewers trying to identify different components under test.
BassNut, a quick and dirty way of doing a controlled DBT for them would be for two reviewers to take two components that they have recently reviewed, use the same system configuration, and see if they can tell which component is which. Even let them use their notes or reviews to help identify the differences. From what John claimed above, this should be a piece of cake.
That's because variables other than the one tested didn't enter into the results. With those tests which had a positive audible difference as the result, there was the possibility of an error in the methodology which led to that result, not differences in the products themselves, which is why peer review is invariably required for acceptance for publication in a professional journal as opposed to a hobbyist magazine. It's impossible to win a rational argument with a "golden ears." They set the standard, and what they say becomes the gospel.
Because they want to make the same unchallenged claim?

If audiophiles are so gullible, why take any test whose results could make you look bad, and disappoint your fan club?
From Harry Brown's book "Why Most Investment Strategies Fail": a guy receives a letter from a stock forecaster he doesn't know, saying a certain stock will go up tomorrow. Sure enough it does. He gets another letter the next day saying the stock will go down, and sure enough it does. He gets a total of five predictions in a row, and each one is right. At the end of the week the forecaster says, OK, I've given you five correct predictions for free; the next one will cost you $10,000. Should he pay?

The scam is easy to understand. On the first day, the forecaster sends out 16 letters, half of which say the stock will go up and half of which say it will go down. To the 8 he got right, the next day he sends letters, half saying the stock will go up the following day and the other half saying it will go down. You know the rest: one recipient out of 16 will get 5 correct predictions in a row, making it look like the guy is a genius.

What does this have to do with DBT audio tests? Just this: in a certain percentage of tests JA will be right on a significant number of trials, not because he actually could hear a difference but because of statistical probability. Taking 100 such tests, unless the number of trials in each one is enormous, there is a likelihood that in some of them he will get a significant number right simply by random chance. And what about the other participants? How many participants, how many trials? JA doesn't give you that data. It's easy to be tricked; you have to know ALL of the details of a test before you can say whether its conclusion is fair and based on actually hearing a difference rather than on random chance. It's not that statistics lie, it's just that you have to know the entire context to be able to say what they mean. In his case, they may mean nothing. We just don't know.
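The letter-scam arithmetic can be checked with a quick simulation (Python; the 16-trial/12-correct criterion is an assumption for illustration, not a figure from JA's tests): give a purely guessing listener 100 independent blind tests and count how many clear the nominal 95% bar by luck alone.

```python
# Simulate 100 blind tests taken by a listener who is purely guessing,
# and count how many reach "significance" (12+ correct of 16, the usual
# one-sided 95% criterion for 16 trials) by chance.
import random

random.seed(1)  # fixed seed so the sketch is repeatable

def run_test(trials: int = 16) -> int:
    """One blind test for a guessing listener: count of correct answers."""
    return sum(random.random() < 0.5 for _ in range(trials))

scores = [run_test() for _ in range(100)]
false_positives = sum(score >= 12 for score in scores)
print(f"{false_positives} of 100 guessing-only tests looked 'significant'")
```

On average roughly 4 of the 100 tests come out "significant" (the exact tail probability for 12-of-16 is about 3.8%), which is exactly the scam's mechanism: run enough tests and a few will look like genius.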
> It took you over 100 blind tests to come to the conclusion that
> blind tests are a waste of time?
It is vexing to have critics of my writing putting words in my mouth.
I have never said that blind tests are "a waste of time." What I have
repeatedly written is that designing a blind test where the only
variable (or variables) is/are those that the test designer wishes to
examine, where those variables, though real, are small, is not
easy. I have repeatedly written that it is difficult and
time-consuming if you are not to produce false negative results.
> Now what about all those null results you gloss over?
> Are you going to claim they are meaningless?
Again you put words in my mouth, Mr. Greene. If you had a better
understanding of statistics and experimental design than you appear
to have, you will know that a test producing a null result does not
"prove" there was no audible difference between the DUTs. All it means
is that if there was a difference, it could not be detected _under
the specific conditions of the test_.
With the null results I mentioned, either the test design was flawed
or there was not a real audible difference. Without further work,
there is no way of distinguishing between those two hypotheses.
You are very quick to question me, Mr. Greene, but less eager to
answer the questions I put to you. So again, I ask you: how many
formal blind listening tests have you taken part in as either a
subject or an organizer to be so sure of their efficacy in an audio
context? To be so sure that any criticism I might make is
intrinsically illegitimate? (I note that you have signally failed to
offer any actual rebuttal of the points I have made.)
John Atkinson
Editor, Stereophile
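JA's point that a null result may be a false negative can be put in numbers with a power calculation. A sketch (Python; the 70% per-trial accuracy and the 16-trial/12-correct criterion are assumed figures for illustration, not data from any actual test):

```python
# How often does a listener with a real but modest ability pass a short
# blind test? Binomial tail at the listener's true per-trial accuracy.
from math import comb

def power(p_hear: float, trials: int, needed: int) -> float:
    """Probability a listener with per-trial accuracy p_hear scores
    `needed` or more correct out of `trials` (i.e. passes the test)."""
    return sum(
        comb(trials, k) * p_hear**k * (1 - p_hear) ** (trials - k)
        for k in range(needed, trials + 1)
    )

# A listener who genuinely hears the difference on 70% of trials passes a
# 16-trial test (12 correct needed for 95% confidence) only ~45% of the
# time -- the other ~55% of runs are false negatives.
print(round(power(0.7, 16, 12), 3))
```

Pushing that listener's pass rate above 95% while keeping the guessing risk below 5% takes something on the order of 60 to 70 trials, which illustrates the "difficult and time-consuming" point.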
> I know you or your reviewers will never risk demonstrating to
> witnesses the claimed ability to be able to hear differences among
> all audio components ...
But as I said in the post you refer to, I have done so. Just not to
your satisfaction, apparently.
The most likely explanation for why sighted auditions virtually never have "null results," while blind auditions frequently have null results, is that many sound-quality "differences heard" in sighted auditions are merely imagined differences. No other hypothesis to explain the different results from sighted and blind auditions is logical.
Frequent null results are evidence, gathered over three decades, that consistently and strongly supports the belief that all components DO NOT sound different.
Null results don't prove a belief is wrong, any more than flipping a coin and seeing heads 100 times in a row means the coin has two heads.
But as the decades go by without proof that all components sound different, it becomes more likely that the belief will never graduate into a fact.
While it is almost impossible to prove ANY belief is wrong ... it also seems quite impossible to prove this belief is correct!
Your editorial implying that blind tests may have experiment design errors, with no supporting data on how many design errors YOU discovered in the "over 100" blind tests YOU participated in, suggests you are merely throwing mud at blind tests ('humans can make errors') hoping some mud will stick.
An objective editorial?
No.
Evidence of your "agenda"?
... that's where the difference between the scientific approach to getting at the truth and the evidence of testimonial endorsements matters most: when it is a matter of life and death, nothing less than double-blind tests are accepted. Once upon a time, the medicine show came to town and guys would get up in front of an audience and proclaim the cures their tonics offered for everything from baldness to rheumatism, cancer to kidney stones. Lots of people died because the cures didn't work; sometimes the cures were even dangerous by themselves. In audio equipment, all that's at stake is money.

The FTC hasn't gotten involved because high-end audio equipment is still far below their radar screen and will probably remain there, much to the relief of those who derive a profit from it one way or another. What's most unusual and surprising to me is that those who buy most of this equipment, whom you'd think would demand much better information, better measurements which correlate to subjective performance, and objective proof that what they are considering purchasing actually does sound better, seem the most adamant about not forcing or even urging manufacturers and reviewers to conduct such tests.

Personally, I sit on the sidelines and just gawk in awe at such willingness to constantly swap, trade, upgrade, and change, each time losing money in search of a nebulous goal which cannot be achieved. Absolutely fascinating, this obsession some people have. It takes all kinds. I don't applaud your role in all this, but I recognize it as an inevitable factor. If you don't do it, someone else will.
"But my results also have shown that I could distinguish ... d) a solid-state amp from a tube amp"

Well, you demonstrated you could distinguish a particular amplifier "A" from another particular amplifier "B," and one happened to be solid state and the other tube. I did that myself at someone's home recently, and of the 20 or 30 people present, I don't think there was anyone who didn't hear a difference, and there was general agreement on what that difference was. And what was it? The tube amplifier, apparently a Chinese clone of or similar to a Dynaco Stereo 70, had considerably elevated and, to most listeners, colored response in the upper midrange and lower treble compared to a digital solid-state amplifier from Italy which seemed completely neutral. But of course, these were sighted comparisons; who knows what would have resulted from a DBT.
"a) absolute polarity, b) different capacitor dielectrics, c) an
interconnect with the ground-shield connection made at one end from
the other, d) a solid-state amp from a tube amp, e) many different
phono cartridges, f) many different speakers, including correctly
identifying a speaker model under blind conditions from my previous
experience of it. And so on."

You said it, now document it at the Show. I'll spot you d, e and f, and probably c, provided there is noise involved. a??? But b, that I'd like to see you prove conclusively. Heck, I'd pay to see that.
> You said it, now document it at the Show.
Why do I have to? Almost all the tests I mentioned have been
published in various magazines. Why do I have to go to all the
expense and inconvenience _again_?
> I'll spot you d, e and f and probably c, provided there is noise
> involved.
Very generous of you.
> a??? But b, that I'd like to see you prove conclusively. Heck, I'd
> pay to see that.
So, if you have the money organize some tests yourself. Get yourself
some listeners, design a test where the only variable under test is
what you think it is -- as I have written, that is _not_ trivial --
and report on the result. Me, I have _done_ all that work. Why do I
have to do it again? Surely not just to satisfy your idle curiosity?
John Atkinson
Editor, Stereophile
. . . it's been found to be audible in blind tests, repeatedly. Not every audio system reveals it on music, particularly those whose speakers have drivers wired out-of-phase (as some crossover topologies require) and/or steep crossover slopes. IMS it's plainly audible.