Audio Asylum Thread Printer
James Randi has sent emails to 11 reviewers he found on the Shakti stones and Belt websites who endorse these products, as well as to the manufacturers, inviting them to claim his $1 million prize if they can prove their claims. David Robinson is one, as is Clark Johnson. I doubt any will respond.
Follow Ups:
As a university professor who spends most of his life conducting empirical research, I find this entire thread rather frustrating. Folks bash certain tests or refuse to even consider others, without any mention of the statistical properties of those tests. No statistical test is perfect, but statisticians can establish their properties and use these to guide their use. It strikes me that we ought to establish a few fundamental properties of any proposed test of audio equipment. What follows is not meant to be definitive, but merely suggestive of the kinds of properties that can be established. I also try to be honest about the limitations. (A small worked sketch of the two rates discussed below follows this post.)

1) What is the false positive rate of the test? Using a single component (but not informing the subject of this), how often does the subject detect "differences"? This should be easy to compute for many tests and ought to be reported when possible. Of course, it may be difficult to compute for some tests (e.g. sighted testing), but not necessarily impossible if one uses some ingenuity. (E.g., place the same amplifier in different "shells" so that the subject believes they are different.) I would be highly suspicious of any test for which I could not statistically determine the false positive rate, as I would not know what to make of any reported findings. (BTW, the very foundation of hypothesis testing, the significance level, is intuitively closely related to the false positive rate.)
2) What is the false negative rate of the test? This is thornier to measure, as we need to expose subjects to sounds that are objectively different. It is certainly possible to do this. Take a sound signal and alter it by a measurable amount (perhaps pass it through an equalizer -- this is just one idea and I am sure that forum members can come up with better ones). Then ask subjects if they can distinguish between the "original" and "altered" signals. If the test has a high false negative rate (no difference observed), then either the test is flawed or the difference in sound is too subtle to be observed.
One flaw with this approach is that the alterations may fail to capture the kinds of differences allegedly created by audio products. But it offers the huge advantage of being measurable. We could use this measurement to compare the validity of different tests. Tests that produce high false negative rates (perhaps including ABX?) should be avoided. The alteration can also be used as a benchmark of its own -- just how much does a signal have to be altered for subjects to detect a difference?
Finally, as others have pointed out, it is crucial to report various points in the test distribution. What was the performance of the test for the average listener? What about a listener at the 95th percentile? This could help us determine whether a particular test was useful for comparing the performance of equipment as perceived by the average listener or only as perceived by golden ears.
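To make the two rates above concrete, here is a minimal sketch (my own illustration; the trial count and the 12-of-16 pass criterion are made-up assumptions, not figures from this thread) of how both rates can be computed for a simple k-of-n forced-choice criterion:

```python
from math import comb

def false_positive_rate(n_trials, k_required, p_guess=0.5):
    """Chance that a subject who hears no difference (guessing with
    probability p_guess per trial) still scores k_required or more."""
    return sum(comb(n_trials, k) * p_guess**k * (1 - p_guess)**(n_trials - k)
               for k in range(k_required, n_trials + 1))

def false_negative_rate(n_trials, k_required, p_detect):
    """Chance that a subject who really hears the alteration with
    per-trial probability p_detect still falls short of k_required,
    so the test reports 'no difference' despite a real one."""
    return 1.0 - false_positive_rate(n_trials, k_required, p_detect)

# Requiring 12 of 16 correct keeps the false positive rate near 4%,
# but a listener who is right 70% of the time still "fails" more than
# half the time -- a very high false negative rate.
print(false_positive_rate(16, 12))        # ~0.038
print(false_negative_rate(16, 12, 0.70))  # ~0.55
```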
?
Warmest,
Timbo in Oz
The Skyptical Mensurer and Audio Scrounger
Read about and view system at:
about this thread: the point that's been successfully obfuscated by people with a vested belief system at stake is that it started with "show me." You said you discern a big difference when the stones are nearby - show me.
Don't show me that the "average" person recognizes a big difference; don't show me that there's a statistical probability that a big difference is discernible by the general or audio-obsessed populations.
*You* (the reviewers in question) said you detect a big difference - show me.
My concerns remain. Whatever test you design for one individual may be subject to false positives and false negatives. Without knowing the underlying properties of that test, we cannot determine whether that individual has "shown you" anything.

(Worst-case scenario: we design a test that, for that individual, has a high false positive rate, only we don't know it. That individual reports a "positive" result -- detects a difference. Is that a real difference induced by the stones, or merely a testing artifact?)
False positives or not, reliably "detecting" a difference gets the $1M. Only detecting a difference matters.
I assumed a success criterion - the detection of a difference, as opposed to the detection of the presence of the actual stones.
More complicated than sexual pleasure...
Due to the inherent difficulty of the ABX method of testing and the statistical analysis of the results gained therefrom, I propose the following test:

The reviewer is placed in a locked room; facing him are three doors. Behind one door is his favorite system with all his favorite tweaks, fully approved by the listener before the test can proceed. Behind the other door is a generic system. Behind the third door is a ravenous tiger. The generic system is a known; that is, the button that selects it is a given in the test.
The tiger's growl is played, one at a time, through all three doors (never mind the logistics of moving the tiger in and out of "his" door area). At some point of his choosing, the reviewer must state which door has his system. That door will then be opened. No statistical analysis will be required, no judgements of taste will be accepted. He will have proven what he can hear by the results; he will either be looking at his system, or be eaten by the tiger (presuming that tigers don't gag on reviewers). ;)
The survivors, if any, will win the prize. Any surviving reviewer, even if merely lucky, will be given considerably greater credence by me.
Note: I have in fact bought the Shakti Onlines; they never worked for me. But I don't believe they're a complete scam, and I genuinely believe they may work for some.
Seeing how a million dollars would be enough for most of these people to retire on, I don't think that it would be unreasonable for them to take the challenge if they are reasonably confident in their abilities.
For example, wine reviewers do double-blind tests constantly. And they prove their special skill by producing consistent and reliable results. As a matter of fact, most wine reviews are made under blind test conditions. That's why they have credibility and respect with the general public.

Gee, wine reviewers would jump at the chance to win $1M to prove their talents using standard test methods. They certainly wouldn't give all those lame audiophile excuses (like those below) for why they can't be tested.
"And they prove their special skill by producing consistent and reliable results. As a matter fact, most wine reviews are mode under blind test conditions. That's why they have credibility and respect with the general public."Actually, they don't prove much at all with their blind taste tests, only that they do it blind. They are not doing scientific double-blind taste identifications in an ABX comparison, for example. They don't do it repeatedly and apply statistics to their results to prove their blind tasting is in fact correctly identifying anything. All they do is a single taste or a couple of tastes and make a rating on the wine. They could still be quacks for all we know about the results. If you think they can identify vineyards and years just with a taste, you're believing in a myth as shown in TV or the movies.
If a sommelier can tell the type of grape, the year, and the chateau that produced a given wine in a blind test, why can't reviewers do the same between two products like cables? If these so-called "golden ears" were as good as they think they are, they'd step up to the plate and let 'er rip!
Professional taste testers (not just wine) have years of experience and training working under test conditions. And in most taste tests, the testers are given an appropriate selection of other foods to "calibrate" their palate before they get to taste the food under consideration. The test procedures are repeated and revised over time to increase the sensitivity of the test. Contrast that with the audio world, where blind tests are conducted using people who have no test experience or training and don't include any controls or calibration standards. The tests are one-off, no attempt is made to ensure repeatability of prior results, and the people conducting the tests aren't interested in measuring and improving the sensitivity of the tests.
All the talk about what is subtle and subjective and ABX and so on is irrelevant to what Randi is asking. He is simply asking a group of people who claim to be professionals (they make their living on what they write for audio) to prove that they can do what they say, or imply, they can do; i.e., can they actually hear what they say they can. That's it. It is not a challenge to the rest of us audiophiles; most of us never claim to be able to hear all these differences, due to various devices, in any environment other than our own systems.

It is simply a challenge to "professionals" to prove that they have the objective qualifications to be such.
Give Me Ambiguity or Give Me Something Else!
none of the PFO writers make their living from writing for PFO - we all have other careers that pay our bills. So none of our writers are "paid professional reviewers," though they do receive a nominal payment for their contributions.
Dave Clark
And as I point out in a post below, these reviewers would be foolish indeed to accept a challenge using the classic ABX methodology, due to its highly limited sensitivity.
every imaginable type of blind listening. I suggested non-DBT, non-switching-box, extended (no time constraints whatsoever other than the review deadline) periods of blind comparisons over a period of days, weeks or months, in familiar surroundings (a reviewer's own listening room), with familiar ancillaries, using a wide variety of music of the reviewer's choice, with matched but varied levels, and with the reviewers being aware OR unaware of the names of the products under test. Rejected, as was every single other suggestion for any type of blind comparison whatsoever. Additionally, despite the claims made in their ads (sometimes reinforced in reviews), I have never heard a hi-end cable, component or speaker manufacturer ask for blind comparisons in reviews of their products, and I suspect they would object too, no matter what the methodology is.
First, discussion of DBT (or ABX) is not allowed on the Cable Asylum, for the reasons given at:
http://www.audioasylum.com/audio/dbt.html

Second, as I have stated many, many times, I am personally NOT against DBT's, or ABX, or any valid scientific method of conducting a listening test. I have personally participated in and conducted many a DBT, written an AES paper on the subject, and posted at many different sites about how one goes about conducting a valid listening test. I have posted the URLs for the posts I have done on the subject; look for my earlier posts in this thread.
What I am against is the use of a handful of amateur and highly flawed classic ABX-style listening tests as if they were valid scientific evidence. Many naysayers cite the very small handful of popular-press articles published on sonic differences in audio components as if they were solid-gold factual science, or bring up web anecdotes about listening tests that had null results as if they were somehow absolute proof of a negative.
There is a lot of myth and BS floating about with regard to DBT's, and unfortunately, most of it is being promoted by the naysayers out there.
As for your laundry list of listening test variations, you need to realize that ANY sort of TESTING scenario is, by definition, an artificial one, and NOT the same as listening to music for enjoyment.
Any test situation automatically creates an unusual and abnormal set of listening conditions, and it is these kinds of problems that make it difficult, if not entirely impossible, to use, without qualification, any sort of formal or controlled listening test method to actually arrive at some valid data.

AA on the whole is not anti-DBT, but I do think that most folks who post here are much more willing to listen for themselves and not automatically dismiss what they hear as some sort of delusion.
That seems to stick in the craw of some folks pretty bad, and that is just tough.
The question is, are 'most folks who post here' willing to entertain the likelihood that what they hear when they 'listen for themselves' under sighted conditions might well be a delusion? Because that is clearly what the science says: sighted comparison is subject to several sorts of bias. If they are unwilling to entertain that as a significant possibility -- if they simply refuse to believe it -- tough. They're wrong.

Listening for enjoyment is still listening. If the 'abnormal conditions' argument is cogent, then it is cogent for ALL scientific observations. Do you really want to go that far? Or will you admit that it's just special pleading for '(sighted) listening for enjoyment' as a means for determining audible difference -- a demonstrably flawed method of drawing conclusions about the real world from perceptions?

Your claim that most of the BS about DBTs is being promulgated by naysayers is only true if you mean the *naysayers* about DBT... the ones who simply refuse to accept its utility. My experience is that *they* are posting the vast majority of inaccurate claims about DBT.

As for the 'very small handful of popular press articles' using what you insist are flawed DBTs -- contrast them to the vast ocean of popular press reviews that are based entirely on sighted comparisons. Which is the greater influence on audiophilia, and which is in most pressing need of redress?
[ Because that is clearly what the science says: sighted comparison is subject to several sorts of bias. If they are unwilling to entertain that as a significant possibility -- if they simply refuse to believe it -- tough. They're wrong. ]

Just because something is "subject to several sorts of bias" does not mean that all of these sorts of bias are present, or manifesting to a significant enough degree to be a real problem. The potential for a bias is there, but it is not mandated or a certainty. It is as if the DBT maniacs want to assume that ALL sighted listening is automatically and completely invalid, and that ANY DBT magically becomes valid (despite the blindness not having a thing to do with how well the listening test was actually done).
I think that it is entirely possible for someone to gain enough experience, and have enough exposure to high performance audio, that they can learn to listen past most of their biases and prejudices, and not fool themselves all the time.
There are several situations where folks are not going to be under the typical influence of a positive bias or set of biases:
1. Where they are expecting the worst, that is, when they expect A to sound better than B, but the reverse occurs.
None of the super DBT proponents has ever explained this one away satisfactorily, because if it were ONLY bias or placebo at work, then they would have heard what they expected to hear.
2. When they aren't expecting anything, such as a sonic revelation. A good friend has upgraded component Z, and they were not aware that this had occurred.
3. When they experience an epiphany, such as when they suddenly hear something in a very familiar piece of music, something that they had never noticed before, and now it is plainly heard. This usually has to do with a significant step up in the resolving power of a component or due to a tweak. No expectations, no clue that this previously unheard musical sound or signal even existed; thus, no possibility of a bias or placebo effect being the cause.
This one has never been satisfactorily explained away either, despite some serious efforts to do so.
4. Audio professionals who listen for a living, some nearly every day of the year. This would include, but not be limited to: sound recording engineers, mixing engineers, producers, musicians, equipment designers, mastering engineers, etc.
These folks have amassed a huge amount of exposure and experience, and if they are successful at what they do, then they have learned to listen with enough disconnect from their own wishes and feelings to more nearly hear what is actually occurring. They HAVE to, in order to do a good job.

Yes, you can fool yourself, but given some time and experience, I think that many audiophiles and music lovers can learn to hear what is actually occurring, rather than what they WANT to hear, at least most of the time.
I think it is quite telling that most hard-core naysayers are adamant about not trusting their own hearing at all; they are completely adrift in the sea of audio component quality, with only a handful of old, tired measurements to go by, unable to draw any aid from their own hearing because they distrust it so. The problem is, it is not enough for them to be adrift; they insist that everyone else should follow them into the dark too.
[ If the 'abnormal conditions' argument is cogent, then it is cogent for ALL scientific observations. Do you really want to go that far? ]
Yes, this is true, and thus ANY listening test where the subject is asked to make a judgement, but especially one that has a forced-choice protocol, has this same problem.
[ Your claim that most of the BS about DBTs is being promulgated by naysayers is only true if you mean the *naysayers* about DBT... the ones who simply refuse to accept its utility. My experience is that *they* are posting the vast majority of inaccurate claims about DBT. ]
"refuse to accept it's utility". And what utility would that be? It certainly can not be said that any of the flawed popular press articles on audio component DBTs is of any utility to anyone. So what exactly are you refering to?
What inaccurate claims about DBT are being made? You haven't specified anything; what exactly are these claims as you see them?

[ As for the 'very small handful of popular press articles' using what you insist are flawed DBT -- contrast them to the vast ocean of popular press reviews that are based entirely on sighted comparisons. Which is the greater influence on audiophilia, and which is in most pressing need of redress? ]
Even if we concede that such popular-press articles using DBT have their heart in the right place, so what? They still got a significant portion of the science wrong! Good intentions are not enough.
For specifics on those flaws, see:
http://www.audioasylum.com/forums/prophead/messages/2190.html
http://www.audioasylum.com/forums/prophead/messages/2579.html
http://www.audioasylum.com/forums/prophead/messages/2580.html

Tell me why the flaws I call out, which apply to MOST, if not all, of the popular-press DBT's that have been published, are not relevant or are not actually flaws. Otherwise, tell me why we should accept these flawed tests as some sort of evidence, when they came up with null results and did not have ANY metrics in place to gauge the sensitivity of the tests. Remember, without those metrics having been run, a null result has absolutely no other meaning, and is certainly NOT equivalent to a negative.
I believe it should be possible, through extensive collaboration and peer review, to develop test methods, equipment, training materials, and standards whose reliability, sensitivity, and bias are well characterized and whose results can be widely accepted. But I don't think there are enough people with sufficient interest, time, and money to make it happen.

The people on one side of the debate aren't interested in measuring and characterizing the reliability or sensitivity of listening tests, let alone improving them. That's because they have already reached their conclusions and are only looking for a null result - which an unreliable or insensitive test can always provide. The people on the other side of the debate believe that most listening tests are traps rigged against them, and in the case of reviewers, subjecting themselves to listening tests involves only risk and no gain.
I don't think there is any hope of convincing either side that there is a middle ground. If you want blind testing to be included in reviews, the best thing you can do is support the people who do that. If enough people do, then eventually rags like Stereophile and TAS will have an incentive to change their review practices.
Methods and standards for controlled listening comparison for audio components *have* been formalized, by the AES.
The issue that started all this was that a handful of people claimed to be able to tell if some rocks were in the vicinity of their stereo equipment.
to be made is that the test involved mutually agreed-upon tests. As far as I saw, it did not specify an ABX test.
The first was the test arrangements for proof of paranormal/psychic powers, and that does rely on mutually agreed tests.

Another post quoted the letter sent to the audio reviewers, and it specified an ABX test.
I assume the reason for the difference is that the reviewers are not claiming paranormal powers.
And that means Jon's concern is valid, since there seem to be reasonable grounds for having reservations about the use of ABX tests to show the existence of audible differences.
My big concern about Randi's approach is that it's a showman approach and it seems intended to set people up for ridicule. Even the wording of the invitation amounts to subtle or less than subtle ridicule of the reviewers.
I don't get the feeling that Randi is interested in establishing whether or not certain paranormal powers exist or whether reviewers actually hear what they claim, but rather that Randi is interested in demolishing certain sorts of claims, and that his arrangements are intended to do just that. It would be surprising if they weren't, since he stands to lose a million dollars if somebody delivers, so obviously it's in his interest to make it as hard as possible for people to deliver. That implies that the testing arrangements will always have a bias in Randi's favour wherever possible, and it's his prerogative to make it as tough as possible if he's going to pay a million for a positive result.
On the other hand, I can see why a lot of reviewers who honestly report their experiences during a review would also quite legitimately avoid participating in a process which a number of statisticians have stated is biased against them, and which is being conducted by a person who makes a name for himself by publicly ridiculing others.
Of course he isn't interested in whether or not paranormal powers exist. He knows they don't. And he isn't out to ridicule people. But, like crooked faith healers, remote viewers, people who talk to the dead, and dowsers - if the shoe fits.....
See the "Rescuer."
if the ABX was specified with no input from the reviewers allowed. However, if they were permitted to specify the switch or cabling they "believed in" as being neutral enough for testing, so that they felt satisfied with the accuracy of the test, it would be fair. And/or if the test were modified to be an A/B/C test, with B being a known unit. I never did like the idea of the ABX type of test.

However, I don't see any reviewer doing this, since they have little to gain (no way in hell can they prove their abilities) and, like you said, they have much to lose in the way of reputation and, perhaps, credibility.
I do believe, however, that if enough reviewers or golden ears got together and showed that they were serious about testing their "abilities," they could negotiate a test more likely to have a chance of success. Randi's proposal is merely the opening shot; he will negotiate, as he did for the dowsing test he did in Europe.
Give Me Ambiguity or Give Me Something Else!
...to the idea that the methodology itself is flawed? Most of the people arguing against ABX are not doing so because they feel the switch box or other gear doesn't meet their standard of fidelity. To think this is the problem is missing the point by miles.
I did suggest something I like better: the comparison to a known third unit. So that the brain doesn't have to guess what the heck is going on, there is an agreed-upon "reference unit" that the testee can switch to at will, clearly labeled on the switch... the other two units being the one with, and the one without, whatever it is one is testing. I think this would be a rather sensitive test, since you do have a reference to lock in on, and aren't putting the brain through a totally random experience.
I thought both A and B *were* known in an ABX test. It's the X that is unknown.
the switch brought in the amps (or tested devices) randomly, so you didn't know which was which. I can't imagine a human doing well on that test. So even if two of the amps were known, the random switching would confuse anyone. I think there are many other protocols, but this is the one I remember.
Give Me Ambiguity or Give Me Something Else!
So how would you prove it?
Magnetar
They don't have to prove it. They are acknowledged experts. What more proof is needed?
we're not talking about components, wire or speakers, but Shakti stones and stuff from Belt, like writing OK on your CDs so they will sound better. This is the crap these people should prove.
Shakti stones, Mpingo dots or any highly resonant material placed on speaker cabinets act as secondary sound sources. I have heard the effect with various speakers and treatment devices. You might find the additional acoustic output beneficial to the overall sound of your system, or not. Currently I do not use any such devices.

As for Peter Belt, many of his tweaks function like compensation networks, adding capacitance, shunts, ground loops, RFI and noise. You may or may not like the effect on the sound of your equipment. I have seen one preamp which went into oscillation after Belt's mods were applied, understandable given their nature.

So, no need for James Randi, no paranormal occurrences, just physics. Sorry about that.
What are you wasting time around here for?
I always seem to be making bad career choices and the "wrong kind" of friends.
My kind of guys.
So Mr. Shakti comes back from the beach with a bucket of pebbles, cleans them up and sells jars to gullible audiophiles for $100 each. Wow... that's easier than printing money!

Interesting how quickly faults are found in the $1M test rather than questioning the unsubstantiated claims of those magic beach pebbles. Bravo, James Randi!
Magnetar
Perhaps Mr. Randi has been sold a bill of goods with regard to ABX.

The classic ABX listening test using the ABX switchbox has so many problems and flaws that anyone with half a brain would be a fool to attempt to participate in such a "test". Given all the problems with the classic ABX switchbox-based listening tests, the listening test sensitivity is SO greatly reduced that the effects of the more subtle audio tweaks and components would be swamped out.
For instance, people who have been involved in such ABX tests have found difficulty in hearing differences between a cheap Pioneer receiver and a Krell amplifier. Is there ANYONE who thinks that they actually sound the same or perform the same?
There are numerous problems with the classic ABX methodology as practiced by the switchbox advocates, some of which I outline in these posts:
http://www.audioasylum.com/forums/prophead/messages/2190.html
http://www.audioasylum.com/forums/prophead/messages/2579.html
http://www.audioasylum.com/forums/prophead/messages/2580.html

The primary strength of the classic ABX switchbox-based listening tests is centered on amplitude difference detection with relatively steady test signals. The only 'published' data for this, the classic ABX switchbox type listening test's strong point, indicates that it may be capable of detecting a 0.3 dB broadband level difference. The data is questionable, because it was never formally published in a peer-reviewed professional journal.
Now for other more sophisticated types of blind listening tests, folks have been able to get down to 0.1 dB broadband level differences. Right off the bat, the ABX test's strong suit is only one-third as sensitive as other types of listening tests (which also have their own problems and lack-of-sensitivity issues). It is highly likely that the classic ABX test is actually LESS than one-third as sensitive as actual listening experiences without the pressures and rigors of blind testing involved in some form or fashion.

What the classic ABX switchbox type listening tests are weakest at are signals that do not maintain a steady level and have lots of activity in the time domain, that is, actual music.
There is very little data on how sensitive the classic ABX switchbox type tests are to: distortion of all types, including THD, HD, IM, multitones, envelope distortion, transient distortion, low level distortion, etc., WITH MUSIC. There is virtually no peer-reviewed professional journal data on how sensitive it is to detecting very subtle sonic differences with actual music as the test signal, the few that have been done and 'published' on the internet or via chat board posts, show an abysmal capability to detect significant levels of distortion and low level aspects of music playback.
Given that the classic ABX-with-switchbox style listening tests are so insensitive to subtle audio details, it would be sheer folly to try to use that method to detect audio tweaks or component performance.
Folks always seem to forget that unless you have the results of a huge number of separate listening test trials, using a test methodology that has been adjudged for its inherent sensitivity level (never done with the ABX tests often cited as some sort of 'proof' of this or that), you really cannot draw any other conclusions from a single test which comes up with a null result (failure to achieve a statistically significant positive result). Folks often equate such a null result with a negative, but this is not a truly scientific way of looking at the results.
Of course, Mr. Randi is looking for a statistically significant positive result as the basis for claiming his 'prize', but it is implicit in the scientific process that if someone were to actually participate in such a single listening test that had no metric for sensitivity established, a failure to achieve a statistically significant positive result has no other meaning at all, especially not that of "there were no differences".
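As a rough illustration of the sensitivity point being made here (my own back-of-the-envelope sketch; the numbers are assumptions, not data from any published ABX trial), one can compute how many forced-choice trials a single test would need before a null result says much about a listener who genuinely hears a subtle difference only part of the time:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def trials_needed(p_detect, alpha=0.05, power=0.80, max_n=500):
    """Smallest number of trials for which some pass criterion keeps the
    false positive rate at or below alpha while detecting a listener
    whose true per-trial hit rate is p_detect with the requested power."""
    for n in range(2, max_n + 1):
        # smallest passing score that a pure guesser reaches with prob. <= alpha
        k = next((k for k in range(n + 1) if binom_tail(n, k, 0.5) <= alpha), None)
        if k is not None and binom_tail(n, k, p_detect) >= power:
            return n, k
    return None

# A subtle difference heard correctly only 60% of the time needs well over
# a hundred trials before a null result is strong evidence of anything;
# a typical 16-trial session cannot distinguish "no difference" from
# "difference too subtle for this protocol".
print(trials_needed(0.60))
```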
Magnetar
Jon, I so admire your patience dealing with DBT's, their doubters and proponents.

I agree about how difficult it is to set up a DBT that does not yield null results. I was present years ago at a large public DBT with the ABX box which randomly compared 10 ft of 26-gauge speaker wire with 10 ft of Monster Cable. One or two astute listeners (one female) made 16 of 16 correct identifications. I thought I could identify each wire correctly all of the time but became confused during the test and did not participate in all the trial runs.
Here's a suggestion for checking your DBT's sensitivity: don't bother with unknowns until you determine which differences that do indeed exist can be reliably identified. I recommend known but subtle differences such as mono vs. stereo music reproduction, 50Hz-5kHz vs. 20Hz-20kHz bandwidth, correct relative polarity vs. inverted, 0.1% THD vs. 5% THD, and the like. If your test gives null results here, it will not reliably identify speaker wires, ICs, 0.1 dB level differences, Krell vs. Pioneer and so on.
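One way to operationalize that suggestion (a sketch only; the control list paraphrases the post above, while run_trials, the 16-trial count and the 12-correct criterion are placeholders I have invented for illustration):

```python
# Known, real differences to use as positive controls before testing unknowns.
KNOWN_CONTROLS = [
    "mono vs. stereo",
    "50 Hz-5 kHz vs. 20 Hz-20 kHz bandwidth",
    "correct vs. inverted polarity",
    "0.1% THD vs. 5% THD",
]

def setup_is_sensitive(run_trials, n_trials=16, k_required=12):
    """run_trials(control, n) -> number of correct identifications,
    administered however the blind protocol dictates.  The rig and panel
    are judged sensitive enough only if every known difference is
    reliably detected; otherwise null results on wires, ICs, etc.
    say nothing."""
    for control in KNOWN_CONTROLS:
        correct = run_trials(control, n_trials)
        if correct < k_required:
            print(f"Calibration failed on {control}: {correct}/{n_trials}")
            return False
    return True
```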
"I thought I could identify each wire correctly all of the time but became confused during the test and did not participate in all the trial runs."RG
What could I add to that?
.
.
.
"I recommend known but subtle differences such as mono vs. stereo music reproduction, 50Hz-5kHz vs 20Hz-20kHz bandwidth, correct relative polarity vs inverted, 0.1% THD vs 5%THD, and the like. If your test gives null results here it will not reliably identify speaker wires, IC's, .1dB level differences, Krell vs Pioneer and so on."RG:
Your logic is completely flawed.
You assume a test is reliable for speaker wires, ICs, etc. if it proves reliable for several other comparisons. That makes no sense.
In fact, the only way to estimate whether people hear differences among wires and ICs is to ask them. This is (or should be) done during a sighted warm-up audition -- if some listeners claim to hear differences among wires under sighted conditions, or are not sure,
only then should they be asked to listen under blind conditions.

People who hear no differences during the warm-up sighted audition should not be asked to participate in the following blind audition.
The goal is to assume people know what they hear under sighted conditions (what other choice do you have?) ... but then try to verify their claims under single or double-blind conditions that replicate the sighted audition methodology as closely as possible.
The ABX methodology is not mandatory at all.

Even then, the "test" only applies to the participants involved with the audio hardware and software used on that day. Results can never be extrapolated to audiophiles-in-general, with one exception:
-- Participants may report that some or many of them claimed to hear differences under sighted conditions ... that were not audible under blind conditions only minutes later. That, in fact, is exactly what I witnessed first-hand while participating in many double and
single-blind auditions in the past 10-15 years.
Most blind and double-blind tests yield null results, and not just in comparisons of audio gear. For example, Pepsi is 3 calories sweeter than Coke per 12 oz serving, yet no blind test has ever reliably revealed a difference between them in taste. For many years Pepsi depended on consumers' inability to identify this characteristic of their beverage, instead falsely advertising it as "the light refreshment".

I think it would be worthwhile to investigate the generally high incidence of null results in blind testing, rather than point to it as some kind of standard, regardless of what is being tested.
I repeat, if a DBT cannot reliably identify differences that do exist, how can it be expected to give useful information about unknowns?
"I repeat, if a DBT cannot reliably identify differences that do exist, how can it be expected to give useful information about unknowns?"RG
If you compare mono vs. stereo using ABX methodology, and a participant can't hear a difference, then how do you know whether the participant or the test methodology is at fault?

We can't attach probes to an audiophile's brain to prove that what he claims to hear is real, and not imagined, so we assume he is reporting real differences during a sighted audition.
Then we observe, minutes later under double-blind conditions, that he can't prove his ability to hear the differences when the identities of the two components are hidden. You seem to be biased toward blaming the ABX methodology, when once again we don't know whether the participant or the test methodology is the cause of this.

One possible explanation of why differences are so much more common under sighted conditions is that hearing false differences is common among audiophiles.
The results of "deception tests" where participants listen to one component and are tricked into believing that there is really and A and B component support my hypothesis (about 3 or of 4 participants say they prefer "A" or "B" when they are exactly the same component and no real audible difference is possible -- once again evidence of bias toward imagined differences among components).
"One possible explanation of why differences are so much more common under sighted conditions is that hearing false differences is common among audiophiles"I think that this is the most likely explanation. I don't think that it's delusionary or the like, I think that it's just human nature. The placebo effect is very real and easily reproducible time and time again.
Another factor is that, compared to many other mammals, our hearing just isn't that great. We don't have these same kinds of arguments when debating the picture quality of video. The differences are obvious - this is partially because our sense of vision is much more acute than our hearing. When performing A/B comparisons on these technologies, the differences jump out at you.
Personally, I also think that mental factors play a large part in our perception of music. I typically leave my gear on. When I start listening to the music, it takes me about 10 minutes before I "settle in" and the music "comes to life". I realize that the equipment is outputting the same signal the entire time and the difference in perception is mental.
I think that other people experience similar effects when they talk about music sounding better at different times of the day. I also think that one's mental state has a profound effect on how good their equipment sounds at any one moment.
[ Personally, I also think that mental factors play a large part in our perception of music. I typically leave my gear on. When I start listening to the music, it takes me about 10 minutes before I "settle in" and the music "comes to life". I realize that the equipment is outputting the same signal the entire time and the difference in perception is mental.

I think that other people experience similar effects when they talk about music sounding better at different times of the day. I also think that one's mental state has a profound effect on how good their equipment sounds at any one moment. ]
This is one reason why professional food testers work in a tightly controlled environment and why they warm up and calibrate their senses using a variety of standard samples before beginning the test.
This is also a reason why conducting a "simple" double blind listening test without rigor or controls produces meaningless results.
Dave
[ If you compare mono vs. stereo using ABX methodology, and a participant can't hear a difference, then how do you know whether the participant or the test methodology is at fault? ]
If you only test a single individual on a single criterion, you don't know and you can't know. Your test has to have controls, otherwise you can't conclude anything. Ideally you want controls in the form of individuals whose abilities are known based on the results of other tests, and you also want controls in the form of test cases/criteria that produce known outcomes in the population represented by your sample. Otherwise, you're not really testing, you're just playing games.
[ We can't attach probes to an audiophile's brain to prove that what he claims to hear is real, and not imagined, so we assume he is reporting real differences during a sighted audition. ]
That assumption is not valid for obvious reasons. You can't use sighted tests to establish the suitability of the test setup or the ability of the participants. If you're willing to trust the outcome of sighted listening tests then why would you be trying to conduct an ABX test in the first place?
As I said, you need to have controls and/or calibration standards in your experiment otherwise you have no basis from which to draw conclusions. We don't use a piece of test equipment to measure something unless we know it's been calibrated, we know it's suitable for this type of measurement, and we know it has enough range/sensitivity to cover our expected result.
Brian's suggestion is to start testing with gross changes and work your way down to more subtle changes until you start getting null results. By doing that, you establish a lower bound on the sensitivity of your test and its participants. If you also include participants who have demonstrated a certain level of ability in other tests, that will help you determine whether you're being limited by the test methodology and setup or limited by your participants.
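A sketch of that "gross to subtle" idea using broadband level differences as the graded known change (the step sizes and the run_level_trials callback are my own illustrative assumptions, not anything specified in the thread):

```python
# Descend from an obvious level difference toward a subtle one.
LEVEL_STEPS_DB = [6.0, 3.0, 1.0, 0.5, 0.3, 0.1]

def sensitivity_floor(run_level_trials, n_trials=16, k_required=12):
    """run_level_trials(delta_db, n) -> correct identifications when the
    only difference between A and B is a broadband level offset of
    delta_db.  Returns the smallest offset still detected reliably,
    which bounds what the test setup and panel can resolve."""
    floor = None
    for delta in LEVEL_STEPS_DB:
        if run_level_trials(delta, n_trials) >= k_required:
            floor = delta      # still detected; try something subtler
        else:
            break              # first failure bounds the sensitivity
    return floor
```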
The subjects and the test are one and the same, so blaming one or the other for null results under blind conditions is specious.

Please note I did not say that being able to identify known differences under blind conditions validates other results under the same conditions. I said it is unlikely unknowns could be reliably identified if knowns cannot be under test conditions.
I suggest the following DBT for yourself, ABX box or not. Use a music source where pitch can be altered. Under "sighted" conditions identify the degree of pitch change necessary for you to identify incorrect pitch 100% of the time. Say under these conditions you can reliably identify 1/4 tone sharp pitch. Now repeat under blind conditions, preferably double blind. I suspect your sensitivity to pitch change will deteriorate and you will require at least a half-tone sharpness to identify the pitch change 100% of the time. Indeed you may need an entire tone. What does this say about you, and about double-blind testing?
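For anyone who wants to actually try that thought experiment, here is a bare-bones sketch using plain sine tones (an assumption on my part; the post speaks of music with adjustable pitch, which would need a pitch-shifting tool, and the sounddevice playback library is likewise just one convenient choice):

```python
import random
import numpy as np
import sounddevice as sd   # any audio playback library would do

RATE = 44100
QUARTER_TONE = 2 ** (1 / 24)   # 50 cents sharp

def tone(freq, seconds=1.0):
    t = np.arange(int(RATE * seconds)) / RATE
    return 0.3 * np.sin(2 * np.pi * freq * t)

def blind_pitch_trial(base_freq=440.0):
    """Play the reference and a quarter-tone-sharp version in random
    order; the listener says which one was sharp.  Returns True if the
    answer was correct."""
    pair = [("reference", tone(base_freq)),
            ("sharp", tone(base_freq * QUARTER_TONE))]
    random.shuffle(pair)
    for _, samples in pair:
        sd.play(samples, RATE)
        sd.wait()
    truth = "first" if pair[0][0] == "sharp" else "second"
    answer = input("Which was sharp, first or second? ").strip().lower()
    return answer == truth

# Run, say, 16 trials blind and compare your score with what you were
# certain you could do sighted.
```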
About once a year I borrow an audio component to compare with my own component at home.

Except for speakers, where differences are obvious, I try to compare the two components under single-blind conditions with A-B volumes matched as best I can with a 1000 Hz sine wave tone.
Given the short time (one or two days) I have to listen to the borrowed component, I try to eliminate possible bias from reading a good review in an audio magazine or from the "new is better" syndrome.
I've found that single-blind auditions of amplifiers, CD players and especially wires while listening to music ... make me less likely to buy new amplifiers, CD players and wires, because differences are small or not audible ... and far more interested in auditioning new speakers (at home) and room acoustics treatments.
Knowing what brand and model of component I am listening to should not have an opportunity to affect my judgement on sound quality.
A person could imagine a great ability to hear pitch differences under sighted conditions and then find out this ability was mainly imagined under blind conditions. That in no way means the results under blind conditions were affected by the test methodology.
Do the words "I suspect" in your last paragraph mean the entire paragraph is based on your speculation ... and not based on your experience?
In my listening room, I can easily hear differences between CDPs, interconnects and speaker wires. Haven't tried amplifiers. I'm of the belief that if your home audio system smears the time/phase of the musical signal (and let's face it, almost all do), trying to hear cable/amp/preamp/interconnect differences becomes a moot point.
The pitch change test was my thought experiment. Perhaps you can find a better one. The question remains: where differences do exist, and are apparent in a sighted test to the point of certainty, but such differences disappear under blind conditions (as they usually do), do you blame yourself (the test subject) or the conditions of the blind test for your ignominious failure to identify something as obvious as pitch 1/4 tone sharp? To say nothing of the less obvious pitch 1/4 or even 1/2 tone flat!?
You said "A person could imagine a great ability to hear pitch differences under sighted conditions and then find out this ability was mainly imagined under blind conditions. That in no way means the results under blind conditions were affected by the test methodology."For a start, the person does not "imagine a great ability to hear pitch differences under sighted conditions" - what they do is to believe they perceive actual pitch differences under sighted conditions. There is an important conceptual difference between believing you hear a difference and beliving or imagining that you have an ability to hear a difference. In the first case you believe that you are perceiving something external to yourself and present in the world, while in the second case you are believing or imagining something about yourself. We may come to a belief about our abilities because of what we hear, or believe we hear, but I think we are unlikely to start out with a belief that we can hear something when we've had no experiences with it whatsoever. In other words, we draw our evidence for any beliefs we have about our abilities from a different set of events to the events that generate our beliefs as to whether or not we hear a difference between two sounds. In fact, it's important not to confuse the two things because belief in an ability to hear something is not necessary in order for the person to hear it. I've had people tell me that they didn't believe that good gear would make a difference and that they wouldn't be able to hear a difference between their system and mine, and then been quite surprised to discover that they could hear differences when they heard mine, and I think we can all produce stories about people, including ourselves, holding beliefs about our abilities not to be able to do something and then being quite surprised to find that we actually could do it.
I also dislike the use of the word 'imagine' in your description. While we sometimes do say that 'I must have imagined it' when we discover that we were in error on a matter of perception, there are numerous times when we make errors in relation to perception and we explain those errors in quite understandable ways that are based on facts. I may not recognise a voice because the speaker deliberately muffled it, or I may think I see something in the shadows but be mistaken because the degree of shadow was not uniform in the area I was looking at and things were misleading as a result. We don't talk about imagination in such cases and, in fact, I doubt whether we regard imagination as a particularly common cause of perceptual error, yet the sort of statement you made here tends to imply that imagination is a common or the major cause of error, at least in relation to claims about what we hear with audio. I'm not saying that imagination can not or does not play a part - I'm just questioning how much of a part it actually plays and I don't think anyone yet has tried to work out what proportion of perceptual errors in audio-related tests are caused by imagination or something else. I think an equally important question I'd like to see answered is what proportion of the time we are mistaken, because I also think that the proportion of actual errors is likely to be reasonably low. I believe the reason we tend to trust our senses is simply that most of the time we actually do get it right, and it's important not to lose sight of that simple fact. If we got it wrong most of the time, we'd be double and triple checking every one of our perceptions before we acted on them and we definitely don't live our lives doing that. Note: I'm not saying we don't make mistakes or get it wrong, but I am trying to put those times in some sort of perspective.
Finally, you said "That in no way means the results under blind conditions were affected by the test methodology." I'm not certain that it doesn't mean exactly that. For a start, you can regard the results as being determined by the accuracy of the judgements and ignore the 'marking' process, or say that the results are determined by the accuracy of the marking process. In actuality, both claims are true since both factors will influence the result obtained. What we assume when we do the test is that the judgement processes of the person being tested are exactly as reliable as they are when the person is not under test, that their perceptual capabilities are unaffected by the test (a test that impaired your capabilities in the process would be no test at all), and that the characteristics of the sounds being listened to are also unaffected. If that is the case, the only reason for a difference in results is the change in the accuracy of the marking process and that process is part of the test methodology, so results under blind conditions, or any conditions for that matter, are affected by test methodology. They certainly aren't determined by the test methodology (once again, it would be no test if they were) but the methodology will have an effect. It's precisely this fact that is the issue when statisticians talk about the propensity for a particular sort of protocol to generate type 1 or type 2 errors.
More importantly, there is a very real difference between the sort of results obtained when I listen to something in my system at home, and when I listen to the same thing in a test of any sort. When I play with a tweak at home, I'm interested in establishing first whether or not I think it makes a difference and then, if it does, whether I prefer the sound with or without the tweak. In a test, other people (and perhaps me also, but that isn't necessary) are interested in whether or not I can reliably hear a difference between the sound with and without the tweak. If I can't reliably tell the difference, it doesn't mean that I'm not right some of the time, and it also doesn't mean that I'm wrong if, as a result of a number of listening experiences with and without the tweak, I come to the conclusion that it does make a difference. Listening tests of any sort will not necessarily answer the question of whether or not a person can correctly tell whether something actually makes a difference if the test indicates that they can't reliably tell whether or not the tweak is present every time they are asked. To make that point in a different way, let's say I know 2 identical twins who are capable of fooling people as to which one of them is which. I know both of them, and I know there is a difference between them, but they may well be able to fool me whenever they want. That doesn't make my belief that they are different wrong or silly. My point is simply that my unreliability in getting it right every time I make the judgement about whether or not the tweak is present doesn't necessarily make me less reliable in being able to form a judgement over a number of trials that the tweak does make a difference. I'm not aware of any testing that has specifically addressed that point.
I'm not trying to dismiss testing in raising the above issues. All I'm trying to do is make the point that testing doesn't necessarily provide the sort of guarantees that a lot of people assume it does, that it isn't necessarily completely impartial, and that it does not necessarily address the issue that we would all really like answered, i.e. how reliable we are at forming an opinion of whether or not there is a difference from the results of a number of listening trials where we may or may not have got it right in respect of each individual trial.
My wife's statistics teacher said about statistics: "What statistics reveal is interesting, what they conceal is vital". The same thing can be said about testing of any sort. We need to be very clear about what is being concealed if we are to be able to get the most benefit from what is revealed.
David Aiken
I think we are basically in agreement here, and if you made the herculean effort to read the referenced URLs, you would see that I have made reference to the need for determining the sensitivity of any given listening test.

I think most folks would be shocked and amazed at just how INSENSITIVE the classic ABX-with-switchbox listening test is.
With the same switchbox attached, participants involved in the preliminary sighted warm-up auditions had little trouble "hearing differences" among components ... "differences" that were suddenly impossible to hear minutes later under double-blind conditions.

Those real results suggest sighted auditions are "too sensitive" and sometimes overwhelmed by imagined (false) differences brought into the audition inside an audiophile's over-active imagination.

If the ABX test methodology and/or switchbox are "INSENSITIVE" as you claim in ALL CAPS, then the sighted warm-up auditions would not have so many people claiming to hear differences. The only difference in the sighted audition is the listeners can not select "X" -- they listen to A and B through the same switchbox that's used for the double-blind portion of the audition that follows. They know what component A is and they know what B is ... just like they do later in the double-blind portion of the audition.
Indeed. Most of your posts on DBT/ABX are impossible to reply intelligently to; you play whichever side of the fence suits you.

If you are taken to task over the lack of scientific method or procedure in your "auditions", then you claim it was not a scientific test anyway, rather, it was "just an audition", and should not be held to scientific standards. When it is pointed out that such "auditions" are pretty much worthless as any sort of valid scientific evidence, you want to claim the protection of the mantle of science, primarily via the blind aspect of the "audition", and act as if these auditions do have some scientific worth somehow. Yet they are still seriously flawed and without merit.
So, on to address the specific point you raised, which I have done before, and more than once, but apparently you do not remember any of those times. So take notes this time, and make a copy, and tape it to your wall next to your computer.
[ With the same switchbox attached, participants involved in the preliminary sighted warm-up auditions had little trouble "hearing differences" among components ... ]
And this was in one of your infamous non-scientific "auditions". One that was not written up, or published, or any of those silly things.
Sighted listening before a blind listening session has absolutely no relevance to the blind portion. Nothing that was claimed to be occurring during this initial sighted portion can be taken to apply to the blind portion, as the listening conditions and mental state of the listening subjects have changed. The situation is no longer the same one.

[ Those real results suggest sighted auditions are "too sensitive" and sometimes overwhelmed by imagined (false) differences brought into the audition inside an audiophile's over-active imagination. ]

Show me the peer-reviewed professional journal paper on this. Show me any nationally published, available-to-the-general-public account of this.
The "results" from your unscientific auditions are so uncontrolled and haphazard, with amateur and inexperienced interpetation of what these haphazard results mean, that they are worse than worthless, they are misleading and confusing. Look at you (and several others taken in by the ABX mumbo-jumbo), you are apparently permanently confused about the reality of what actually occured, and what it meant.
Untrained listeners, who have not been previously exposed to such incomplete and ill-advised procedures, might well unknowingly claim they thought they were hearing differences between A and B.
AND?
Whether they were or not, it doesn't mean anything, because once the ABX forced-choice portion starts, they are no longer in the informal coffee-klatch mode; they are now being forced to analyze the music and make a judgement call, something they were not asked to do earlier, nor were they adequately trained to deal with this dichotomy.

If none of the above is clear enough, then perhaps this will make it clearer.
Uncontrolled sighted listening before the formal blind ABX listening portion can not be used to validate the formal blind portion. Period. Not allowed using normal scientific conventions.
The only way to validate the sensitivity of the formal blind ABX portion is to actually conduct formal sensitivity trials DURING that portion.

This is not my personal opinion; it is science and a fact.
I know that this will not be sufficient to deal with your thorough brainwashing on the subject, but at least those who have NOT been ABX'd will understand.
BTW, as I have been exposed to one of these informal "auditions" that you describe, and also conducted interviews of folks who participated in them, I am aware of the single huge major flaw they all tended to have, one I did not stress enough in my Prop Head posts:
the initial sighted portion of the test was often far, far too long (in some cases, several hours), and by the time the listeners were actually doing the formal ABX forced-choice portion (again, with little or no actual training or explanations), they were so badly fatigued that they couldn't have heard the difference between an A-bomb and an H-bomb, much less DUT A vs. DUT B!

I am also aware of the huge amount of encouragement that the test manipulators were giving to folks during the initial sighted portions, agreeing with the slightest hint of differentiation, and going out of their way to reinforce what some of the listeners thought were concrete sonic impressions. This reminded me of the similar shell game that a lot of ABX naysayers like to play, where they insist that someone make forced choices between A and B, when A and B are the same, and then take the results and twist them into looking like some sort of listening test epiphany: that we all WANT to hear differences, even when there are none.
So your recounting of these wholly useless and uncontrolled "auditions" has very little relevance to real listening tests, and should not be confused with some real facts about such tests.
I think that the most telling thing is, you continually use the very same ABX inspired wording and phrasing, which is designed to make it seem as if the event actually should be taken seriously, but such phrasing only fools the amateurs and inexperienced folks for a while.
Since you do continually use the same phrasing and wording that is used by only the most hardcore and most adamant of ABX-related naysayers, that then raises the question: why pose as a general audiophile who happened to be exposed to some ABX listening sessions once upon a time, when you clearly must be more heavily involved with the ABX camp than you are willing to admit to?

Give it up Richard, the ABX propaganda was outed a long time ago, and there is little point in retelling your sad tales of science gone wrong now.
... with the usual meaningless character attacks on those whose listening experiences do not support your "I'm an audio guru and you're not" agenda:

RISCH IN MOTION:
"Most of your posts on DBT/ABX are imposible to reply intelligently to, you play whichever side of the fence suits you.""So on to address the specific point you raised, which I have done so before ... but apparently you do not remember ... So take notes this time, and make a copy, and tape it to your wall next to your computer."
"Look at you (and several others taken in by the ABX mumbo-jumbo), you are apparently permanently confused about the reality of what actually occured ... "
"I know that this will not be sufficient to deal with your thorough brainwashing on the subject ..."
"So your recounting of these wholly useless and uncontrolled "auditions" has very little relevance ..."
"Give it up Richard, the ABX propoganda was outed a long time ago, and there is little point in retelling your sad tales ..."
RG:
When one has nothing of value to add, attacks on the poster's character and knowledge are all that's left.

That's "RISCH IN MOTION".
...have not addressed the issues regarding your listening 'auditions', and the procedural problems they have that render them INVALID as any sort of scientific evidence.

My patience grows short with such stonewalling as you conduct over and over and over: the same old half-baked ABX story, the same old propaganda, the same old illogical statements.
Not once have you ever even attempted to address the real issues with the ABX listening sessions you supposedly attended, and continue to tout as if they really meant something. I truly believe they were intended to confuse and demoralize audiophiles and music lovers into thinking they can't hear a dern thing, and they were deluding themselves. It obviously convinced you, lock, stock and barrel. That is the real shame, the other is that you continue to state these experiences as if they were worth something, had actual meaning.
Oh well.
Quite a few people in warm-up sighted auditions report hearing differences that they no longer hear minutes later under double-blind conditions.

The audio advice these people would provide immediately after the sighted audition is not the same as the audio advice they'd provide after the double-blind audition.
Nothing you write can change what I have repeatedly witnessed first-hand. People listened under sighted conditions through the ABX switchbox and reported hearing differences that you claim should be obscured by the switchbox ... simply because the differences are not heard minutes later under double-blind conditions.

Either the switchbox obscures differences under both sighted and blind conditions, or it does not obscure differences at all.
Your (lack of) logic, however, is clearly obscured.
I have answered this in my post, the one you failed to read.

Don't let real logic or truth stop you; you must carry on the naysayer traditions!
Jon Risch
How did you test the box for sensitivity? How did you test the test to test the box for sensitivity?
Magnetar
The efficacy of the methodology, the test setup, and the biases and abilities of the participants are variables in _any_ experiment. That's why you need controls. Preferably, you want a control group of individuals who have demonstrated their abilities in other experiments and a set of control criteria whose expected outcomes are well known. The individuals who act as controls are needed to establish the efficacy of your test methodology and setup relative to other experiments, and the test criteria which act as controls are needed to establish the abilities and biases of the test participants. Without them, you'll never know whether your outcomes are representative or just a function of your setup or sample.

You wouldn't make a measurement of amplifier distortion without calibrating your test equipment, and you wouldn't conduct clinical drug trials without controls. So why would you accept the results of listening tests without controls? Before listening tests can be taken seriously in general, the community needs to come up with agreed upon test standards that can be used to establish the credibility and sensitivity of a test setup and its participants. But given the way people behave in this industry/hobby, I'm afraid there's little chance of that ever happening in my lifetime.
One thing that would at least help though is to do what Brian suggested and repeat your test for a number of different criteria ranging from obvious to insignificant so that you can interpret how the participants fared on the criteria of interest relative to other, more well known, less controversial criteria. That would be a step in the right direction anyway.
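To picture what "a number of different criteria ranging from obvious to insignificant" might look like in practice, here is a minimal sketch in Python. The criteria names, trial counts, and expected hit rates are invented for illustration only, not taken from any actual test.

```python
import random
from collections import Counter

# Hypothetical criteria bracketing the one actually in dispute. The names,
# counts, and expectations below are invented for illustration only.
CRITERIA = {
    "null control (A vs A, same unit)":      8,   # expected hit rate ~50%
    "gross control (mono vs stereo)":        8,   # expected hit rate near 100%
    "subtle control (low vs high THD)":      8,   # somewhere in between
    "criterion of interest (tweak in/out)": 16,   # the thing being argued about
}

def build_session(criteria):
    """Return a randomized list of (criterion, hidden_reference) trials."""
    trials = [(name, random.choice("AB"))
              for name, count in criteria.items()
              for _ in range(count)]
    random.shuffle(trials)
    return trials

def summarize(results):
    """results: iterable of (criterion, was_correct). Print hit rate per criterion."""
    hits, totals = Counter(), Counter()
    for name, correct in results:
        totals[name] += 1
        hits[name] += int(correct)
    for name in totals:
        print(f"{name}: {hits[name]}/{totals[name]} correct")

if __name__ == "__main__":
    session = build_session(CRITERIA)
    print(f"{len(session)} randomized trials prepared")
```

Reading the criterion of interest against the controls is what makes the numbers interpretable: a listener who also fails the gross control suggests the setup or the listener is broken, while a listener who passes the subtle control but not the criterion of interest tells you something about the criterion itself.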
"the community needs to come up with agreed upon test standards"I agree with this. The audiophile community will be stuck spinning their wheels forever until we can come up with a standarized testing methodology. Obviously sighted testing is flawed (the method used by most professional reviewers BTW). Some claim ABX and DBT to be flawed. Some claim that scientific measurments are not able to measure certain sound qualities.
Also your point about controls is well taken. I would suggest that the control concept can be expanded to include the hardware. The problem with most audiophile reviews is that they are conducted with one piece at a time. It would be much more useful to compare a new cd player against a group of "reference" cd players that are already a known quantity (a shoot out).
I wonder what testing methodology could possibly appease all camps? Perhaps that's what is needed to elevate our hobby to the next level?
I would suggest that the control concept can be expanded to include the hardware. The problem with most audiophile reviews is that they are conducted with one piece at a time. It would be much more useful to compare a new cd player against a group of "reference" cd players that are already a known quantity (a shoot out).
I agree with you 100% on this point. Most standalone hardware reviews are full of winding prose wrapped around the same overused adjectives and I'm lucky if I can find a few tiny nuggets of useful information out of a whole review. But if you compare a piece of hardware against another piece that I'm familiar with, I can get more out of one sentence than three pages of normal review text. Some reviewers try to make comparisons where possible but others seem to avoid it.
I would love it if a review publication would take your suggestion and maintain a set of reference examples of each component type that have varying sonic characteristics. One reference system is not enough. Then they could conduct periodic comparison tests in which new components were tested against the reference components by a panel, and publish consensus findings as well as comments by each panel member.
That is indeed one of the main problems; to be perfectly scientific about it, you cannot use the test to test that same test. Invalid.

So how did I know the classic ABX switchbox type testing was not the most sensitive method available? I tried it. I started conducting controlled A/B listening tests back in 1978, and had access to a genuine ABX switchbox in the early 80's. Upon attempting to use the unit, I was amazed to discover it would not generate the same results as the controlled A/B listening tests. Thinking I was doing something wrong, I delved into all the information I could get my hands on about DBTs, and specifically ABX. I was a member of the AES by 1980, and had access to all the various papers. I also had a University library at my disposal.
Once I started to delve into the specifics, I realized that there were certain inherent limitations to ALL blind testing, and became aware of the various specific problems with the classic ABX method as used by Nousaine and Krueger. This after consulting with various people in the industry, including statisticians, other audio engineers, and some folks heavily into medical testing procedures.
After experiencing first hand the problems and issues with classic ABX testing, I developed my own methods and procedures, and finally presented those in a presentation at the 91st AES convention in 1991, AES preprint #3178, "A User Friendly Methodology for Subjective Listening Tests".
In point of fact, I don't think that anyone has ever truly provided any hard information on how much sensitivity the classic ABX switchbox tests had/have, because they were seldom, if ever, checked for their sensitivity or resolving power; it was often assumed that they were "the most sensitive method available", and no further questions were asked.
If you can point to some hard information, something other than the joke that is the ABX website pages, then please feel free to forward that information on to me.

In my case, I used the classic ABX switchbox and methods, and found them lacking compared to a roughly equivalent kind of blind test situation. Having attempted to find out just how sensitive the ABX switchbox and method were, I was pretty much unable to achieve anywhere near the sensitivity that either a very high quality DPDT switchbox had with a modified ABX methodology, OR a cable swap with modified methodology. It was hard to pin down low level distortion, mild clipping, SS amp crossover notch distortion, and other obvious distortions that were clearly audible without an ABX switchbox in the system; given that and the listening test results, it was clear that there was a serious problem with the classic ABX switchbox type listening tests.
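For what it's worth, "sensitivity" or "resolving power" can at least be given a number under a simple binomial model: for a listener whose true probability of a correct call is p, how likely is a run of N forced-choice trials to clear the usual 0.05 criterion? A rough sketch follows, using a one-tailed exact test against chance; the 0.65 hit rate is an illustrative assumption for a "subtle but real" difference, not a figure from any actual test.

```python
from math import comb

def p_value(hits, n, chance=0.5):
    """One-tailed exact binomial p-value: P(at least `hits` correct by guessing)."""
    return sum(comb(n, k) * chance**k * (1 - chance)**(n - k) for k in range(hits, n + 1))

def critical_hits(n, alpha=0.05):
    """Smallest score out of n that reaches significance at level alpha."""
    for k in range(n + 1):
        if p_value(k, n) <= alpha:
            return k
    return None

def power(n, p_true, alpha=0.05):
    """Chance that a listener with true hit rate p_true clears the alpha criterion."""
    k_crit = critical_hits(n, alpha)
    if k_crit is None:
        return 0.0
    return sum(comb(n, k) * p_true**k * (1 - p_true)**(n - k) for k in range(k_crit, n + 1))

# Assumed "subtle but real" difference: the listener is right about 2 times in 3.
for n in (10, 16, 50, 100):
    print(f"{n:4d} trials: power = {power(n, p_true=0.65):.2f}")
```

Under this model, a listener who is genuinely right about two times in three will usually miss significance in a 10- or 16-trial run; it takes on the order of 100 trials before such a listener reliably clears the bar.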
Can't you use an extra million bucks?

If you know how to test, run it by Randi and collect.
Magnetar
I personally have never made any claims, or related any experiences, about the three particular tweaks in question.

I do believe that I may have once posted HOW a Shakti stone could conceivably influence or affect an audio/video component, but I certainly never made any claims as to its efficacy.
So why should I be concerned about trying to collect on his most recent offer?
Surely he will take you up on your Belden wire brews vs Zipcord.
Magnetar
I'm not that stupid, and fortunately, neither are very many other people.
I agree how difficult it is to set up a DBT that does not yield null results. I was present years ago at a large public DBT with the ABX box which randomly compared 10ft of 26 gauge speaker wire with 10ft of Monster Cable. One or two astute listeners (one female) made 16 of 16 correct identifications. I thought I could identify each wire correctly all of the time but became confused during the test and did not participate in all the trial runs.
Because everybody's listening ability varies, you can't interpret the outcome if you pool everybody's results together. Enough trials must be conducted to make each individual's results statistically significant. Once you've established a valid result for each individual, then you can worry about how the group performed by analyzing the distribution of subjects' confidence levels or something like that.
Here's a suggestion for checking your DBT's sensitivity: don't bother with unknowns until you determine which known, real differences can be reliably identified. I recommend known but subtle differences such as mono vs. stereo music reproduction, 50 Hz-5 kHz vs. 20 Hz-20 kHz bandwidth, correct relative polarity vs. inverted, 0.1% THD vs. 5% THD, and the like. If your test gives null results here, it will not reliably identify speaker wires, ICs, 0.1 dB level differences, Krell vs. Pioneer, and so on.
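As a sketch of how one of those known differences might be synthesized as a calibration stimulus (assuming NumPy is available): adding a single third harmonic at a given amplitude ratio yields approximately that THD, and the 1 kHz tone and 48 kHz sample rate are arbitrary choices for illustration.

```python
import numpy as np

def tone_with_thd(thd_ratio, freq=1000.0, dur=2.0, fs=48000):
    """Test tone plus a 3rd harmonic at the requested amplitude ratio.
    With a single added harmonic, THD (as an amplitude ratio) equals that ratio."""
    t = np.arange(int(dur * fs)) / fs
    fundamental = np.sin(2 * np.pi * freq * t)
    harmonic = thd_ratio * np.sin(2 * np.pi * 3 * freq * t)
    x = fundamental + harmonic
    return x / np.max(np.abs(x))    # normalize to avoid clipping

reference = tone_with_thd(0.001)    # ~0.1% THD "clean" stimulus
comparison = tone_with_thd(0.05)    # ~5% THD "dirty" stimulus
```

Mono-vs-stereo folds and polarity inversions are even simpler to generate, which is part of the appeal of using them as controls: their audibility is either well established or easy to measure.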
Every set of trials must have a control and every measurement device must be calibrated, otherwise the test results can't be interpreted. I would hope this is obvious, but from some of the responses to this thread I'm not so sure.
I'd be interested to find out if I can tell mono from stereo under DBT conditions, and I've had experience!!
Well, Jon, you're certainly the expert at working with half a brain, so I really can't dispute your contention.

However, there's no requirement to use an ABX box to test magic rocks.
In fact, I'm really at a loss how to connect a magic rock to an ABX box, but I'm sure you can come up with just the right magic cable if necessary, right?
Given that he mentions ABX at the website, as related by Magnetar, I was assuming he wanted to use the ABX switchbox to score the listening sessions. You do not have to actually hook up cables and run the signal through the box to use it as a score-keeper.

Many of the same problems and flaws that classic ABX listening tests have would still be relevant for the testing of the various items Randi mentions.
As for hooking up the switchbox to the rocks, this is simplicity itself, just drill a small hole in the rock, and plug in the cable. ;-)
double blind testing is just fine and dandy (and demonstrably effective) for FDA drug trials, but can't be done with audio?
In most trials, all the participants believe they're getting the drug, but some aren't. That's not really blind, because the people who get the real drug are taking it under sighted conditions just like they would if it were prescribed. Also, there is no comparison being made and no A-B choice for the participant to make. Finally, the results are usually obtained through medical examination and testing, not through sensory reports by the participants. So it's nothing like an audio ABX test. Bad comparison.

Dave
You are completely missing the point.

Drugs involve life-and-death situations, as well as HUGE and highly public sums of money, while audio is neither a life-or-death matter nor concerned with millions of dollars for each product.
Since almost every drug has side effects and may actually kill someone, drugs should be tested to the highest and most stringent standards known to man. No excuses, no exceptions. This then involves the potential that a truly useful drug may fail to show sufficient testing results to warrant further investigation or risk by the drug manufacturer.
Management and the accountants want results, with no ambiguities. They therefore will accept and embrace what is known as an expedient solution with regard to statistical testing, and drug testing in particular. They essentially take the tack that if a drug fails to provide statistically significant positive results in a study, it is then assumed to have been, in actuality, of no significant help or usefulness. This is despite the strict scientific interpretation that should be used. In other words, they act as if the failure to achieve a statistically significant positive is equivalent to a negative.

Again, this is done out of expediency, because for them to do the studies over and over is costly, time consuming, and involves a certain level of risk to the company for every individual that participates. No doubt for really promising drugs they go a little further down that road, and conduct more than one isolated test series. But on the whole, they are being miserly with the time and money and the risk, in order to maximize profits. Does this mean that we occasionally miss out on a drug breakthrough? Yes, yes it does.
With audio, first, hardly anyone has the resources or the wherewithal to conduct listening tests regularly, to an acceptably professional level that would be accepted by everyone from both camps; the costs would be staggering considering the overall level of money involved in the audio industry vs. the drug companies. No one is going to die from a Shakti stone or a fancy interconnect, no one will be incarcerated for life, etc.
Finally, for the drugs, there is hardly much of a choice in the matter, given the government regulations and mandates involved. Do you think the drug companies do the testing out of the goodness of their hearts? How many fully regulated and supervised drug tests would actually occur if they were not mandated by the FDA? I think that you might find some reluctant managers and accountants in the drug companies too, and the amount and level of testing would go down significantly if it were not mandated by the FDA.
One final comparison. The often referenced classic ABX switchbox listening tests were more like Theodoric the Barber letting blood than like a modern FDA-mandated drug test; the procedures and methodology were awful, not up to true academic or professional auditory testing standards. It was wholly on the level of amateurs and wannabes playing with science like it was a new toy.
As I have said before, just because any given listening test is blind or double blind does not mean anything else about that test. It could easily be the world's worst listening test that also happened to be using a blind procedure. Many people seem to want to imbue a DBT or blind test with some sort of magical properties, as if the blindness automatically fixes everything else too. Too bad it doesn't do that!
"They essentially take the tack that if a drug fails to provide statistically significant positive results in a study, it is then assummed to have been in actuality, of no significant help or usefullness. This despite the strict scientific interpretations that should be used. In other words, they act as if the failure to achieve a statisticaly significant positive is equivalent to a negative."Drugs are compared to a placebo otherwise almost ALL drugs would show a positive result.
Lately we've been learning that some of the most widely prescribed anti-depressants (not the ones in the link) have been causing increased rates of suicide among the young especially and actually have a negative effect on users compared to either placebo or taking no medicine whatsoever.
It's a separate debate to audio, but please do not uphold drug testing as a model for scientific evaluation when it is in fact limited by the testing procedures and, dare I say it, the independence of the testers.
Best Regards,
Chris redmond.
keep raising the non-issue that audio isn't life-or-death who are missing the point.

A methodology that works, works. You may reasonably argue that it's killing a fly with a sledgehammer, but it's simply unreasonable to argue that it's inapplicable.
And with the instance at hand, despite all the diversions attempted, the issue is that of a single individual demonstrating that he can indeed do what he claims.
The proposed testing does not attempt to substantiate a claim that everyone ought to be able to meet a certain level of performance, or have a certain response to stimuli. It's about one person, the claimant.
Know what?
I can lift 1000 lbs.
Over my head. One handed.
I've got a half-ton block of pig iron in my garage - I lift it every morning.
I really should be in the Guinness Book.
You want to watch? Sorry, that's not possible. Observation drains my energy. Video? Nope, same problem, the method of observation affects my performance. You want to weigh the block? Wish it were possible, but a sensitive enough scale in that range doesn't exist. What if it came up 0.01 ounces short of 1000 lbs? Then all my results would be subject to challenge and ridicule. I can't afford the risk to my reputation.
I'm going to lift my block now.
You were handling it OK until this point. Personal attacks do not strengthen your argument.
Regards,
Geoff
"For instance, people that have been involved is such ABX tests have found difficulty in hearing differences between a cheap Pioneer receiver and a Krell amplifier. Is there ANYONE who thinks that they actually sound the same or perform the same? "Well, actually that's exactly the impression I have from listening to both under the restriction that both are operated within their design limits. That is the restriction quoted earlier by the fellow from QSC. Many others have stated the same restriction. Where products like Krell have an advantage is that it's design restrictions are much broader -- it will drive more difficult and lower impedance loads, it can supply much greater current for a sustained period, when driven into clipping (if that's possible!) it will sound more pleasent, it will experience a much longer useful life, it can play more loudly in extra large livingrooms, etc. etc.
The fact that at typical home listening levels of a couple of watts into non-weird loudspeakers, a mass market receiver and a high-end super-amp are indistinguishable does not negate the benefits of the high-end unit under less benign circumstances.
ABX tests are conducted under "benign" circumstances. That "people that have been involved in such ABX tests have found difficulty in hearing differences between a cheap Pioneer receiver and a Krell amplifier" does not refute the ABX test but rather illuminates the circumstances under which virtually all amps perform alike, and suggests circumstances where one may have an advantage. Just because a Ferrari or a Humvee has no advantage over a Toyota Prius driving two blocks to the corner grocery does not mean that there are no circumstances where they excel, nor does it negate the value of a comparison to someone who only drives as far as the corner grocery.
If you were to construct a Krell vs. Pioneer receiver ABX test that compared them while driving a very low impedance, low sensitivity, highly reactive loudspeaker with the requirement that it fill a 40' x 60' living room with an SPL comparable to the NY Philharmonic doing a Beethoven symphony, chances are good that your Krell would come out on top. And no one would argue the point.
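The "benign circumstances" point is easy to put rough numbers on. A back-of-the-envelope sketch, assuming simple free-field inverse-square falloff and ignoring room gain; the speaker sensitivity, distances, and SPL targets are illustrative assumptions only.

```python
import math

def watts_needed(target_spl_db, sensitivity_db_1w_1m=90.0, distance_m=3.0):
    """Rough amplifier power estimate: inverse-square distance loss, no room gain."""
    distance_loss_db = 20 * math.log10(distance_m / 1.0)
    gain_needed_db = target_spl_db + distance_loss_db - sensitivity_db_1w_1m
    return 10 ** (gain_needed_db / 10)

# Typical home listening at ~3 m: a few watts.
print(round(watts_needed(85), 1))
# Orchestral peaks at 6 m in a very large room: hundreds to thousands of watts.
print(round(watts_needed(105, distance_m=6.0)))
```

At a few watts both amplifiers are loafing; at the kilowatt end of the estimate, only the amp with the bigger power supply is still operating inside its design limits, which is exactly the distinction being drawn above.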
I do mean under 'benign' circumstances.

If you honestly can't tell the difference between a cheap Pioneer receiver and a Krell, then I suppose you might actually believe all the naysayer crap.
Jon Risch
It's all in the brand name, the retail price, company image -- no need to listen to two products in the same system playing at the same volume before declaring which one sounds better because you already "know".

How scientific and objective.
Conclusions without listening.
And you have the nerve to criticize blind audition methodology that you did not even observe ?
How was that test set up? Was the preamp section of the receiver used in both cases? That would explain them sounding similar...
Simple test. Take any A/V receiver currently on the market that has a direct button - set the amplifier to 2-channel stereo and bass/treble to flat. Listen to music - then push the direct button ----- AHHHH, hear that difference? Not only is it not a level change or even something as grandiose as an amp change - it is simply listening to the amp with a few SWITCHES taken out of the chain - you are hearing a mere switch, let alone a whole other amplifier.

You could also listen to the amp and then add a Bryston 3B NRB power amp to the receiver - use my 95 dB, 8 ohm Wharfedale Vanguard horns - dead easy to drive. I had the flagship Pioneer Elite VSX 95 - 125 watts RMS continuous, full bandwidth, 0.00025% THD - the Bryston was a whole other sound - it made me realize the speakers were actually good enough to keep and that the bloated, flabby bottom end was due to the crappy amp. PS - less than 80 dB SPL. My home demo of the Bryston got me into high-end audio - I currently have a Marantz 4300 receiver.
Let them argue ad nauseam over differences - the science, like medical science, is full of it - giving themselves an excuse to make money - medicine that doesn't work yet people pay thousands - or for folks like Randi to have a very profitable career debunking everything - look how rich he is from it.
You've been around long enough John - you aren't going to convince these people and they feel the same thing. There is more to worry about than whether some cable is better etc.
....and thereby drowns out the tweak in question. OK..... This seems easily rectified, though, whether in theory or practice. However, do you conceptually agree with the statement that blind testing will reveal whether there is a real difference?
There are inherent problems with blind testing, and thus the exact methodology and procedures are critical in determining just how sensitive and useful the test might be.

For instance, see:
http://www.audioasylum.com/forums/prophead/messages/2579.html
about 1/3 of the way down the post.

I point out the basic dichotomy between a formal listening test and listening to music for enjoyment: the difference between a casual listening mode and an analytical listening mode.
AND "Using Up the Error Budget in the Studio":
http://www.audioasylum.com/audio/cables/messages/30013.html
where I make the argument that there is a potential threshold effect, where the slightest change in signal transfer quality can suddenly make a noticeable sonic difference, due to the "error budget" being almost all used up, and then suddenly going past the point of audibility by adding one more tiny additional signal aberration.

The ABX switchbox just confounds this problem even further: additional cables, the relay contacts, the magnetic field of the relay coil impinging on the signal-carrying wires, the digital logic circuits inside the ABX switchbox corrupting the analog signals, etc.
Just because a particular listening test is conducted blind, this says ABSOLUTELY NOTHING about the quality of the REST of the test. It could be the world's WORST listening test ever, all of it conducted double blind.
Blind testing has other problems in itself, and work is being done on the TEST aspect of the DBT in relation to many psychological fields - cognitive psychology in children for a start. How does a test impede the thought processes? It is hardly a stretch to the audible testing field. Under a test, if a subject is told to pick the best unit or a specific unit, A or B - did you know that people will choose A or B EVEN if the units are identical? In other words, if the tester plays A all the time, the subject will still make a selection between the two. The engineers running these tests will call it a victory - it isn't. A person under test can be fooled and/or pressured into making decisions, attempting, as our brains do, to solve the unsolvable - not too different from seeing images in clouds or pictures in optical illusions. We formulate what we know from personal reference points.

For validity to be high, the test environment needs to be as close as possible to the real one - testing environments are not. Hi-Fi Choice magazine, while not doing a DBT, does have several listeners in a panel listening blind and level matched - the difference is it's not putting a stressor on the participants - differences are expected and scored as part of a panel - the only magazine doing anything remotely close --- and they get differences - some pretty good brands take a beating.
But I understand the appeal - a rough and ready DBT can at least counter the folks saying A or B makes a HUGE difference - if it were huge, they could tell to a very high level even under test conditions. The subtle claims - well, they're subtle, so it seems unreasonable to expect the same level of proficiency at differentiating them as at differentiating speakers or amplifiers. But we are relying on stats, after all.
OK..... So now it won't work because it is a test, and forces an answer. OK, I have 2 example tests that will work, one relevant and one not:

Coke vs. Pepsi. I can pick out which is Coke and which is Pepsi, 100 percent of the time. Put 2 glasses of Coke in front of me, and I'll know it. Insist that one is not Coke, and I'll still know you are lying. Further, adulterate one with, say, lemon juice, and I'll be able to pick out the adulterated one. If I were to say it had been adulterated, but in reality it hadn't, that would be proof that I cannot taste the difference.
Stereo with special tweak vs. without. Say you install the tweak of choice on the stereo you are most familiar with (for our purposes, it must be invisible, like some sort of black box tweak, and its nature is unimportant). In one setup it is present in the line, and in another it isn't. If you cannot hear the difference, then there is no effect on what you hear, by definition. If someone tells you it is in the circuit, and you hear no change, well then, there is your answer....
What you are objecting to is the nature of the specific test as far as the reviewer telling a difference. If you give the reviewer the option to say that he doesn't hear a difference, isn't that the key? If the reviewer says that the sound is the same with the tweak in and out, OK. If the reviewer says the tweak was in when it is out, or vice versa, or hears no tweak at all, at a certain statistical rate, it will show that the choice was random and not from a heard difference.
Whatever the problems of methodology of this specific test, do you think that audio is immune to testing? Surely not.
[ If you cannot hear the difference, then there is no effect on what you hear, by definition. ]

You see, this is one of the DBT myths out there, and the pro-DBT folks tend to promote and further this myth.
YOU CAN NOT PROVE A NEGATIVE VIA AN ABX STYLE TEST.
In the world of science and statistics, the results of such a listening test can come out one of two ways: either it came out as a statistically significant positive to a certain degree of statistical certainty, or it did not. The lack of achieving that certain level of statistical significance MEANS ABSOLUTELY NOTHING ELSE, LEAST OF ALL THAT THE RESULT WAS A NEGATIVE.
This is the one and only true scientific way to understand and approach these sorts of listening test results. This is not negotiable; it is a hard FACT.
Interpreting a null result (failure to achieve that statistically significant positive) as anything other than ONLY that, is incorrect and is completely unscientific.
It is human nature to equate such a failure to a valid negative, but you can not scientifically prove a negative via an absence of results.
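A concrete way to see the point, under a simple binomial model: suppose a listener goes 10 for 16, which misses the usual 0.05 criterion. That result is still consistent with a wide range of true detection abilities, so it cannot be read as evidence of "no difference." A sketch follows; the 10/16 score is just an illustrative number.

```python
from math import comb

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

def plausible_hit_rates(hits, n, alpha=0.05, step=0.01):
    """Crude grid-search confidence interval: every true hit rate for which
    a result at least this extreme (in either direction) is not too unlikely."""
    grid = [i * step for i in range(int(1 / step) + 1)]
    return [round(p, 2) for p in grid
            if binom_tail_ge(hits, n, p) >= alpha / 2
            and binom_tail_le(hits, n, p) >= alpha / 2]

ok = plausible_hit_rates(10, 16)
print(f"10/16 is consistent with true hit rates from about {min(ok)} to {max(ok)}")
```

The printed interval spans chance performance and quite respectable detection ability alike, which is exactly why a short run that misses significance proves neither "difference" nor "no difference."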
My family and a lot of friends performed variations of the "Pepsi challenge" back when the commercials came out. My brother used to be a bartender and he played around with this a lot too. You would be surprised how many people who express a preference for one or the other before the test fail to identify their favorite during the test. I remember that some people had a strong preference for Coke and could even explain in detail the flavor differences they use to tell Coke and Pepsi apart, but failed to tell them apart in a blind test. I personally had no trouble with it as long as Coke came first, but if Pepsi came first the syrupy sweet aftertaste somewhat masked whatever followed.

In college, some of us repeated this with beer and got similarly perplexing results. One friend switched drinking habits (i.e., started buying cheaper beer) after being "enlightened" by the results. But it didn't take long for him to rediscover his preference again. I think he willed himself out of tasting a difference just like some audiophiles will themselves into hearing one, but eventually he couldn't deny his taste buds any longer.
I think that for the average person who doesn't have years of experience as a trained taste tester, the results of these Pepsi-challenge-type blind tasting experiments are not as cut and dried as you think.
And try it with wine - many people would be hopeless - others are exceptionally gifted - it's called training. Harman can take any old joker off the street and then claim MOST people will like these parameters in a loudspeaker. But how many different speakers did they test, and which ones, so I can duplicate their test? They say it doesn't matter which brand - but umm, it does so matter - did they test a bipolar, an omnipolar, an electrostat? There is more to a speaker than wide-band flat frequency response and good off-axis response... but those folks are trying to sell JBL and Infinity loudspeakers, so buying Toole off is pretty shrewd marketing.

I digress - there have been tests here at the Pacific National Exhibition with water - Evian (spell this backwards LOL) and two other bottled water companies along with Surrey tap water. The scores in the tests averaged right around 25% for each on taste tests. So that makes it sound like there is no difference - but what is really required is ONE person to test it 100 times and see what they get, rather than adding everyone's scores together. (The ABX test site at Oakland University uses cumulative scores, not single scores - one person may have scored 100% correctly at determining a difference while another 12 people were at 40-50%, making the overall averaged score not significant - too many people screwing with the numbers to prove their pre-conceived points.) Many people conducting blind tests have a "Belief" at the outset that it's snake oil - and typically find proof of what they believe.
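That pooling point is easy to check numerically; a small sketch using the example figures from the paragraph above (one listener at 16/16, a dozen more near chance), with a one-tailed exact binomial test against guessing:

```python
from math import comb

def p_value(hits, n, chance=0.5):
    """One-tailed exact binomial p-value: P(at least `hits` correct by guessing)."""
    return sum(comb(n, k) * chance**k * (1 - chance)**(n - k) for k in range(hits, n + 1))

golden_ear = (16, 16)              # the one listener who scored 16/16
guessers = [(7, 16)] * 12          # twelve listeners near chance (7/16 ~ 44%)

pooled_hits = golden_ear[0] + sum(h for h, _ in guessers)
pooled_trials = golden_ear[1] + sum(n for _, n in guessers)

print("individual p-value:", p_value(*golden_ear))                 # ~0.000015
print("pooled p-value:    ", p_value(pooled_hits, pooled_trials))  # nowhere near 0.05
```

The individual result is significant far beyond the 0.05 level, while the pooled total sits near 50% and looks like nothing happened.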
Damn straight! The primary interest in blind listening tests is for "debunking" purposes, so the people who want to conduct them have no interest in things like reliability, repeatability, bias, or sensitivity. As long as they get their null result, they're happy. Unfortunately, the debate over listening tests is so polarized that anybody who did have a genuine interest in bringing rigor to listening tests and advancing the state of affairs has been driven away.
Well, I have seen it in many fields - people always talk about the NRC, blah blah - Harman has on their site so-called Proof that people like a certain sound from speakers - and of course Harman International posts this on their site - implying, of course, that their speakers fit this ideal. The NRC does not endorse research done there; it is a big building anyone can use if they have money. So what. There is a huge conflict of interest.

People think eggs are bad for you - why? Ohh, a study was done - well yes, ONE study in the history of the world was done in 1937. Until then most people had eggs and bacon for breakfast. Kellogg's needs to break into the market and get people to eat corn flakes, so what do you know, KELLOGG'S does a study and eggs are going to kill you - buy our high-carb (= sugar), little-nutritional-value corn flakes instead. This kind of science is putrid. But hey, the Nazis believed Aryans were the best and used science to PROVE it - a good excuse to slaughter people.
I'm not saying sighted listening is better - blind listening makes sense and is better - but not the tests I keep reading about. There is very little validity to these results - and debunking is impossible. Even if someone managed to get a statistically significant result in 16 trials, it would be ignored as a fluke (i.e., even if you get the result you have not proven there is a difference - it's a no-win proposition). And a DBT by its very definition can NEVER prove A is the same as B, or A=B.
The problem with your analogy is that there is no degree with Coke and Pepsi - I did such a challenge with other soft drinks and, like you, I can easily tell Coke from Pepsi or root beer, etc.

Obviously we use testing in audio to root out the snake oil with regard to the value of a product versus cheaper options - my contention is that a wire difference is not the same as a speaker difference - would you agree? Our expectation with typical tests is that people can discern a difference between cables to the SAME degree as with, say, amplifiers or speakers. Not to mention that a lot of, if not most, cables probably sound the same, or at least so nearly the same that to most people spending the money for a difference isn't worth it.
I have only seriously auditioned ONE expensive cable, and it made the sound much worse than the cheap cable... I cannot, however, deny there was a difference.
The notion of a "they sound the same" option is, to me, a plus - but my example of people choosing A or B when A is always played reveals that people are UNRELIABLE in these testing environments. They are making an assumption that the tester wants an either/or, and they provide it. Adding the third option of "they sound the same" makes sense - because even with the wire that rolled off the extremes relative to the mid band, I could not have told them apart.
The more SUBTLE a difference, even in a TASTE test, the more times you're going to need to make a decision. For instance, if the two drinks were VERY VERY close in taste but different, you would taste A, then B, and say "hmm, let's try that again" and go back and taste A again. You might run through 10 such trials (if you were not full or sick of doing it - fatigue) and only be accurate on 7 of the 10 attempts. The more SUBTLE it is, the less accurate you will be - that doesn't mean, just because YOU are less accurate, that the drink or the component isn't different.
Statistics in audio are skewed by very short trial runs. A 9/10 is statistically significant at the .05 level. But the poor sap who gets 6/10 is said to be not statistically significant, and some pseudo-scientist then BELIEVES he has proven that people can't tell a difference. However, what few people seem to realize is that if the person who scored 6/10 could do that 10 times, with a total of 59/100, they would have reached statistical significance at the .05 level, the SAME as scoring 9/10. Obviously the 59/100 is a far, far, far more reliable testing process, because the person has loads more opportunities to show they can tell a difference (certainly better than the standard 16). But what stereo manufacturer in their right mind would want to do a test like this and advertise it? Most people don't understand it, and the manufacturers don't want to say "our participants could tell the difference, with statistical significance at the .05 level, 59 times out of 100." In a 16-trial test, if a person scores 9/16 they would be ignored - yet more trials could very well lead to the 6/10-ten-times scenario - and since no one bothers to do it, you get low-number and low-relevance tests.
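The arithmetic in that paragraph checks out under the usual one-tailed exact binomial test against chance; a quick sketch to verify the specific scores mentioned:

```python
from math import comb

def p_value(hits, n, chance=0.5):
    """One-tailed exact binomial p-value: P(at least `hits` correct by guessing)."""
    return sum(comb(n, k) * chance**k * (1 - chance)**(n - k) for k in range(hits, n + 1))

for hits, n in [(9, 10), (6, 10), (9, 16), (59, 100)]:
    print(f"{hits}/{n}: p = {p_value(hits, n):.3f}")
```

9/10 and 59/100 both clear the 0.05 bar, while 6/10 and 9/16 fall well short, as claimed.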
If you're looking for HUGE, noticeable, and obvious differences, then typical DBTs are effective - subtle differences require more - basic logic indicates this - it is done in psychology - DBTs like this should be run in the manner that psychologists would do them, not the way medical scientists would do them - but maybe engineers are just lazy - judging by most of the equipment I have heard over the years -- it wouldn't surprise me.
And let's not forget the big business of bashing the high end - there are "so-called" objective or skeptic magazines like the $ensible $ound which is another niche to sell to a different part of the population - just because one is a skeptic doesn't make them right.
I think we are picking too much on reviewers. Let the manufacturers who make money off these things respond. Mr. Clark, does every system need 11 Stones and 36 On-Lines? That is about $6000. Wouldn't putting that money into better components make more sense? Do you still have all these in your system, and how much did you pay for them?
"I think we are picking too much on reviewers."Nope.
If I said a dehydrated piece of my faeces placed upon a speaker had a 'magic' effect on sound, I'd be talking crap - literally - but no-one would buy my crap and no-one would get stung.

However, if an expert who's supposed to be giving an independent opinion of my crap confirms my claims, it is they who are responsible for promoting that product in the eyes of a wider audience and for giving credence to that product.
Best Regards,
Chris redmond.
> > However, if an expert who's supposed to be giving an independent opinion of my crap confirms my claims, it is they who are responsible for promoting that product in the eyes of a wider audience and of giving credence to that product. < <

The *only* logical extension of your position is that there exist a panel of experts with legal authority to restrict hi-fi products from the marketplace that cannot be demonstrated as effective.
In other words, you want politicians (or politicized experts) to tell you what you can buy. No thanks.
the concept of "freedom" and the concept of "protection from one's own folly" are really mutually exclusive (within certain limits, if I am mentally ill and atempting to drive my car into a tree, I would expect to be restrained).At any rate the idea of a narrow-minded beurocrat deciding what is "real" in audio, or any other subject, just scares the crap out of me.
Give Me Ambiguity or Give Me Something Else!
"The *only* logical extension of your position is that there exist a panel of experts with legal authority to restrict hi-fi products from the marketplace that cannot be demonstrated as effective."That's the *only* logical extension???
Maybe you should determine which quadrant of your brain came to this conclusion then rub a Shakti stone against it for enlightenment.
"In other words, you want politicians (or politicized experts) to tell you what you can buy. No thanks"Yep - those are 'other' words alright, but where you pulled them from I'm not sure; the Monty Python book of translation perhaps?
Is my hovercraft full of eels for instance??
Best Regards,
Chris redmond.
By complaining here in public I assumed you were wanting someone to do something about this terrible practice. If that isn't what you want then your purpose in posting is just to entertain yourself. I understand.
"By complaining here in public I assumed you were wanting someone to do something about this terrible practice."Stop jumping to assumptions then as you're not too hot at making such judgements and tend to put your own spin on things.
"If that isn't what you want then your purpose in posting is just to entertain yourself. I understand".No, obviously you don't understand the concept of open debate among members of the Asylum, but hang around and maybe you'll pick it up.
...as the outcome of the debate, you are simply entertaining yourself. If you are desiring change, what other change can there be besides restrictions on what people can say and, by extension, what they can buy?

I suppose you could be wishing that everyone act with integrity of their own accord, but no one is that silly. You may as well wish for your hi-fi to be perfect.
nt
Can anybody meet his challenge? For audio purposes, the prize's size or availability or Mr. Randi's motivation is irrelevant. If not, can somebody tell me why? Be technical and specific.
The inevitable questions on the prize's availability and size can be resolved merely by sending your address (e-mail, fax, postal) to me, and we will send you the official statement of the account. This is all already carefully delineated on the JREF web page, but many prefer not to go there or to know this, so they can continue to complain and whine.

As for my motivation: if I see someone who's been run down in traffic, I make every effort to summon help. If the victim insists on crawling back into the stream of traffic, I might go after him a second time, but not a third. Is that motivation enough, or would you folks just ignore the accident....?
I have been through this, on another topic. Asking for scientific proof will make you the lightning rod for audio insanity. Got me banned.

Randi is correct, of course. I would take a piece of that action.
"I, James Randi, through the JREF, will pay US$1,000,000 to any person who can demonstrate any psychic, supernatural or paranormal ability under satisfactory observing conditions. Such demonstration must take place under these rules and limitations."So, to take the prize does the promoter of an audio product have to prove that his improvements are not scientific, but due to "psychic, supernatural or paranormal" properties? Well, THAT would sure sell product!
I have been a Randi fan over the years, mostly because of his career spent exposing so many of those so-called psychics who prey on the weak and gullible. He does go for publicity, but as you stand in line at the checkout counter, look at what most folks are given as tabloid truth. However, accepting this audio challenge would be a marketing disaster. Why would anyone play?
http://www.randi.org/jr/080504string.html#8

I don't think he's asking for magic:
A FIRM OFFER
A reader has a few words about "hi-end" audio matters:
The first person who told me that people who claim supernatural powers never seem to be able to make them work in the presence of magicians, was an old friend named Paul Ierymenko. He worked for me designing and building various electronic products in the mid 70's. He is now the head of R&D at QSC Audio. They're one of the makers of the ABX Comparator. I remember talking with him, back then, about the differences between the sound quality of various audio devices, especially amplifiers. He maintained that any reasonable quality amplifier, operating within its specified limits, is acoustically indistinguishable from any other. Ditto for many other devices as well. He had nothing but contempt for the claims of manufacturers of high end speaker cables and other magical crap like the stuff described in your recent commentary.
This ABX Comparator is the ideal setup to test audio devices and systems. It generates a random "A or B" switching signal, so that the user does not know whether the item or variable being examined is in or out of the circuit, and it accepts the user's decisions and stores them. When the Moment of Truth arrives, the user sees the results of a proper double-blind test. This is a setup that the audio quacks strenuously avoid, in fear that their fakery will be exposed.
Today I sent out the following e-mail letter to eleven audio reviewers who showed up on the web pages of the Shakti Stones and P.W.B. Electronics, as endorsers of some audio nonsense mentioned here last week, and to both manufacturers of the devices as well. The letter explains itself:
My name is James Randi. I am the president of the James Randi Educational Foundation (address and contacts listed below) and I am an investigator of unusual claims. This Foundation has a prize of one million dollars that we offer, details of which are to be found at www.randi.org/research/index.html and www.randi.org/research/challenge.html.
As a reviewer for a major audio publication, I'm sure that you will find the following offer of great interest, both from the point of view of validating your expert judgment, and adding substantially to your net worth.
Please refer to www.randi.org/jr/073004an.html#3 and go to the item "THE JREF MILLION IS SURELY WON" to learn of the items — the "Shakti Stones" and P.W.B. Electronics' "Electret Foil" and "Red X Pen" — that I am referring to here. In my opinion — and I have none of your expertise, I freely admit — these are farcical in nature. Yet experts such as yourself have endorsed these products, and that support indicates that the JREF million-dollar prize should surely be offered, either to you personally, or to the manufacturers of these products — who have been similarly informed on this date.
If you require further information concerning details of this endeavor, please contact me at randi@randi.org and inquire. This is a valid offer, a serious offer, and a sincere offer. Should any of these products prove to work as advertised, the first person who is able to demonstrate the efficacy of any of them, will be the winner of the JREF prize as described in the rules and details to be found at the above references.
I await your response with great interest.
The above e-mail message was sent to:
Frank Doris, at The Absolute Sound: frank.doris@fm-group.net
Clay Swartz, Clark Johnson, and David Robinson at Positive Feedback: cswartz@positive-feedback.com, cjohnsen@positive-feedback.com, and drobinson@positive-feedback.com
Larry Kaye, Wayne Donnelly, and Bill Brassington at fi: kaye@umbsky.cc.umb.edu, Waynewrite@aol.com, and bbrassington@planethifi.com
Bascom King at Audio: bhk@rain.org
Wes Phillips at SoundStage: wes@onhifi.com
Jim Merod at Jazz Times: jim@onsoundandmusic.com
Dick Olsher at Enjoy The Music: senioreditor@enjoythemusic.com
Peter and May Belt at "P.W.B. Electronics": webmaster@belt.demon.co.uk
Benjamin Piazza at "Shakti Innovations": info@shakti-innovations.com
Let's see what reaction is received — if any — to this clearly-outlined challenge. Remember, all we're doing here is asking the reviewers — the trained, experienced experts, the responsible endorsers of these products — to repeat their tests of the items, but this time under double-blind, secure, conditions. And we're making the same offer to the manufacturers, who we would expect to be even more sensitive and capable of performing such tests.
WE ARE OFFERING ONE MILLION DOLLARS IF THEY CAN DO WHAT THEY CLAIM THEY CAN DO, WHAT THEY DO PROFESSIONALLY, IN A FIELD WHERE THEY CLAIM EXPERTISE FAR BEYOND THAT OF MERE MORTALS. WE ASK FOR NO INVESTMENT FROM THEM, WE DO NOT CHARGE THEM FOR PARTICIPATING — AND WE STAND TO GAIN NOTHING BUT WE DO RISK THE LOSS OF THE MILLION DOLLARS PRIZE MONEY.
I am a mere mortal, unencumbered by academic degrees or claims of audio expertise. Show me, and win a million dollars...
(Sylvia Browne just called and offered refuge and professional evasion advice to all the above-listed.)
Magnetar
Mr. Randi wrote in his letter: "WHERE THEY CLAIM EXPERTISE FAR BEYOND THAT OF MERE MORTALS". The first part of this challenge would be for Mr. Randi to cite specific quotations from each of the 11 writers to prove that each of them has explicitly claimed to have "expertise far beyond that of mere mortals."

Someone might also want to tell Randi that Audio magazine hasn't been published in this century.
Even if I were a firm believer in the Stones (never heard 'em), with a million bucks at stake my tinnitus would kick in big time, my sinuses would explode, and I wouldn't be able to discern a $50K system from a Fisher-Price.
or need they show an 'improvement'?"
An effect. ANY effect.

Establishing what an "improvement" would be could take forever. They merely have to state when the things are in use, or not in use. Wouldn't you think that those who SELL the stones would be clawing to apply for the million? They aren't, and they won't. They've got the suckers in the barn, and they're off to the bank again....
Hmm, BOTH of those should be as simple as being able to identify the effects under double-blind conditions. If you can identify the change, then that should be enough proof for Randi. I don't think anyone could prove which sound is better... maybe have a thousand people who can identify the difference and take a poll that is overwhelmingly one-sided (say 95%) toward the stones. But what about the other 5%?
Magnetar
The problem I'd have is that I once sat through a demonstration by Russ Andrews in the UK where an audio system first had its power cords replaced with Kimber's, then the ICs, then oak cones placed under the equipment, and a couple more tweaks, but after each step I couldn't be sure I'd heard any difference whatsoever.

However, at the end of the demo I asked if he could put the whole system back to its original state, whereupon the sound collapsed and I became a convert to Kimber products and Russ Andrews' tweaks.
The point I'm making here, though, is that just upgrading one IC in my own system brought an immediately obvious improvement, as did subsequent upgrades in power cords and the like; but in an unfamiliar system/room/environment such as the one James Randi would provide, and with the added handicap of quick A/B'ing where the listener does not have time to acclimatise to one particular set-up, I doubt I could deprive Randi of his $1,000,000.
If Randi were to come over to my house, I am sure that I could tell when an IC had been replaced by something cheaper, but I doubt that would be possible.
Despite not particularly liking Randi or his self-promoting methodology, there's no denying that he is putting his money where his mouth is and must be respected for this reason alone.
Best Regards,
Chris redmond.
What do you claim you can do, as regards your new ICs? How confident are you that you can tell whether you're listening to your new ICs or some other cheap ones?

I live in Seattle, so be careful what you claim. You might have to prove it.
"What do you claim you can do, as regards your new ICs? How confident are you that you can tell whether you're listening to your new ICs or some other cheap ones?"I'm very confident to the point of being cock-sure about it because there's no comparison even between the ICs I use now and the next model down by the same manufacturer; one has me absolutely enthralled with music and the other had me ready to sell the speakers I'd just bought because I believed they had a coarse, grainy midrange; replacing with the all-silver ICs transformed the sound.
"I live in Seattle, so be careful what you claim. You might have to prove it."
Sorry? Was I supposed to be shaking in my boots because I might have to prove I wasn't talking out of my rectum?
Well, I'm in the UK, and if you're ever over here give me a call and I'll demo one pair of ICs against the other. Immediately afterwards I'll make you a cup of tea so you don't choke on the humble pie you'll be eating. :0)
Best Regards,
Chris redmond.
Some are sure that speakers make a difference but believe anything upstream sounds the same as long as it's level matched and operating within its specifications.

Some know that amps make a difference but think all digital sources sound the same.
Some never question that there are differences between CD players but don't believe anybody can tell the difference between CD and SACD.
Some are convinced that SACD is for real but don't believe cables matter.
Some believe that speaker cables can make a difference, but mainly because of wire gauge, and possibly interconnects too, but power cables can't be heard.
Others are full blown cable nuts, but don't buy into isolation and mass loading tweaks.
And naturally, there are tweakaholics who isolate and damp everything but are sure that Shakti stones are BS.
My point is that there's a whole spectrum of opinions out there about what's audible and what's not. Most of us lie somewhere in the middle, arriving wherever we are based on our hearing and what we've been exposed to. And at every point in the spectrum there will be a subset of people who are cock-sure they can reliably hear whatever things they happen to believe in and are just as sure that nobody else can hear the things they don't believe in. Don't fall into that trap. No matter where you lie in the spectrum, your belief system can be shattered when you suddenly discover something you haven't heard before, or when you fail a blind test you thought you could pass. Ironically, some people have worked their way through several levels believing they have found the truth each time, and yet still haven't learned their lesson.
"And at every point in the spectrum there will be a subset of people who are cock-sure they can reliably hear whatever things they happen to believe in".No, there are simply some people who don't have any preconceived ideas about whether cables/isolation devices/mains cable etc etc actually affect sound reproduction but are prepared to be open minded enough to suck it and see.
If someone bought an expensive pair of cables after reading positive reviews of that cable they might just convince themselves that the cables do make an improvement - the placebo effect/power of suggestion so to speak.
If on the other hand someone buys a ridiculously priced pair of cables to sell on at a profit to someone who believes the rave reviews, but ends up trying the cables in his system out of curiosity only to be dumbfounded by the transformation and subsequently keeps the cables, then that person is not 'hearing something they believe in' but has allowed a personal evaluation to guide his wallet.
This is what happened to me and as I said earlier, the difference was night and day.
I'd suggest that there is a subset of people who definitely do allow their preconceived beliefs - prejudices - to close their minds to certain audio phenomena, and even now I find myself evaluating claims of certain tweaks on a purely logical basis without actually trying them myself.
For instance I cannot think of any possible way that the Shakti Stones can possibly affect sound reproduction so in my own mind I think they're an outright con, but I would not dismiss them out of hand as I realise that I might well be proved wrong.
Another reason the test may not be fair is that it is an observed and repeatable phenomonon that the types of effects Randi challenges are inhibited by the presence of a professional magician in the same room.
Um, can someone please tell me where the presence of a "professional magician" in the room during a test was mentioned? Where do you people get such strange notions?

As a matter of fact, I insist on NOT being present on those occasions, except in the rare cases in which the testee has asked that I attend. The person being tested ALWAYS has complete control over when and where a test will be done, and who will or will not be present. You see, we've been through all this "negative vibrations" crapiola before. Many times.
I see that most of those offering these flaccid objections, haven't even read the terms of the challenge. We're the genuine people involved here; it's the scam artists you should be interested in putting to the test.
What do they offer? Promises, pompous "expertise," lots of obfuscation. What do we offer? A million dollars, to use as the winner wishes, payable in answer to a fully legal obligation into which the JREF has VOLUNTARILY entered! THEY WON'T APPLY -- OR EVEN TROUBLE TO SAY, "no thanks"! Just who's the bad guy here....?
I can't spend too much more time on this with you folks. I have a weekly web page to prepare -- which won't have any audio reviewers or manufacturers accepting our offer....
"Another reason the test may not be fair is that it is an observed and repeatable phenomonon that the types of effects Randi challenges are inhibited by the presence of a professional magician in the same room."In that case it should be agreed that Randi stays in another room; problem solved.
That aside though, the pressure of putting your reputation on the line and the prospect of winning $1,000,000 may mean it's impossible to relax and hear differences that you might previously have been able to discern.
What I'd suggest on the whole is that reviewers might be guilty of exaggerating improvements they hear, and I for one switch off when reading that such and such suddenly transformed the music from two dimensions to three, and what was once digital now sounded amazingly smooth and analogue-like, etc...etc...etc..
Roy Gregory of Hi-Fi+ is the main culprit, and every single accessory in the AudiophileCandy catalogue seems to have a quote from Roy eulogising its amazing sound-enhancement properties.
I'm afraid my sarcasm was a little too subtle. :)

Randi is a master of prestidigitation and can reproduce Uri Geller's spoon bending, do card tricks, read your mind, levitate or whatever. I'll bet he could even convince Julian Hirsch that 6 feet of $500 speaker cable sounds unequivocally better than 6 feet of zip cord. And when he's done he can show you exactly how you were suckered.
The phenomena that disappear when he's present disappear because he knows how the con is done, and the flim-flam men know he knows.
"I'm afraid my sarcasm was a little to subtle. :)"Yep you got me; I've learned not to assume intelligence in posters until they actually demonstrate they have brain cells going into double figures.
"Randi is a master of prestidigitation and can reproduce Yuri Geller's spoon bending, do card tricks, can read your mind, levitate or whatever".Randi did the old mind reading act with me when I took up his challenge; He said "At this moment in time you're thinking of all the women, cars and holidays you could have if you can just con the old bearded twit out of his million bucks...".
Dammit!
The challenge stipulates that, as a first step, you are to demonstrate your ability to a member of JREF, typically and usually in your own home.

The ability you feel confident of is part of the challenge. Go for it.
Since James Randi hasn't included cables in his challenge, there's no chance of me making a bid for his million, but if I was one of the reviewers who'd suggested that the subject of his challenge ('magic' stones) made an immediate and obvious improvement to the sound of their system, I can't see any reason why Randi's proposal would present a problem.

Unless of course the reviewers were being less than honest in their appraisal of the aforementioned stones?
Best Regards,
Chris redmond.
So why don't you wright him a letter to see if he will extend his challenge to include cables?
"So why don't you wright him a letter to see if he will extend his challenge to include cables?"Because cables ARE different and it is easy to show for instance that capacitance and impedence vary between designs.
Then you should be able to pick up a cool million with little effort.
Magnetar
nt
Best Regards,
Chris redmond.
nt
....I just happen to have spelt it wrong. Got my vowels mixed up that's all.....I'm not a proper writist.
Best Regards,
Chris redmond.
Magnetar
If the manufacturer or reviewer refuses the million dollar purse they are either too lazy to go for it or just plain liars.
Someone might take the challenge if Randi didn't force them to pay both their own and his expenses.

A good place to start this discussion would have been for Randi to 1) provide some evidence that the reviews were fraudulent and 2) provide some evidence that the Shakti Stone does not and cannot have any effect.
.
I find it nearly incredible to find that such a large percentage of you folks have no understanding whatsoever of the nature of the JREF challenge. It's all defined, carefully, on our web page. Several of you have chosen to invent impossible or illogical rules and provisions of the tests, for some reason. These canards get repeated and published, and become items that we have to spend endless hours refuting.

The question is, simply: why do these reviewers, who are professionals trusted to provide expert evaluations to consumers, avoid doing a simple test that will a) show that they do have the expertise they claim, and b) make them one million dollars in negotiable bonds....? I think I know the answer: either a) these reviewers have no confidence in their ability, or b) they know they're fakes and can't do it. I see no other possibilities.
Anyone who turns down an easy million dollars is intellectually challenged, or is dishonest. Prove otherwise...
The question is, simply: why do these reviewers, who are professionals trusted to provide expert evaluations to consumers, avoid doing a simple test that will a) show that they do have the expertise they claim, and b) make them one million dollars in negotiable bonds....? I think I know the answer: either a) these reviewers have no confidence in their ability, or b) they know they're fakes and can't do it. I see no other possibilities.
What about the obvious one? There exists no test methodology that can prove their claims. I don't know if these stones work since I've never tried them (and don't plan to). But I'm quite sure that it would take me a great deal of time, effort, and money to develop a reliable, repeatable, and sufficiently sensitive test methodology before I could demonstrate with statistical significance that I can hear differences between interconnecting cables. Food companies have invested countless $$$ for R&D and training to get good results from taste testing. I don't see why listening tests would be any easier.
You talk of a "simple" test but design of experiments is anything but simple. Many top scientists spend the bulk of their careers trying to develop tests to demonstrate and characterize phenomena that have subtle effects like audio tweaks do. A good example is neutrino detection. According to your simplistic viewpoint, the field of particle physics is full of "fakes".
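To put a number on how un-simple the statistics get, here is a rough sketch (Python; the trial counts and the assumed 70% hit rate are illustrative figures of my own, not anything Randi or the poster specified) of how many forced-choice trials it takes before a real but modest ability separates cleanly from coin-flipping:

```python
from math import comb

def p_at_least(n, k, p):
    """P(at least k of n independent trials correct), each correct with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def pass_mark(n, alpha=0.05):
    """Smallest score whose probability under pure guessing (p = 0.5) is below alpha."""
    return next(k for k in range(n + 1) if p_at_least(n, k, 0.5) <= alpha)

# Assumed for illustration only: a listener who is genuinely right 70% of the time.
for n in (10, 16, 25, 50, 100):
    k = pass_mark(n)
    print(f"{n:3d} trials: need {k:3d} correct; "
          f"a 70% listener passes {p_at_least(n, k, 0.7):.0%} of the time")
```

Even a listener who really is right 70% of the time fails a short run more often than not, which is why the number of trials and the pass mark have to be nailed down before anyone's credibility is put on the line.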
Especially if the second site has a professional magician in the room. That has been demonstrated to be a factor which inhibits the observance of many phenomena.
Got any citations on that one?
This ridiculous fallback position was invented by those who failed such simple tests, to excuse their failures. There are zero citations to be had -- except from the scam artists who fear and avoid being tested.These reviewers -- and the manufacturers of the spurious devices -- will not even RESPOND by saying, "no thanks." They simply stay silent and wait for the matter to go away. And the audio consumer public allows that to happen, they continue to snap up the products and services, and the scammers laugh all the way to the bank with their money in hand.
I must add that these people do have a very obvious responsibility to answer such inquiries. They are paid to be experts. How about a million dollars if ANY OF THEM can prove that they are? That seems like a very simple question....!
Hello....?
Nobody's ever demonstrated a claim in the preliminary examination, remember?

And even if you didn't get the $1M, it would go a long way toward substantiating anybody's credibility to be able to say that they could do it under familiar conditions, the conditions under which they make their evaluations and recommendations.
that makes me dislike this hobby sometimes.
as accessories in their systems- i.e. Fremer, Stern, Scull, Deutsch, Phillips and Willis. I pulled this info from Stereophile's website. Why would these reviewers include Shakti stones in the systems they use to review new equipment for any reason other than that they do make a difference?
I was given a Shakti Stone some years ago. When I removed the screws that attach the cover of my preamp (to afford easy internal access), I put the Shakti on the cover to prevent it from vibrating and buzzing in sympathy with high SPLs. Quite effective in that application. :-)
an environment conducive to selling advertising as much as it is to do anything else. As professionals, once they endorse a product, they're pretty much stuck with using it, else their opinion of it becomes moot (particularly with an add-on product such as a stone - if it works they have no good reason not to use it, whereas they could get away with recommending a CDP and not using it, simply because you can only have so many CDPs in one system). Consumers should always be aware of this; we live in a buyer-beware world. This is not to say that any reviewer's claims/reviews are not their honest opinions and/or facts as they see them, but it does make sense to keep in mind that no magazine could survive without its sponsors.I have never tried a Shakti stone nor the Belt tweaks being discussed, so I don't have an opinion on whether or not they work. I do think it is healthy for the audiophile community to question the claims made by each company, or any company marketing any type of new technology.
As for why the reviewers use the stones in the systems they use for reviewing new equipment, that is a bit of a mystery to me. If I were a reviewer, I'd strive to review new equipment in an environment I believed to be most typical of one of my readers. It's not likely that the bulk of Stereophile's readers have the stones in their systems, so I think it's a bit silly for the reviewers to use them, however, if the reviewers are using the stones in such a way that is very unlikely to affect the equipment which is under review, the practice is more than likely harmless.
"Harmless"? What's harmless about taking hundreds of dollars from an innocent/naive customer for a useless product? Think: if you went to the grocery and bought a huge jar of, say, caviar, and upon returning home discovered that there was only water in the jar, would you accept the store manager's reply that the water is "more than likely harmless"? Or would you accept that same statement from the persons who packaged the water as caviar....?I don't think so. So, is the audio world some sort of special place where outright scams are "harmless" fun....?
My post was about reviewers and reviewers only.Reviewers do not take money from consumers, although they could certainly be considered cheerleaders in support of the type of transactions you are speaking out against.
Also, the nature of your post would suggest that you only remembered one word from my post: harmless. Here is the sentence in which I used it:
"It's not likely that the bulk of Stereophile's readers have the stones in their systems, so I think it's a bit silly for the reviewers to use them, however, if the reviewers are using the stones in such a way that is very unlikely to affect the equipment which is under review, the practice is more than likely harmless."
I don't see how this statement could be very controversial, particularly if you believe the stones in question have no effect.
You wrote:
"So, is the audio world some sort of special place where outright scams are "harmless" fun....?"Absolutely not. There is a distinct problem in fighting this type of scam, though (assuming for example's sake the the stones in question are an outright scam (which I fully acknowledge is a possibility)). The problem is that the company and/or product has many happy customers. It is tough to prove that fraud has been committed when there are happy 'victims'. I do completely agree that unsubstantiated claims should never be allowed, but one must recognize that in the audio world, disproving claims is evry bit as difficult as proving claims, particularly when you're dealing with a company that has happy customers who swear the claims are real (it doesn't matter how many times you make a happy customer fail a DBT, most will remain happy).
I wish that instead of focusing on the word 'harmless' you had focused on what I thought were the most important words in my post, 'buyer-beware'.
Please keep questioning everything - as I said, I believe it is healthy for the audiophile community.
Cheers,
Pete
Question:
" Why would these reviewers include Shakti stones in the systems they use to review new equipment for any reason other than that they do make a difference?"Answer:
Because they BELIEVE they make a difference. Not exactly the same thing.
Heck, I'm thinking of borrowing a pair to do some SBT'ing just to see if they are even audible - if they are, maybe I'll send him my change..
Magnetar
...that means it has some magical property?
.
"Why would these reviewers include Shakti stones in the systems they use to review new equipment for any reason other than that they do make a difference?"That's too easy! One reason could simply be that they THINK those things make an audible difference. That doesn't make it so, of course.
____________________________________________________________
"Nature loves to hide."
---Heraclitus of Ephesus (trans. Wheelwright)
Even if some difference is imagined, it's still perceived as different, isn't it? If it takes a Shakti stone to make me imagine an improvement, why does anyone else care if I buy them or even sell them?

I reckon people care because they like to have someone they can ridicule - it makes them feel better about themselves.
Nobody cares if you buy them or sell them!

The point is, if you set yourself up as being a competent judge of their effects (i.e. a reviewer) you have an obligation to be able to substantiate your claims.
So I can make all the unsubstantiated claims I want as a seller or manufacturer? That's OK with you?
"One reason could simply be that they THINK those things make an audible difference. That doesn't make it so, of course."I don't own the Shakti Stones but I guess I'm naive: 6 professional reviewers, ALL only thinking they hear an audible difference?
I suppose it's possible they include Shakti Stones in their list of associated equipment used in the review only because they have them. Although in several reviews, the audible effects of Shakti stones, once inserted in the reviewer's system, and sometimes in the piece being reviewed, are discussed.
nt
It's lose, lose for the reviewer. If they can't substantiate the "dramatic" differences made by the products they review, then their credibility goes out the window. So they make excuses like the testing procedure is flawed, or "I heard the difference and that's all I need to prove", Randi doesn't really have the million dollars, or the sun got in their eyes. All the more reason to take any product review with a healthy dose of scepticism.
test (whatever it is/has been/will be).
Okay, DeKay, prove your point. Or are you offended by an honest man?

Yes, I guess that anything logical, responsible, rational, and/or real, could sound like BS to you. I put my money where my mouth is. Where's YOUR money?
As for your astonishment that no one has passed the preliminary test, try hard to consider this: MAYBE THERE AREN'T SUCH POWERS/DEVICES/SYSTEMS! Even think of THAT possibility? Watch your chimney next December 24th evening. THE FAT GUY DOESN'T SHOW UP BECAUSE HE DOESN'T EXIST! Sorry to shock you....
.
nt
Yes, "we" do -- those of us who read the web page at www.randi.org, that is. They try, and they fail. I'm off to Europe (Germany, Sweden, Belgium, Italy) in the Fall, to arrange some more tests -- which will be conducted by unpaid, independent, persons on behalf of the JREF.
Why do you think Uri Geller is off the talk show circuit?
Are you Bascom King, formerly of "Audio" magazine? I have wondered this for some time and I would have asked you this privately, but i can't find a way to send you an email : ) Sean
I'm Balderdash King, formerly of the famous Kings of the Highwire at the Barnum and Bailey Circus. I've escaped from their clutches. Now only you know where to find me.
in that you assume that only reviewers hear a difference. What about the thousands of people who bought the products? Shouldn't they also be required to prove that they heard a difference as well?
Too many people place WAY too much emphasis on what reviewers write - we are simply sharing our experiences, which readers will either agree with because they have had the same experience (to whatever degree), try out to see if they will/might share the experience (to whatever degree), or disagree with because they did not share the experience (to whatever degree). Credibility is hardly the issue.
A question I have been wanting to ask all the people who feel that DBTs are the way to go (as well as all those who feel that we are just making this up as we go, we are in it for the money, it is some well organized plan to separate people from their money, or whatever) is this: "What do you really believe with respect to audio components and such?" Do you feel that they (based on having the same specs) are all the same sonically? That any differences are so subtle that they make no real difference? That any well made/designed cable, amplifier, CD player, etc., will sound the same as another? If so, why are you here?
Dave Clark
Why, I must ask, would a customer be required to prove that he/she could tell the difference? I don't follow this "reasoning," at all! I don't have to prove that my car performs as advertised -- though I certainly could look into that claim. And if the dealer has said that I'd get X mileage, and I didn't, I'd sure as hell try to get my money back....Apparently audio folks don't care....
s
What I've noticed: every time DBT is mentioned, some persons pull any trick out of their orifices, attempting to discredit the argument. Here is a strawman argument: claim that DBT proponents somehow believe that 'all equipment sounds the same'. So - somehow, if I don't believe in Shakti Stones, CD demagnetizers, or 'Quantum purifiers', I automatically belong to those who 'believe that all equipment sounds the same'.
in that I implied a bit too much there. My question was more of a knee-jerk reaction to the post and where it could spiral down towards - though it is quite justified. As to the effect of the items you list, either they make a difference for you or they do not. Why? Not necessarily based on what the manufacturer claims... but then we are still learning.
Dave Clark
Surely not posting the above garbage here!
Magnetar
Randi simply offered $1,000,000 to any person who could demonstrate the validity of their claims.

You don't have to prove anything, unless you'd like to collect the $1,000,000.
either you agree with me or you don't. As to the money, not really an issue as in reading his site, I am not sure that I could prove what I hear (or not hear) to his satisfaction (with regards to the restrictions that he applies to others).
On the other hand, perhaps Ben Piazza of Shakti could do it. Has he made the offer to Ben? I will admit that I have not received any email from Randi (that I am aware of - it may have been deleted by my spam filter). Even so, not interested. More money equals more headaches!
Dave Clark
Mr. Clark: Interesting! Though it's obvious to everyone else here that there's absolutely no requirement for YOU to prove ANYTHING "to [my] satisfaction," since any tests would be done with YOUR people, the way YOU want to operate, with YOUR agreement, with YOUR equipment setup. But you knew that, Clark, and this is obfuscation.You state that "the money [is] not really an issue." Really? You're fabulously wealthy, then? You also say, "More money equals more headaches!" Ah, but just think of the aspirins you could buy with a million -- and have lots left over!
Let's examine a couple of your other comments: "with regards to the restrictions that he applies to others." Since you're obviously -- perhaps purposefully -- ignorant of my work, you've no way of knowing anything about any "restrictions" I might have ever "applied" to any tests. However, logical, definitive, simple, straightforward, and rational, may be "restrictive" to you, so you wouldn't want to get involved, and I fully understand.

". . . perhaps Ben Piazza of Shakti could do it. Has he made the offer to Ben?" Dave, you silly rascal, you know that this offer is open to anyone and everyone, in any country, any age, and any I.Q. level. Mr. Piazza certainly qualifies! Perhaps you'd be so kind as to personally forward my offer to him, then? Wow! This is exciting! Ben, where are you? And I should add, Dave, that yes, I specifically DID challenge Ben, via e-mail and on my web page -- where an average of 90,000 daily page hits are entered, internationally. And Ben won't respond. Why do you suppose this is so, Dave?
Hello, Ben?
Dave, you're the ONLY one in the audio field who has responded to my challenge in these audio matters, even though you weren't one of those I specifically contacted. And you're "not interested." Gee, I wonder why. I'd ask those here on the forum to put pressure on those others I specifically challenged on these matters....!
You ought to stick to reviewing Snap-On.
a
to a completely preposterous statement. I was no more insulting in my response than the poster was to the intelligence of the readers of the thread.
x
.
a
Name-calling - one of the strongest elements of a cogent argument. Is this where one counters with "So's your old man?"
Troll.
Apparently a troll is someone who doesn't agree with you, or calls you on being an absolute idiot.

So be it.
"More money equals more headaches!"

I would think more credibility would equal fewer headaches too.
Have you heard a difference with stones or pebbles?
Magnetar
on my part. Sure, I would love to have a million dollars. But I do not see me getting it from Randi.
Dave Clark
No, Dave, you won't get it until you first fill out the application.

Go to randi@randi.org, look at www.randi.org/research/challenge.html and we're off! You and Ben Piazza can compete together!
> > All expenses such as transportation, accommodation, materials, assistants, and/or all other costs for any persons or procedures incurred in pursuit of the reward, are the sole responsibility of the applicant. Neither the JREF nor JR will bear any of the costs.

You set all the rules and get paid expenses up front? It's all rigged in your favor.
How many people do you test a year? Or is it all just a marketing campaign?
http://www.randi.org/research/challenge.html

This statement outlines the rules covering the offer made by this Foundation (JREF) concerning psychic, supernatural or paranormal claims. Since claims vary greatly in character and scope, specific rules must be formulated for each applicant. All applicants must agree to the rules set forth here before any formal agreement can be entered into. Completing this form is mandatory; there are no exceptions to this rule.
Applicant will declare agreement by signing this form where indicated on the reverse before a notary public, and returning the form to the James Randi Educational Foundation. Applicants must state clearly what they claim as their special ability, and test procedures must be agreed upon by both parties before any testing will take place. All tests must be designed in such a way that the results are self-evident, and no judging process is required. We do not design the protocol independently of the applicant, who must provide clear guidelines so that the test may be properly set. All applicants must clearly identify themselves properly before any discussion takes place.
Due to the large amount of correspondence exchanged in this process, applicants must send a stamped, self-addressed envelope (SSAE), in the case of foreign letters only a self-addressed envelope, to accompany each piece of correspondence requiring an answer. This offer is administered by the JREF, and no one may negotiate or make any changes, except as set forth in writing by James Randi (JR). All correspondence must be written, and will be answered, in English only.
Upon properly completing this document and agreeing upon the test protocol, you will receive your application back signed on the reverse by JR. The applicant then becomes eligible for the preliminary test, which, if successful, will result in the formal test.
I, James Randi, through the JREF, will pay US$1,000,000 to any person who can demonstrate any psychic, supernatural or paranormal ability under satisfactory observing conditions. Such demonstration must take place under these rules and limitations.
1. Applicant must state clearly in advance, and applicant and JREF will agree upon, what powers or abilities will be demonstrated, the limits of the proposed demonstration (so far as time, location and other variables are concerned) and what will constitute both a positive and a negative result. This is the primary and most important of these rules.
2. Only an actual performance of the stated nature and scope, within the agreed-upon limits, will be accepted. Anecdotal accounts of previous events are not accepted or considered. We consult competent statisticians when an evaluation of the results, or experiment design, is required. We have no interest in theories or explanations of how the claimed powers might work; if you provide us with such material, it will be ignored and discarded.
3. Applicant agrees that all data (photographic, recorded, written, etc.) gathered as a result of the testing may be used freely by JREF in any way that Mr. Randi may choose.
4. No part of the testing procedure may be changed in any way without the agreement of all parties concerned. JR may be present at some preliminary or formal tests, but will not interact with the materials used.
5. In all cases, applicant will be required to perform the preliminary test either before an appointed representative, if distance and time dictate that need, or in a location where a member of the JREF staff can attend. This preliminary test is to determine if the applicant is likely to perform as promised during a formal test. To date, no applicant has passed the preliminary test, and this has eliminated the need for formal testing in those cases. There is no limit on the number of times an applicant may re-apply, but re-application can take place only after 12 months have elapsed since the preliminary test.
6. All expenses such as transportation, accommodation, materials, assistants, and/or all other costs for any persons or procedures incurred in pursuit of the reward, are the sole responsibility of the applicant. Neither the JREF nor JR will bear any of the costs.
7. When entering into this challenge, the applicant surrenders any and all rights to legal action against Mr. Randi, against any persons peripherally involved, and against the James Randi Educational Foundation, as far as this may be done by established statutes. This applies to injury, accident, or any other damage of a physical or emotional nature, and/or financial, or professional, loss or damage of any kind. However, this rule in no way affects the awarding of the prize.
8. At the formal test, in advance, an independent person will be placed in charge of a personal check from James Randi for US$10,000. In the event that the claimant is successful under the agreed terms and conditions, that check shall be immediately surrendered to the claimant, and within ten days the James Randi Educational Foundation will pay to the claimant the remainder of the reward, for a total of US$1,000,000. One million dollars in negotiable bonds is held by an investment firm in New York, in the "James Randi Educational Foundation Prize Account," as surety for the prize funds. Validation of this account and its current status may be obtained by contacting the Foundation by telephone, fax, or e-mail.
9. Copies of this form are available free of charge to any person who sends the required SSAE, marked on the outside, "Challenge Application," requesting it, or it can be downloaded from the Internet, at www.randi.org/research/challenge.html
10. This offer is made by James Randi through the JREF, and not on behalf of any other person, agency or organization, though others may become involved in the examination of claims, others may add their reward money to the total in certain circumstances, and the implementation and management of the challenge will be carried out by James Randi via the James Randi Educational Foundation. JREF will not entertain any demand that the prize money be deposited in escrow, displayed in cash, or otherwise produced in advance of the test being performed. JREF will not cater to such vanities.
11. This offer is open to any and all persons, in any part of the world, regardless of gender, race, educational background, etc., and will continue in effect until the prize is awarded. Upon the death of James Randi, the administration of the prize will pass into other hands, and it is intended that it continue in force.
12. EVERY APPLICANT MUST AGREE UPON WHAT WILL CONSTITUTE A CONCLUSION THAT, ON THE OCCASION OF THE FORMAL TEST, HE OR SHE DID OR DID NOT DEMONSTRATE THE CLAIMED ABILITY OR POWER. This form must be accompanied by a brief, two-paragraph description of what will constitute the demonstration. PLEASE: Do not burden us with theories, philosophical observations, previous examples, or other comments! We are only interested in an actual demonstration.NOTE: No special rules, exceptions, conditions, standards, or favors will be granted, without mutual agreement of those concerned — in advance. Any applicant who refuses to agree to meet the rules as outlined here, will not be considered to have ever been a claimant. Only complete agreement with these rules will allow the "applicant" to become a "claimant." Applicant, by signing, notarizing and returning this form, signifies agreement with all of the above rules. Be advised that you should conduct proper, secure tests of your own to determine whether your abilities or claims are actually valid. Some persons who failed to do this have undergone serious embarrassment and emotional stress as a result. This advice is offered only so that you might be spared these problems. Your application signed by JR will be returned to you by mail after a test protocol has been mutually agreed upon and a test date and location have been determined.
James Randi Educational Foundation
201 S.E. 12th Street
Fort Lauderdale, FL 33316-1815 U.S.A.
Notarized:
(signature of applicant)

Please be advised that several applicants have suffered great personal embarrassment after failing these tests. I strongly advise you to conduct proper double-blind tests of any ability you believe you can demonstrate, before attempting to undergo a testing for this prize. This has saved me and many claimants much time and work, by showing that the powers were quite imaginary on the part of the would-be claimant. Please do this, and do not choose to ignore the need for such a precaution.
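Rules 1 and 12 leave the actual pass/fail line entirely to negotiation between the applicant and JREF. Purely as a hypothetical illustration (none of these trial counts or pass marks come from the JREF document above), this is the kind of arithmetic both sides would presumably want to see before signing, showing how unlikely a pure guesser is to get through a two-stage test:

```python
from math import comb

def chance_pass(n, k):
    """Probability that pure guessing (p = 0.5) scores at least k of n."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

# Hypothetical two-stage criterion, invented here purely for illustration:
prelim = chance_pass(10, 9)   # preliminary test: at least 9 of 10 correct
formal = chance_pass(20, 16)  # formal test: at least 16 of 20 correct
print(f"Guesser passes the preliminary: {prelim:.4f}")
print(f"Guesser passes the formal:      {formal:.4f}")
print(f"Guesser collects the prize:     {prelim * formal:.6f}")
```

A strong, repeatable ability would clear both stages comfortably, which is why agreeing the criterion in advance protects the claimant as much as it protects the prize.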
I liked this part: "Applicants must state clearly what they claim as their special ability." Have any of the reviewers Mr. Randi has challenged ever asserted "special ability"? Not to my knowledge, unless being able to write a little better than the average audiophile constitutes "special ability". So Randi demands that the reviewers make a claim of "special ability" before they can take a shot at winning his $1M, even though "special ability" is not the issue here?
Hmmm- anyone should be able to read that and interput the 'special ability' as being the claim in question.
Magnetar
> > anyone should be able to read that and interput [sic] the 'special ability' as being the claim in question < <

I'm not so sure about that. Given that the prize is $1,000,000, the stakes are extremely high and such a technicality could be used to deny payment. This is a contract, after all. No one should enter into this challenge without the support of a serious legal team to eliminate any ambiguity or weasel room. For example, Randi's letter states "WE ASK FOR NO INVESTMENT FROM THEM, WE DO NOT CHARGE THEM FOR PARTICIPATING" yet the application says "All expenses such as transportation, accommodation, materials, assistants, and/or all other costs for any persons or procedures incurred in pursuit of the reward, are the sole responsibility of the applicant". Obviously the cost of the test is an "INVESTMENT."
Randi is "offering" a million bucks; but wants the applicants to send stamped addressed envelopes for his replies; hmmmm.
Dave
Later Gator,
Crank up your talking machine, grab a jar of your favorite "kick-back", sit down, relax, and let the good times roll.
That is why I pondered over how genuine his offer was, and people like "Oneballismissing" don't get it.
LOL - having to send all that change in for a million bucks is really hard to swallow. It's like, who would do that!

Hint -- How many one dollar lottery tickets were sold last week?
up front to a third party. That would demonstrate his ability to pay and his sincerity.
Yes, that's precisely what we would do, if we were idiots. We're not. The JREF benefits from the conservatively-invested million, in the account where it resides with Goldman Sachs. If it were in cash, it would not earn proper returns for us.

Tell you what, when you have your million together, you turn it into cash and put it in your mattress. That is, if you're "serious."
It has been a regular claim by those who can't back up their BS that the money doesn't exist - since the days when the prize was only $10,000.
You may ascertain that the JREF has the $1,000,000 in liquid assets at any time by contacting them at www.randi.org.
Many people have received satisfaction that the funds are indeed available before attempting to prove their prowess. Nobody's yet proven anything except they are as full of sh*t as you obviously are.
I simply made a comment about methodology. It is common practice under similar circumstances for the funds to be placed in trust as a public demonstration of intent and ability.
We've amply demonstrated our "public demonstration of intent and ability" by legally committing ourselves to this challenge. It's published, clearly and definitively, in four languages -- including Chinese -- internationally, and we are legally bound to fulfill that obligation. Read the web page....
You claimed Randi needed to demonstrate his ability to pay, to demonstrate his sincerity.

You said this without knowing the least thing about JREF or the standing challenge that's been in place for years.
They might have to make a new one. Are you a commissioned employee, or a free-lancer? Your "you implied" crap is pretty weak.

That's too bad. I've read many of your posts before, and always thought you were a level-headed contributor. That was stupid of me. You won't fool me twice, though.
your comments are embarrassing. the first question you should ask yourself is whether what you write is advancing the discourse: you have failed here: personal attacks and general rudeness don't get anyone anywhere. the second question you should ask is whether there have been any assumptions in the initial thesis that are questionable: you have failed here as well. there might not be a scientific explanation for why these things work. shakti stones make a clearly audible difference in my system, but i can't for the life of me explain why. therefore i can't win the money, but i'm quite satisfied that my minimal investment in the shaktis made a difference i get to enjoy every day...
Robert, where did you get the idea that you'd have to explain how you can discern this difference? Simply being able to tell whether or not the Shakti stones are in use would win you the million....!

I have no idea where people get these strange ideas about the JREF challenge!
Yes, I know that you're absolutely convinced the stones work, and that you probably won't be willing to test that notion -- but isn't a million dollars enough incentive?
No? Que lastima! Quelle dommage! Etc.....
here you go: they work. now send me my million.
I don't know what attack I initiated here, though I did respond to some foolishness.

The object of the challenge is to demonstrate that "shakti's made a difference i get to enjoy every day".
It has nothing to do with posing a theoretical explanation.
The application form/agreement EXPLICITLY SAYS THEY'RE NOT INTERESTED IN THEORIES OR HYPOTHESES!
All that's required is that you can demonstrate the ability to discern the difference you claim to hear. THAT'S ALL.
your claims to have not initiated any attack are specious. your quote is below. if you can't participate like an adult, you shouldn't expect serious responses to your missives.
"Nobody's yet proven anything except they are as full of sh*t as you obviously are."
subjective criteria are often not measurable. this "discussion" occurred in a similar fashion over at the cables asylum. it got more than ugly. i don't want to repeat.

try this: pretend you can hear the difference. now you figure out how to prove you can...
Not even for a million, Robert....?
NOBODY, FOR THE HUNDREDTH TIME, SAID ANYTHING ABOUT MEASUREMENT!

Simply demonstrate that you can hear what you claim to hear.
before you go UPPER-CASING ME: without an objective set of criteria, i.e., MEASUREMENTS, there can be no way to demonstrate what I hear. i don't know how to objectively measure predilections. no one does. that's why audio criticism is in the realm of opinion and not science. why is THAT so hard for YOU to understand? you've reached a dead-end here, pal.
Yes, wunhuanglo, I believe we've reached a true dead end here.

He'll never understand. Too bad. And he could have so easily won the million!
that we hear a difference and one that we prefer? I have reviewed the Shakti products and my wife has written about Belt's treatments. And your point is what...?
Dave Clark
Sheeesh! How could anything be easier?

Folks, all you have to do to win a million dollars is to be able to tell when the Shakti Stones are in use, or are NOT in use! Could that be simpler? No numbers, no degrees of effect, no subtleties at all, just YES or NO.
And you win a million dollars.....!
Mr. Randi, can this 'in or out' test be done in a system of our own choosing?
in other words, these types of tweaks are 'final tuning' devices for mature systems. their effects are most subtle but repeatable......the only issue is a certain 'settling time' that Shakti Stones seem to have.....they need a few hours to really 'work'. if a system does not already have a very good balance and resolution.....the Shakti is not useful.
you would almost need identical components side by side; one using a Shakti and one without.....allow them to settle in.....and then A/B. that would eliminate any issues of break-in or warm-up.
i would expect that you would first need to test the two components to ensure that they sounded identical.
this would only work where the listener would have their own system as a reference IMHO.
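If the settling time really is a few hours, rapid-switching ABX obviously doesn't fit, but a slow randomized in/out schedule still could. Here is a minimal sketch (Python; the 12-session count, the seed, and the scoring are my own assumptions, and it presumes a helper the listener never sees places or removes the stone) of how such a test might be run in the listener's own system:

```python
import random

def make_schedule(n_sessions, seed=None):
    """Balanced, randomly ordered list of 'in'/'out' sessions, known only to the helper."""
    rng = random.Random(seed)
    schedule = ["in", "out"] * (n_sessions // 2)
    rng.shuffle(schedule)
    return schedule

def score(schedule, calls):
    """How many sessions the listener called correctly."""
    return sum(actual == call for actual, call in zip(schedule, calls))

# Example: 12 long sessions in the listener's own system, each after a few hours
# of settling time, spread over several days so nobody is rushed.
schedule = make_schedule(12, seed=2004)  # held by the helper until the end
calls = ["in"] * 12                      # placeholder for the listener's actual calls
print(f"{score(schedule, calls)} of {len(schedule)} correct")
```

The schedule stays sealed with the helper until all the calls are in, so the listener just listens, at whatever pace the system and the stones are said to need.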
His point is if you can really hear a difference then it'll be an easy $1,000,000?

If you don't want the money, take it from Randi and donate it to charity?
It would seem if you could substantiate your claims you'd take the guys money and end all doubt?
...upon his idea of a proper test methodology. Maybe we should ask him for evidence that his methodology replicates the circumstances under which the original claims were made? And that the test itself doesn't change the outcome? (I'm not talking about fidelity concerns.) After all, when someone makes their observations, they aren't usually pushing ABX box buttons or worrying about outcomes or $1,000,000.

These concerns are usually dismissed as nonsense by the skeptical, which is a very anti-science thing to do. Science is about investigation, isn't it? Show me that these sorts of things don't influence the test results.
Nature doesn't have to arrange herself so that we can test her. Sighted opinions are definitely subject to bias, but this doesn't automatically mean that any blind test is better.
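That worry can at least be quantified rather than argued in circles. A small sketch (Python; the "relaxed" 85% and "under pressure" 65% per-trial hit rates and the 12-of-16 pass mark are pure assumptions, chosen only to show the shape of the effect) of how much a drop in sensitivity under test conditions inflates the failure rate:

```python
from math import comb

def p_at_least(n, k, p):
    """P(at least k of n trials correct) when each trial is correct with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

N, K = 16, 12  # hypothetical criterion: at least 12 of 16 correct
for label, hit_rate in [("relaxed at home", 0.85), ("under test pressure", 0.65)]:
    print(f"{label}: passes {p_at_least(N, K, hit_rate):.0%} of the time (per-trial p = {hit_rate})")
```

Whether test pressure really degrades listening by that much is itself an empirical question, and exactly the kind of thing the two sides would have to settle when they negotiate the protocol.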
It's about a claimant's and JREF's CONSENSUS METHODOLOGY!

Nobody has ever attempted to verify a claim unless they agreed that the test conditions are appropriate beforehand.
You are all just making up stuff out of thin air to dismiss the challenge - the same thing you love to do when somebody says all amplifiers sound the same.
I said:
> > These concerns are usually dismissed as nonsense by the skeptical, which is a very anti-science thing to do. < <

You said:
> You are all just making up stuff out of thin air to dismiss the challenge <
that all amplifiers, with reasonably close specs, sound the same?
.
what each poster thinks. That's irrelevant?
Since you hear a difference prove it to Randi and make yourself an easy $1 million.
First off, does he have a million dollars? I seriously doubt it. And if he does, is this issue really THAT important?
Second, do I really care if he doubts whether I heard a difference or not? Not really. Why should I care if he feels that neither product does what the manufacturer claims, when I have reported hearing a difference with either? I report what I heard and nothing more. I have nothing to prove to him - either it works for you or it doesn't.
Third, why does it matter to him that I heard what I heard? If they did not work for him, nor do they make any sense to him, and yet they worked for me (whether the manufacturer's claims are acceptable to me or not) - well, gee, I never understood why people have an issue with that.
You need to be more tolerant of differences in perceptions, values, opinions, and beliefs. No doubt, because I said that they made a difference (heard one), the idea is that both companies have now made a ton of money because all the people who bought their stuff are just so gullible, and that I have prospered in some way.
This is all so childish.
Dave Clark
Thank you. That is what this is all about. Why should anyone care whether I hear a difference in something or not? They're my ears and I'll listen to what I bloody well please. The degree it bothers someone else, shows the degree of personal insecurity that person has. I admit in moments of weakness it can bother me when people like Bose or a rack system. That shows I am insecure and intolerant at times. That is a fault that lies within me. Just as the fault lies within Randi for not tolerating those who like Shakti stones or Belt products.And what some people at AA, and elsewhere, have failed to address is: what's wrong with being gullible? It doesn't violate the Koran or one of the Ten Commandments, does it?
There again, those who judge others to be gullible display their intolerance and imply their own superiority.
Let's pretend you're a clinical radiologist.

Let's say you get a kid's X-ray and you say that you see a deep tumor in the center of his brain. Nobody else sees it, but because you have a George Bush-like personality, the family believes you.
Let's say that a surgeon buddy of yours operates on the kid and, lo and behold, there's no tumor. But in getting to the spot that you pointed out, there's so much damage the kid is paralyzed from the neck down.
The parents come to you looking for answers, and you say, "this is all so childish"?
So audio isn't life and death - does that mean you're in no way responsible for your published opinions?
So what you're saying is that you'd like some panel of experts to decide what hi-fi products are available for you (and others) to buy?
I'm saying that someone who positions themselves as an expert has a responsibility to be truthful, to be able to substantiate his expertise in a meaningful way.

Does the amount of money or the potential for consequences negate the need for integrity?
Reviewers merely offer opinions. That's all they have ever done, going back to such illustrious examples as George Bernard Shaw. It is a weak personality indeed who depends on others to make decisions for them. I've never expected a review to determine what I buy. I listen to reviews, just as I listen to friends opinions, then make a decision for myself.One assumes some degree of expertise by reviewers, or why else would they have gotten the job? An incompetent reviewer is the fault of the editor. As for integrity, the only integrity needed by a reviewer is to write comments true to their own feelings and opinions, uninfluenced by outside pressures. How does one prove the validity of a reviewer's comments? By listening and deciding to what degree one agrees with the reviewer.
OK, so there is no way for the reviewer to substantiate that he's not pulling stuff out of thin air.

But the editor is supposed to be able to tell whether he's just making it all up.
Are the editor and reviewer required to be bedmates in order for the editor to know the reviewer's "true to their own feelings"?
Then let's start fresh.
we aren't talking BRAIN SURGERY here! we are talking about audible differences. SOME people can't hear the diff between a table radio and a state of the art audio system. they shouldn't buy ANYTHING except a table radio.

i hear differences in cabling, speaker and interconnect. the differences aren't like black and white, more like different greys. if i prefer one grey to another, then i will go for that one. it's my decision whether to spend significant money on that difference. stones, dots, wire, cabinets, whatever. none of that will sacrifice my child's ability to walk or talk.
If a "professional", and that's anyone who receives compensation for their labor (including reviewing), claims something is a fact, then they have a responsibility to substantiate that fact.

How you spend your money is of no interest to anyone.
If you promote ideas about how someone else should spend their money, you have a responsibility to be truthful and accurate, not just opinionated.
you made it sound too intense. it's not. whether you subscribe to the idea that these things work or not is up to you.

amplifying that to the level you did is a false alarm. perhaps if you had said the plumber wanted you to use 6-nines purity copper on the re-pipe job in your house, it would be more equitable.
...regards...tr
There seems to be a real difference in semantics here, and maybe that's why I just can't seem to get the idea across.

A PROFESSIONAL OPINION is supposed to mean something. It's quite a different thing from a PERSONAL OPINION.
It’s my PERSONAL OPINION that broccoli sucks – as such it’s meaningless.
It’s my PROFESSIONAL OPINION that Microsoft is a good investment, or that you have no liability for taxes on unearned income, or that the child has a brain tumor, or that the building is safe after an earthquake and can be re-occupied.
My PROFESSIONAL OPINION carries some significance – that’s why doctors, lawyers, CPAs and engineers have liability insurance. When a PROFESSIONAL OPINION results in problems, those who rely on the opinion have legal recourse.
Now before another canard is thrown at my head, I'M ABSOLUTELY NOT SUGGESTING AUDIO REVIEWERS SHOULD HAVE LIABILITY INSURANCE.
I am suggesting that the concept of publishing responsibility-free opinion is irrational.
you're overstating the DANGER! it's obvious that you are abnormally passionate about this subject. over at audioreview.com, there are some overpassionate people who demand DBTs for any and everything. perhaps you can find solace there.

they even go so far as to say that sacd is being promoted by making the rbcd layer lousy! uuuuuuhhhhh RIGHT!
so if you want to find some like minded people, try that location.
...regards...tr
over the top. It hardly relates to audio. If people go out and buy a $99 Online because I felt it made a difference in my system and they feel that it was money down the drain, well gee, sorry. It worked for me. YMMV, deal with it. It is simply a report of what I heard here, take it or leave it.
Take responsibility for your own actions and decisions - gee, I am no more "enlightened" or blessed with supreme hearing and insight than any other "audiophile." Stop taking our opinions as "THE WORD" as we all have different preferences, biases, systems, tastes in music, etc. Use reviews as a way of narrowing things down - use your ears and brain.
Dave Clark
your opinions are worthless. One can gather as much valid information from a manufacturer's advertising.

Why bother with a reviewer's opinion - you're on your own anyway.
Going a step further, I guess people are getting suckered twice - first by the press then by the manufacturers.
are worthless. It is illegal to misrepresent facts. That is quite a different issue from a reviewer noting personal impressions. Noting personal impressions is the purpose of a review.
that what I hear may not relate to what you hear (or value). You have to decide if they are worthless or not. If you read what I write and you then go out and find that you hear the same thing, then they MAY have value. Only you can decide that.
Dave Clark
I simply cannot understand your position. Nobody else in this world, not lawyers, not doctors, not ball players, not umpires, not cooks nor waiters, can get away with what you claim is the latitude you enjoy in rendering your opinion.

If your findings have no objective validity, then they simply have no validity at all.
Or should I say out of your fingers? You've got to be kidding me.
you have really gone over the top, wunhuanglo. what would ever make you believe that any aspect of our hobby is entirely objective??? a reviewer doesn't feed you "truths," they feed you their opinions. over time, and with experience with the equipment commented on, one learns if the opinions voiced by a reviewer have relevance to one's particular point of view or not. i have read drclark's opinions for years, and i think i have a fairly good idea about where he stands relative to me. that helps me apply personal value to the subjective points he makes. there simply is no "objective validity." please let me know how you figured out that there is...
Whether you derive any satisfaction from a review is your own business.

But if a compensated professional provides an assessment of a product, he has a responsibility to be able to demonstrate the truth of that assessment.
You and the others who want to give a free pass to any crap that's written, so long as it's about audio and not medicine, are completely missing the essence of professionalism.
now don't go making this silly by extension. i'm not giving "a free pass to any crap that's written," i'm just wondering how one can ever measure an opinion.

if you have a way to prove what may be unmeasurable, i'll go in with you and we can split the money. i'll be available right after i figure out how many angels can fit on the head of a pin...
we need to agree to disagree. But I do find your analogy quite absurd (for obvious reasons).
Our findings only have "objective validity" if others agree. But that is my opinion - and that is all it is - an opinion based on my perceptions and experiences. See, I do not see myself as some audio guru - you can find those elsewhere.
Dave Clark
Just agree on what will be considered 'proof' then prove what you and your wife put in print! It's actually pretty simple dude.
Magnetar
if you read my post clearly, I am not so sure what your point is. Exposed? How? Just reporting on our experiences, nothing more. Either you agree, disagree, or whatever. Really doesn't matter. None of this is, like, life or death now, is it?
Dave Clark
This is what YOU say about your ADVERTISER's product:"Ben arrived with enough Shakti products to treat the neighborhood. He had boxes and boxes, and when I looked at him incredulously and asked "How many is enough?" his response was, "We’ll know that when we’re done!" This was good for him but bad for me, because by the time he was done I had eleven Stones and thirty-six On-Lines in my system. Ben spent the better part of eight hours playing the same damn song over and over and over again, as we first placed a Stone or On-Line here, then there, then here again. Imagine the horror of putting an On-Line in some spot, playing a song, taking the On-Line off, playing the song again, putting the On-Line back, playing the song again! After which Ben would either say, "Great, leave it there" or "Okay, that didn’t work for me, what did you think?" I would either agree or ask to do it again.1 If we both agreed, it was on to "Great, now let’s try one here," and the process started all over again. By the end of the day I was wasted, but was it worth it!
The effect of the first few Shakti products was not as apparent as when the effect became compounded. Each built on the others’ ability to eliminate EMI in the component on or under which it was placed. Music became more relaxed, with greater clarity. Space and ambience increased. The soundfield became considerably more open and defined. At a certain point, the effect became quite startling as another Stone or On-Line was added. Shazaam!
Funny how you think it all sounds good till you try something like this. Adding this many Shakti products elevated my system several notches. These are a must for anyone who feels that they have it all. Until you "Shakti-ize" your system, however, you may never know how it really sounds. As far as my system is concerned, they are here to stay.
Dave ClarkRetail Stones $230/each, On-Lines $99/pair"
..end quoteRandi says you are not telling the whole truth. He will pay you ONE MILLION dollars for you to prove he is full of shit! You can do what ever you want with the money! Even buy more stones and pebbles! Or maybe upgrade your golden ear system that somehow improves with magic.
and this means what? That I know Ben, that he brought over a bunch of Stones and Onlines, that he treated my system based on where he felt they would be best served, that I used information from his site to explain what they were doing to the components, and that I liked them enough to arrange a purchase? Funny, if you ask all the Shakti users they will probably say prety much the same as I wrote. What's your point?
As to advertising, what is the connection? I liked something from someone that at one time advertised with PFO/audiomusings. Gee, I like stuff from people who do not advertise with us too - and probably (for whatever reason) never will, but that does not change what I heard. You are assuming WAy too much here.I have no idea what any of this means.. "Randi says you are not telling the whole truth. He will pay you ONE MILLION dollars for you to prove he is full of shit! You can do what ever you want with the money! Even buy more stones and pebbles! Or maybe upgrade your golden ear system that somehow improves with magic."
What truth? That I put something in my system and it made a difference for the better? Why are you getting so angry here? I have no grudges with you or anyone else. If you don't like what I write, don't read it.
Dave Clark
I'm not angry! I'm LMAO! You promote your advertisers' products, and this guy Randi says that since the qualities you heard are obvious enough for you to promote them in your magazine, he is going to be a nice guy and pay you a MILLION dollars if you can duplicate your writing for him.
You back away --- THIS IS FUNNY - Take the money and run!
PS - I see it is your policy to NOT REVIEW PRODUCTS THAT PRODUCE MAGIC!
You send them back -- NO REVIEW! I find this funny too!
the results are subjective, and as such, fall into the category of "non-scientific." do you get it, magnetar?? randi is safe, because he's asking for scientific proof of something that, at least for now, we don't know how to explain (maybe you remember how an aurora borealis was considered the "fall-out" from a battle amongst gods: took a while for science to get that one right...). my shaktis make a big difference in my system, and i don't know why. this doesn't mean i'm lying: it just means i can't satisfy the requirements that randi insists on to get the money.

have you ever tried shakti stones, or are you just a sceptic? the only way to really know is if you try them yourself, and even then, given the importance of system synergy, if they don't work for you that doesn't mean they won't for someone else. dr. clark has a point of view about audio that i understand, and therefore i know how to take his reviews. if all reviewers worked only with objective - factual - information, it would imply that we all heard the same things... do you really want to claim that?
There's NO requirement or even interest by JREF in a SCIENTIFIC EXPLANATION.
All that the challenge is about is your being able to demonstrate the ability you claim, in this case the ability to hear rocks. That's it. End of story.
Subjectivity IS NOT THE ISSUE OR THE PROBLEM! Only YOU need to hear it, not anybody at JREF!
Just show them you hear what you claim to hear and you get $1,000,000
I get it fine, young. I have no reason to spend the money on these promotions of faith. My system doesn't need magic to sound good.

If Positive Feedback's editor believes the advertiser's product works (as he did in the ad he wrote for them), he should have already accepted the offer and banked the cash.
I recommend YOU take Randi up on his offer on these stones ASAP, and beat the ad writer to the cash. If you feel, as you say, that "my shaktis make a big difference in my system," then do it!
Just saying "i don't know why" is no excuse - he is just looking for facts to support that it happens, not why.
you haven't been reading any of the responses to the flaws built into the offer, have you? let me choose the criteria for acknowledging the effect, and i'll take that million today.
and partially on how they want it set up. The language of the test makes it clear that it is based on Mutually Agreed Upon Conditions, and upon Mutually Agreed Upon conditions (in advance) of what constitutes success. How hard is that? It is not the average audiophile Randi is taking aim at, but the folk who Professionally Claim to be able to hear these differences at will. If a man is basing his profession on a supposed ability, then he must be able to demonstrate that he actually HAS that ability. Otherwise, he is a quack.
Give Me Ambiguity or Give Me Something Else!
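For what it's worth, "mutually agreed upon conditions" do not have to be mysterious. Below is a minimal sketch, in Python, of what a single-blind trial series could look like; the coin-flip setup, the 16-trial count, and the simulated guessing listener are all illustrative assumptions of mine, not anything taken from Randi's or JREF's paperwork.

```python
import random

# A minimal sketch (an assumption, not JREF's actual protocol) of a mutually
# agreed, single-blind trial series: an assistant flips a coin each trial to
# decide whether the tweak is in the system, the listener calls it, and the
# calls are scored afterwards. The listener here is simulated as a pure
# guesser purely to show the bookkeeping; the trial count is illustrative.

N_TRIALS = 16

def simulated_listener_call(tweak_present):
    # A real test would play music and record the listener's verdict;
    # this stand-in guesses at random (i.e., no genuine ability).
    return random.choice([True, False])

def run_blind_test(n_trials=N_TRIALS):
    correct = 0
    for _ in range(n_trials):
        tweak_present = random.random() < 0.5          # hidden coin flip
        call = simulated_listener_call(tweak_present)  # True = "it's in there"
        if call == tweak_present:
            correct += 1
    return correct

if __name__ == "__main__":
    hits = run_blind_test()
    print(f"{hits} correct calls out of {N_TRIALS}")
```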
I've read them and they are just cop-outs. Will you send me your stones so I can single-blind test them to see if I should go after the million, since you folks are too goofy to prove Randi wrong?
It's an essential part of the challenge that YOU DO GET TO CHOOSE THE CRITERIA! All you have to do is sign an explicit agreement in advance of the test as to what the criteria will be.
IF you're successful (if you meet the criteria you signed up to) YOU GET THE $1,000,000
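To make "the criteria you signed up to" concrete, here is a hedged worked example of the arithmetic both sides could do before signing anything; the 14-of-16 threshold is invented for illustration and is not a JREF figure.

```python
from math import comb

# Illustrative only: the 14-of-16 threshold is a made-up example of a success
# criterion both sides might sign off on in advance, not Randi's or JREF's
# actual terms. The point is that the odds of a pure guesser clearing any
# agreed bar can be computed before anyone sits down to listen.

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct calls in n trials by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

if __name__ == "__main__":
    n, k = 16, 14
    print(f"P(at least {k} of {n} correct by chance) = {p_at_least(k, n):.5f}")
```

With those example numbers, luck alone clears the bar only about 0.2% of the time, which is the kind of figure a claimant and the challenge administrators could agree in advance counts as success.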
actually, you just contradicted yourself. in an earlier post you stated that the criteria were consensual. therefore, i don't get to choose the criteria. perhaps the reason the million will sit there for eternity is that the consensual criteria will never be agreed upon..!
i'm off to send my "rock" to magnetar so we can figure out how to split the money...have a good sunday. time for some music.
Enjoy the rocks, whether within your cranium or without; I'm sure they'll enhance your listening experience.
that's the best you can do? what a waste of time. i was hoping that with your passion on the topic at hand you might be able to convince me of something. you are in a tautology, and i can't, at this point, help you out.
but I do not see that under Randi's rules I could satisfy his conditions of proof. Perhaps someone else could.
As to non-reviews, life is too short to spend a lot of time with something that does not work. If a product is bad, the community is small enough to sort it out quickly. This does not preclude us from saying that we did not like A versus B because of system synergy, biases, etc. That we do.
If you read our editorial policy, it is all there for the reader to either agree with or not. You know where PFO is coming from - whether it meets your needs or not.
Dave Clark
The community has NEVER filtered out ANY tweak or magic sound enhancer. My memory is admittedly less than perfect, so could someone please refresh my memory by mentioning a few tweaks that were rejected by the high-end community?
Give Me Ambiguity or Give Me Something Else!
dollars working for Positive Feedback or any other job you have? How long will it take you to prove you are right?
Are you saying you are wrong, and there is no way to verify the stones actually make the differences you wrote about in your ad? Or are you saying it is not worth your time to pocket the cash?
Magnetar
neither of your options, magnetar. he, like many here in this thread, realizes the absurdity of, first, trying to agree on a methodology for testing, and second, the impossibility of actually quantifying a subjective set of criteria. thankfully the great randi has debunked many fraudulent magicians, but he plays a game himself with a challenge in which the rules themselves are impossible to agree on, and where the "facts" are only opinions.

i myself have decided to offer $1,000,000 to the first person who can prove that the mona lisa by leonardo is the greatest painting of all time. if no one takes me up, then i suggest anyone who claims the painting is the greatest ever is a fraud.
Well - it's more black and white than that. The $1,000,000 question is more like this - is the Mona Lisa a painting or is it not? It has nothing to do with whether it's the greatest painting!
Please specifically show me where Randi's rules are impossible to agree upon.
Also, to be clear - Randi is the one out to prove that 'opinions' are not facts, not the other way around. IOW, you bring him what you believe is a fact (it's not proven yet - you make a claim, i.e., that the Mona Lisa is a painting, so it's your opinion) and it's up to you to prove it.
but yesterday I ripped off (completely) the nail on my big toe, my right hernia (once repaired) hurts, and after a glass or two of Pinot Noir, well.... now for football and/or the Summer Olympics.
Dave Clark
Hey, did you watch the American Women's Softball team beat the crap out of the Aussies today? It was great! Mercy! Now let's see you do that to Randi and take that cool million. ; )
Magnetar
I think you may well be right.
____________________________________________________________
"Nature loves to hide."
---Heraclitus of Ephesus (trans. Wheelwright)