Audio Asylum Thread Printer: get a view of an entire thread on one page
In Reply to: An attempt to explain posted by John C. - Aussie on November 15, 2002 at 00:02:22:
I think this whole business of perception, preference and scientific validation gets far more confused than it needs to be.

I'll repeat a hypothetical I just used in another post. I have two sets of cables, A and B. I switch them in and out of my system, listening each time I make the switch knowing which cable I'm listening to, and then claim that they sound different. At that point I'm describing my "perception". Whether or not my perception of difference is due to actual audible sonic differences between A and B can only be "scientifically validated" or verified through controlled testing, and even then, only to a certain statistical confidence level.
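To make "statistical confidence level" concrete, here is a minimal sketch (my own illustration, not anything from the original post) of how a blind forced-choice listening test is typically scored: count how often the listener identifies the cable correctly, then ask how likely that score would be if they were purely guessing.

```python
from math import comb

def p_value(correct, trials):
    """Probability of getting at least `correct` right answers in `trials`
    forced-choice (50/50) trials if the listener is purely guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 14 correct out of 16 trials is very unlikely under pure guessing:
print(round(p_value(14, 16), 4))  # prints 0.0021
```

A p-value like 0.0021 gives high confidence that the listener wasn't guessing, but it is still only a confidence level, never absolute proof; the 14-of-16 figure here is an arbitrary example, not a standard.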
If I say I like cable A better than cable B because A sounds better to me, that is my "preference", and it doesn't matter if the difference I'm hearing between the two cables is really due to actual audible sonic differences between the two cables.
It seems to me that the distinction between perception, preference and validation is just as simple as this. Moreover, I think it is important we not blur this distinction because we already have more issues, unresolved debates and obstacles in the "validation" arena than we can handle, and don't need to be unnecessarily complicating the matter further.
Follow Ups:
> If I say I like cable A better than cable B because A sounds better to me, that is my "preference", and it doesn't matter if the difference I'm hearing between the two cables is really due to actual audible sonic differences between the two cables. <

This statement is unclear to me. Are you suggesting that the actual audible sonic differences between the two cables leading you to your preference could be the result of something external, such as the color of their encasements?
> Are you suggesting that the actual audible sonic differences between the two cables leading you to your preference could be the result of something external, such as the color of their encasements? <

The short answer is yes. If I choose my cables based on sighted auditioning (which I in fact do), there is no way for me to know for sure that I'm basing my decision on actual sonic differences, as opposed to color preference, advertising, reviews, recommendations or anything else. The only way I could hope to know for sure that it is actual audible sonic differences I'm basing my choice on would be to make the choice under controlled testing, where the only possible factor behind the decision was the audible difference.
> > I'll repeat a hypothetical I just used in another post. I have two sets of cables, A and B. I switch them in and out of my system, listening each time I make the switch knowing which cable I'm listening to, and then claim that they sound different. At that point I'm describing my "perception". Whether or not my perception of difference is due to actual audible sonic differences between A and B can only be "scientifically validated" or verified through control testing, and even at that, only to a certain statistical confidence level. < <

Yes! It can be verified through controlled testing. However, failure to verify a sonic difference during controlled testing is a *substantially* less meaningful result. Anyone professing strict adherence to scientific method must admit this.
> However, failure to verify a sonic difference during control testing is a *substantially* less meaningful result. Anyone professing strict adherence to scientific method must admit this. <

Failure to verify in one particular test has little bearing on the broad question of whether cables of similar gauge and length can sound different, because it is a test of particular cables in a particular system. However, if the protocol of the test is valid, failure to verify, it would seem to me, is just as valid for those particular cables in that particular system as verification would be.
I think we can agree that we are talking about testing the hypothesis that cables do affect sound. So a test of this hypothesis can be valid, regardless of the outcome of the test. Agreed so far?

Repeated positive results showing that cables can sound different are required to validate the hypothesis. Repeated negative results, however, do not invalidate it.
Therefore, the two possible outcomes of hypothesis testing, the positive or the negative, carry a different weight when applied to supporting the hypothesis. And of course, any result which is applied to the hypothesis must be from a valid test.
> Clear as mud, huh? <

No, you are quite clear and I fully agree. Moreover, I have never seen what I would consider a valid challenge to this proposition. Language and misinterpretation often seem to get in the way when discussing all of this, but from a scientific viewpoint, I wish we could at least get basic agreement on fundamental principles such as this.
Proof in science is not quite as simple as people seem to think. There are rules and standards which have to be satisfied.

One rule is that no amount of observation can demonstrate that something definitely does not exist. So, the fact that there is no evidence that anyone has seen a unicorn in the last few centuries does not prove that unicorns don't exist or that they are now extinct. It does, however, make it extremely likely that they don't exist, and even a normally rash person would be unlikely to bet on finding a unicorn.
On the other hand, it only requires the production of 1 unicorn (to allow for verifiable observation) to show that unicorns do exist. Of course, if they couldn't find any other unicorns the researchers would probably kill and mount the one they had so that they could really prove the point. Pity we can't do that with audible differences :-)
So proof really requires only one substantiated observation, though we would normally require other researchers to duplicate the observation elsewhere if we were talking about hearing a difference in cables rather than sighting a unicorn. But no amount of testing which fails to demonstrate a difference can conclusively prove that differences don't exist.
In practice, however, if you can't demonstrate the existence of a difference, researchers examine the test conditions for things that may have masked it, construct other test designs that might work better and give those a go, and if it still can't be shown after a reasonable number of genuine attempts, they stop testing. At that stage they BELIEVE, rather than KNOW, that there is no difference, but improvements in knowledge elsewhere and/or better test instruments may cause them to revisit things at a later stage.
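The situation described above, a real difference that a given test may simply fail to reveal, can be shown with a small simulation (entirely my own sketch; the 80% detection rate and the 12-of-16 pass criterion are arbitrary assumptions, not figures from the thread):

```python
import random

def passes_test(p_correct, trials=16, threshold=12, rng=random):
    """One simulated forced-choice listening test: True if the listener
    gets at least `threshold` of `trials` comparisons right."""
    correct = sum(rng.random() < p_correct for _ in range(trials))
    return correct >= threshold

rng = random.Random(42)
# A listener who genuinely hears a difference 80% of the time still
# fails the test some of the time; a pure guesser occasionally passes.
hearer_passes = sum(passes_test(0.8, rng=rng) for _ in range(1000))
guesser_passes = sum(passes_test(0.5, rng=rng) for _ in range(1000))
print(hearer_passes, guesser_passes)
```

The point is that a single failed test cannot, on its own, distinguish "there is no audible difference" from "the test simply wasn't sensitive enough this time", which is exactly why researchers re-examine conditions and retry before giving up.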
Non-confirmatory results do have a different standing from confirmatory results.
I'm not sure if there is a mix up in communication here, but I can't find anything in your post that I would disagree with.
You said, as the final sentence of your post:

> However, if the protocol of the test is valid, failure to verify, it would seem to me, is just as valid for those particular cables in that particular system, as would be verification. <

My point is that there is a sense in which failure to verify is not as valid as verification. Putting it a bit simplistically, verification represents proof, but failure to verify does not represent disproof; it merely means that the hypothesis remains unproven. Verification removes doubt; failure to verify still leaves doubt as to whether the hypothesis is false or the test failed for some reason. When failure to verify occurs, the nature of the results and the number of times verification has failed can both serve to reduce the level of doubt, perhaps even to extremely small levels, but there always remains the in-principle possibility that the hypothesis was correct and the test or tests failed for some reason.
David:

Just to try and be clear (and I still don't think we disagree), my sentence was meant to apply solely to the results of that particular test, and solely with respect to the specific items being tested, not to the implications of such a test for proof or disproof of a particular hypothesis. Also, I was assuming a perfect test and protocol, so that the null result was totally reliable (a situation I agree can never be achieved in practice).
I fully admit my sentence was not well crafted and was easily subject to misinterpretation.
All results apply to the particular test, and each test always has an hypothesis, but the results of a test verifying the hypothesis are generalisable, while the results of a test which doesn't verify the hypothesis aren't.

What you actually test is the counter hypothesis, which is the opposite of the hypothesis. So, if you're testing for audible differences between two cables and your hypothesis is that audible differences do exist, the counter hypothesis is "there are no audible differences between the cables". The only way the counter hypothesis can be falsified is by actually showing there is an audible difference, so falsifying the counter hypothesis is conclusive and verifies the hypothesis that there is a difference. If you end up with a result which fails to falsify the counter hypothesis, you're left wondering whether the failure was due to there being no audible difference, or due to the test not being capable of demonstrating the existence of a difference that was there.
Remember the requirement of repeatability and the assumption that nature acts consistently. If the counter hypothesis is falsified, and there is only one way it can be falsified, then you can expect all future tests to return the same result. That's why falsifying the counter hypothesis confirms the hypothesis and demonstrates the existence of an audible difference. It's also why you can generalise from that outcome: it can only occur if the hypothesis is true.
On the other hand, if you can't falsify the counter hypothesis, you're always left with two possible reasons for the failure, and you can't eliminate either one. It's something like trying to show that a coin has different faces on its two sides by tossing it: the fact that it always comes up "heads" in a series of tosses doesn't demonstrate that there is no "tails" side, no matter how long the run of heads goes on. You may have a two-headed penny, or you may just have "lucked into" a very long run of heads, and you can never resolve which it is by continuing to toss the coin. It must always remain possible that it could come up "tails" on the next toss, just as it's always possible in principle that the test could verify the hypothesis if you ran it another time. It's the inability to eliminate this possibility of a positive result that prevents you from generalising from the outcome.
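The coin-toss arithmetic is easy to make explicit (a trivial illustration of my own, not from the thread). A fair coin produces N heads in a row with probability 0.5^N, a number that shrinks quickly but never reaches zero, which is exactly why a run of heads can never prove the tails side doesn't exist:

```python
# Probability that a fair (two-sided) coin comes up heads N times in a row.
for n in (10, 20, 50):
    print(n, 0.5 ** n)

# However long the run, the probability never becomes exactly zero,
# so observation alone cannot prove the coin has no tails side.
assert all(0.5 ** n > 0 for n in range(1, 1000))
```

The same asymmetry applies to the listening test: any finite string of null results remains consistent, however improbably, with a real difference that simply hasn't shown up yet.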
There are ways of disproving some things, but it isn't by observation. You have to demonstrate that it's impossible in principle for the thing to occur because it's logically incompatible with the accepted theory. That still leaves open the possibility that the theory/accepted law is wrong but shifts the onus of proof so that anyone trying to prove the thing has to disprove the theory or, at the very least, prove that an exception to the theory exists.