Audio Asylum Thread Printer
In Reply to: RE: Tubeguy! You might have a contestant!!!! posted by theaudiohobby on April 25, 2008 at 13:53:34
A long-term DBT would add a measure of complexity that would make it inferior. An ABX comparator, as mentioned above, would add something to the signal that is not there now, for example. My method of SBTs introduces no unknowns, so it would therefore be superior.
Follow Ups:
** My method of SBTs introduces no unknowns ** It does; see the Clever Hans effect.
** A long-term DBT would add a measure of complexity that would make it inferior ** Why not follow your logic to the nth degree, then: a long-term SBT would add a measure of complexity over and above a long-term sighted test, so a long-term sighted test should be superior, right? As I said earlier, your position is untenable because it is illogical. Reliability of detection is the goal of the exercise, not lack of complexity. In conclusion, short-term DBTs are measurably superior at detecting differences because they mitigate psychoacoustic masking effects.
NB: Removed signing off.
Music making the painting, recording it the photograph
Let me help you out.
> Reliability of detection is the goal of the exercise not lack of complexity <
Is a long term DBT more reliable than a long term SBT? No. Is a long term SBT more reliable than a long term sighted test? Yes. So when there is no further reliability to be gained, the least complex is the superior method. Keep It Simple, Stupid.
*** > Is a long term DBT more reliable than a long term SBT? No *** Well, that's your opinion, obviously, but it's wrong: it's a scientific fact that DBTs are superior (more reliable) to SBTs, long term or short term. Now, you may prefer SBTs, but that's another matter entirely.
*** Is a long term SBT more reliable than a long term sighted test? Yes ***
yes
** Yes. So when there is no further reliability to be gained, the least complex is the superior method **
Well, your logic is flawed: DBTs are more reliable, and here reliability increases in tandem with complexity. That's just the way it is.
Music making the painting, recording it the photograph
> it's a scientific fact that dbts are superior (more reliable) to sbt <
If that's the case, please point me to the proper citations that show scientifically that DBTs are useful in audio, and where they have been calibrated to show their sensitivity to what is being tested. You'll also need to show that the insertion of an ABX box doesn't obscure the details the test is trying to reveal. Show where a "forced choice" ABX methodology is superior rather than designed to obfuscate.
Because you say so is not good enough. Showing they are fine for pharmaceuticals is not good enough.
We've covered no new ground here. I expected as much. I also expect that you won't be able to provide what I've asked, so I'll be jumping off this circular argument train. But if you can provide some proof, please do so.
Experience from many years of double-blind listening tests of audio equipment is summarized. The results are generally consistent with threshold estimates from psychoacoustic literature, that is, listeners often fail to prove they can hear a difference after non-controlled listening suggested that there was one. However, the fantasy of audible differences continues despite the fact of audibility thresholds.
"Ten Years of A/B/X Testing," AES Convention 91 (October 1991), Clark, David L.

*** If that's the case, please point me to the proper citations that show scientifically that DBTs are useful in audio and where they have been calibrated to show their sensitivity to what is being tested. ***
I just posted an abstract for you to digest. Furthermore, did you read the example posted in andyC's post? Did you read the control experiment and the conclusion, which clearly states "The A/B/X test was proven to be more sensitive than long-term listening for this task."? Do you have anything to counter this aside from your assertion? You are the one going round in circles here, not I. And understandably so, because your position is untenable due to faulty logic.
Music making the painting, recording it the photograph
There are serious methodological reasons for considering single blind tests inferior to double blind tests, and the Clever Hans problem is a perfect example. Single blind tests are particularly unconvincing to others who are not personally familiar with all of the players. That is not to say they are worthless, but they are not likely to be convincing to third parties, as is needed for an art or science to progress. But faulty logic is possible with double blind tests as well. As commonly conducted by audio hobbyists, double blind tests work reliably only when they conclude that something was heard. As generally conducted by these hobbyists they do not have sufficient statistical power to conclude that nothing was heard.
Experimental Science needs the support of Mathematics, particularly mathematical theories of causality that justify the use of statistics. When conducting a sequence of experiments one needs to start with a model of what is possible and what is likely. One then conducts the experiments, applies statistical methods and refines the model.
A simple causal model suffices in the case of the successful amateur ABX listening test. One makes the highly plausible assumption that the source of randomness is unknown and uninfluenced by the test subject, affecting the subject only through the physical mechanisms of hearing. Then when the statistics are analyzed one concludes that the subject heard the stimulus. (If the test setup were poorly designed, for example if it displayed the random number on a screen, then this conclusion would be invalid. Note also that even if the test were done perfectly it would fail to convince a person who didn't buy into the underlying model. For example, an audio skeptic who believed in ESP (!) might conclude that a successful test subject used his psychic powers and not his ears.)
Now consider a slightly more complex model. Suppose that a sound is near the threshold and a subject has the ability to detect its presence or absence 5% better than chance. In other words, if the sound is present the subject will say he heard it 55% of the time and the subject will say he didn't hear it 45% of the time, while if the sound is not present, the subject will say he heard it 45% of the time and say he didn't hear it 55% of the time. Can the subject hear the sound? I would say yes, albeit just barely.
What will happen if a traditional 16 sample ABX trial is run? It will almost certainly fail to show that the subject heard the sound. However, it would be an abuse of statistics to conclude that the subject did not hear the sound.
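The arithmetic here is easy to check. Below is a minimal sketch in Python, assuming the customary one-sided exact binomial criterion at the 5% level (which makes 12 of 16 the passing score) and the hypothetical listener above who answers correctly 55% of the time:

```python
from math import comb

def tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more correct answers."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def power(n, p_true, alpha=0.05):
    """Power of an exact one-sided binomial test against guessing (p = 0.5)."""
    tail_null = tail_true = 0.0
    # Walk down from a perfect score, accumulating tail probabilities, until
    # adding one more score would push the false-positive rate above alpha.
    for k in range(n, -1, -1):
        pmf_null = comb(n, k) * 0.5**n
        if tail_null + pmf_null > alpha:
            return tail_true  # power at the critical score k + 1
        tail_null += pmf_null
        tail_true += comb(n, k) * p_true**k * (1 - p_true)**(n - k)
    return tail_true

# The customary 16-trial run: 12/16 correct is the passing score, since
# P(>= 12 | guessing) ~ 0.038 < 0.05 while P(>= 11 | guessing) ~ 0.105.
print(f"power of 16 trials: {power(16, 0.55):.3f}")  # ~ 0.085

# How many trials before the 55% listener passes at least 80% of the time?
n = 16
while power(n, 0.55) < 0.80:
    n += 16  # grow in session-sized increments
print(f"trials needed for 80% power: {n}")
```

With roughly 8.5% power, the 16-trial session misses this listener about 91% of the time, and several hundred trials would be needed before a null result carried much weight.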
When experiments are run which have marginal results, it is not uncommon for people with different underlying causal models to reach opposing conclusions. In many cases, Science progresses only after scientists with outdated models die and are replaced by a newer generation.
Tony Lauck
"Perception, inference and authority are the valid sources of knowledge" - P.R. Sarkar
Well, a poor test is a poor test, so it does not add much to the discussion to say that a poor DBT will produce unreliable results. That's equivalent to saying that a well-driven Yugo will outsprint a poorly driven Ferrari 550 Maranello; well, yes, it's obvious. At any rate, the DBTs cited in this thread were professionally conducted, and the conclusions are pretty much consistent with prevailing psychoacoustic theory.
Music making the painting, recording it the photograph
"Well, a poor test is a poor test, so it does not add much to the discussion to say that a poor DBT will produce unreliable results."
We are agreed that poor tests are poor tests. Unfortunately, poor tests are often cited in this forum as proving things they don't, and more unfortunately sometimes poor tests are published in refereed journals. There are a few practical problems that have kept me from finding the "good" tests to see if perhaps they can be extended (or possibly refuted):
(1) Greedy journals make it expensive to read literature for those of us who live in rural America and so do not have easy access to technical libraries.
(2) Journals generally fail to fully describe models and experimental procedures and rarely disclose the underlying raw data.
(3) I lack a concise bibliography of the "good" tests and in light of the other difficulties I face find it excessively burdensome to perform an ab initio literature search.
I have a longstanding interest in audio epistemology and would pursue this in more detail were I to be given a good starting point. However, in this day of desktop publishing and effectively free communication, I consider most journal publishers in the same category as the RIAA, namely parasites. I am reluctant to pay good money out of my pocket to read an article unless it is likely to be relevant. I have less reluctance to spend money on textbooks or monographs.
Any suggestions would be helpful.
Tony Lauck
"Perception, inference and authority are the valid sources of knowledge" - P.R. Sarkar
"I have a longstanding interest in audio epistemology and would pursue this in more detail were I to be given a good starting point. However, in this day of desktop publishing and effectively free communication, I consider most journal publishers in the same category as the RIAA, namely parasites. I am reluctant to pay good money out of my pocket to read an article unless it is likely to be relevant. I have less reluctance to spend money on textbooks or monographs.
Any suggestions would be helpful."
Hi Tony,
If you're interested in AES articles, feel free to shoot me an email, and I can provide, err, "more information" ;-).
nt
I guess those citations gave you cause for pause. Your opinion (and that of many audiophiles) on blind testing is illusory. Have fun.
Music making the painting, recording it the photograph