Audio Asylum Thread Printer
In Reply to: RE: That clears up a lot! posted by EBradMeyer on September 21, 2007 at 20:37:38
Good Lord, can you really be serious?
"Of course this is a scientific paper; it's been published in a refereed journal, where people of demonstrated competence pick it apart and demand whatever they think it needs before they allow it to go to publication. The methodology is sound and the work has stood up to the challenges the reviewers posed, which were few, because we knew what we were doing. They wanted a bit more data reduction, which we supplied, and they otherwise pretty much appreciated our thoroughness."
What, are we to suppose that you gave them an extensive verbal description of your methodology, procedures, detailed results... the whole shebang? And what then, should we suppose they said? "Great, look, write up a short summary and we'll publish it in our scientific journal... Oh! Don't forget the conclusions, but if you want to leave off all the boring detail, no problemo! We're already fully satisfied on those points, and there's no need to bore the poor readers to death."
Did it go down something like that? Surely it must have, because a member here reported after reading the full report that "... the test description and analysis is surprisingly thin. No equipment readout, no results breakdown by listener or location or whatnot, no detailed description of listening venues, no musical selections. No null hypothesis, no description of type I/II error."
Which brings up an interesting question: how exactly would a reader come "... up with a cogent objection or spots a flaw in our procedures or conclusions"?
I mean really! Is this a reflection on what passes for "science" for the folks of "demonstrated competence" at the "scientific" journal in question?
We hope our perceptions ain't slippin
But we swear to God we seen Lou Reed
Cow tippin
Follow Ups:
"a memeber [sic] here reported after reading the full report that "... the test description and analysis is surprisingly thin. No equipment readout, no results breakdown by listener or location or whatnot, no detailed description of listening venues, no musical selections. No null hypothesis, no description of type I/II error."
Okay, again you read someone else's opinion, accept it as gospel, and insist you know what you're talking about on that basis. You seem determined to critique the paper without reading or really thinking about it, so I'm about done with you.
Each of those criticisms, as it happens, is either untrue or irrelevant. The exact equipment doesn't matter because it was well chosen and was good enough for the test (we're preparing a more specific list, to be made available to those who have read the paper and wish to know). There were several venues, all chosen for the excellence of the room; one important criterion is that the background noise must be extremely low for enough detail to be heard, and we measured and reported this.
There is little point in burdening the reader with individual listener data -- though we checked the high-frequency hearing limit of most subjects, since that seemed possibly relevant -- when not one person could hear differences with music at normal levels. Most subjects listened from the sweet spot in our main system, of which there is a photograph in the paper, but again no one, sitting anywhere, passed the test.
A list of musical selections will likewise be made available; we submitted one with the paper but it was not published. The null hypothesis -- that there was no detectable difference between the high-bit audio and the same signal passed through our codec -- was obvious from the paper. Type II error (Have you read Leventhal's paper? Somehow, I doubt it.) is relevant only when there are results that show positive correlation, but not enough of one to meet the 95% confidence limit. We had no such data. No one even came close.
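Meyer's point about Type II error can be made concrete with a little arithmetic. The sketch below, a minimal Python illustration, computes the one-sided binomial p-value for a forced-choice listening test under the null hypothesis that listeners are merely guessing; the trial counts used are illustrative assumptions, not figures from the paper. Type II error (failing to detect a real difference) only becomes an interesting question when scores approach, but fall short of, the 95% confidence threshold.

```python
# Significance of a forced-choice (e.g. ABX-style) listening test under
# the null hypothesis that the listener is guessing (p = 0.5 per trial).
# NOTE: the trial counts below are hypothetical, chosen for illustration.
from math import comb

def binomial_p_value(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided probability of scoring at least `successes` correct
    out of `trials` independent trials by chance alone."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# With 16 trials, 12 correct beats the 95% confidence limit, 11 does not:
print(binomial_p_value(12, 16))  # ~0.038, significant at p < 0.05
print(binomial_p_value(11, 16))  # ~0.105, not significant
```

A listener who repeatedly scored 11/16 would raise exactly the Type II concern Leventhal describes (a real but weak effect masked by too few trials); scores at or near chance, as reported here, do not.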
"I mean really! Is this a reflection on what passes for "science" for the folks of "demonstrated competence" at the "scientific" journal in question?"
The idea that the people who actually do the work in the audio industry are all fools, who neither know what they're doing nor understand anything about scientific procedure, is bandied about rather easily around here, I notice. Saying such things just makes you look bad, a job I leave in your competent hands. -- E. Brad Meyer
"The exact equipment doesn't matter because it was well chosen and was good enough for the test "
You are right, BUT who decides that the equipment was well chosen and good enough for the test?? You, or the reviewers of your paper? The reviewers should be the ones allowed to determine this. Your job is full disclosure in the paper, not a predecided "it was good enough". When I publish papers in scientific journals I disclose ALL the instrumentation we use for our measurements. We also show examples of those measurements so that people can see that the instrument is operating without bias and that the data delivered meets important criteria.
An example: I regularly make measurements in the lab on trace levels of organic compounds. I have three different mass spectrometers in the laboratory that I could use to make those measurements. One of them though has a sensitivity about 100 times greater than the other two.
If I were, for whatever reason, to use one of the lower sensitivity instruments I could very easily state that most of the compounds that I KNOW to be there are not there because I cannot detect them on that instrument.
If I were to publish a paper saying that we cannot detect those compounds because they are not there, but stated in the methods section that I am using merely a Brand X mass spectrometer with relatively poor detection limits, then my peers would likely ask, "Why the hell didn't you use a more sensitive instrument, because we KNOW that the compounds are present at lower levels than you could measure?" If I didn't disclose the mass spectrometer in the paper for fear of getting such a question, then the first question would be, "What type of mass spectrometer did you use, and is it good enough?"
That your peers were apparently unconcerned about your failure to disclose the equipment you used for your test says a lot about how lax they were in their duties as reviewers, OR there is an unreasonable bias in the Audio Engineering Society that the audio equipment used is not important for listening tests!! What rubbish, because in other scientific fields the equipment used to conduct a test is VERY important. If I were to use a mass spectrometer from the 1960s, there might be a lot of eyebrows raised when that paper was submitted for publication. It seems that your peers were only concerned that your PROCEDURE was correct, and not with the meat of how it was actually conducted and with what equipment, which, despite what you think, is highly relevant. I have a Ph.D. in analytical chemistry with many years of training and experience in making analytical measurements and interpreting data, and I can tell you that in chemistry or audio it's just as relevant.
The whole point of audiophile gear is that the exact equipment DOES matter and that it will introduce significant bias if not chosen carefully. I applaud your use of electrostatic speakers, but what amp was used?? This is important because, as you are surely aware, electrostatic speakers present a highly unusual load, in that they are largely reactive in nature, often with wild impedance curves, and can drive many amps into fits of oscillation or just plain bad performance. Why don't you publish the equipment and let US decide if it was a good choice or not?? It is not like using lab equipment, where as long as the voltage is right it's a good power supply (we would still include it in the methods section of a paper, though).
I've already discussed a lot of this with David Moran on HydrogenAudio (see link).
People who are using my complaint to justify attacking the paper as a whole, who haven't read the paper themselves, definitely don't know how I feel about it. I don't have a problem with the conclusion at all. And for that to be true, ultimately, I can't have a problem with the test setup either. I'll probably never be in as good a position to evaluate the value of high-res music as you were with your test setups. Even if I were to take your setup with a grain of salt, it's still a test well beyond my means, both financially and in terms of my listening experience. So I trust it.
Still, I understand that much of the data was omitted either because the editors thought it was not pertinent (i.e., the music selections), or because it was perceived that no rational debate on the data is possible. The former can't be helped, but the latter seemed like a somewhat cavalier attitude to take, and one that confused me. Even though, as some others have pointed out, it's most likely true.
Like I said to David: "Trust us" is not a very reasonable argument to use in a technical paper. Clearly the editors disagreed, though, and I don't think one must resort to conspiracy theories to explain that. I'm still a little confused as to how much detail is acceptable for peer review, versus how much should ideally exist in a paper for the purposes of test reproduction and establishing trust in the test procedure.
Nevertheless: Thank you very much for doing this test!
- http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=57406&view=findpost&p=517818 (Open in New Window)
Initially it was expressed in:
http://www.audioasylum.com/forums/prophead/messages/3/37245.html
See, it's not about "omission of data"!
Then that was expanded upon in:
Howdy
Ironically, one of the papers in their references doesn't seem to have these problems; it gives much more detail about the equipment used, music selections, details of the test protocol, some demographics of the test subjects, a serious digression into the four outliers who could hear a clear difference, etc.:
D. Blech and M. Yang, “DVD-Audio versus SACD: Perceptual Discrimination of Digital Coding Formats,” presented at the 116th Convention of the Audio Engineering Society, Berlin, Germany, 2004 May 8–11, convention paper 6086
I certainly don't know what the differing constraints are for being published by the AES in these two different fora, but after reading the paper which is the subject of this thread, reading the paper cited above was a breath of fresh air.
-Ted
One is once again reminded of Todd Krieger's inimitable phrase, "token peers".
Why, I've even gotten E. Brad to treat me high-handedly, in that connection:
"This statement was about whatever should or should not
be written at this point in a particular refereed journal. Unless you're
planning to write for it, which something tells me you aren't, it doesn't
apply to you."
Good to know.
I'll sign off --
clark, AES Life Member
"Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from the scientific method (Pathological science) for various reasons, or, in rare cases, deliberately falsify their results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis." (see link)
It seems clear that documentation in this case was lacking and hence it would not be possible to replicate the experiment based upon the published paper. This is a sad statement for the documentation of an empirical investigation published in a peer reviewed scientific journal.
It would be entirely futile to argue this point, as the demands of the scientific method clearly were not met, and frankly it matters little what I, you, or the editors feel about it. The scientific method makes no allowance for lowering standards, regardless of whether rational debate may or may not be possible, and it certainly makes no allowance for "Trust us" assertions!
I agree. Full disclosure of the equipment and methods used is a mandatory and standard part of all scientific papers. It would be like me publishing a paper on detection limits in mass spectrometry for some compound and then not telling readers what kind of mass spectrometer I was using... instead simply saying that it was a properly designed and functioning unit. I can assure you lots of questions would be forthcoming from the reviewers if I had omitted that critical piece of information. Also, omission of sample handling would raise a few eyebrows!
You're reasoning with a lynch mob. These are not people interested in comparing, testing, improving, and accumulating knowledge. It's a fraternity enamored with rituals and rites, a club rather, a mutual affirmation society, not a site for openly debating issues in electrical engineering, acoustics, and the physiology of perception, as one might otherwise be misled to think. -- TL
"You're reasoning with a lynch mob."
Well, when anyone decides the paper is no good without reading it, I have to agree there's no point in arguing with that person.
We tried to present the data necessary to prove our result -- that if high-bit audio sounds better, the extra bits are not responsible. If the equipment we used was defective in any audible way, the subjects would have heard the difference, so any errors of the type we are alleged to have made would have led to the opposite outcome. That's why, for example, the gain stage that we used to match the levels of the CD link to the original was in series with the CD link. When subjects heard the high-bit audio, it wasn't in the circuit. So if it had an audible flaw, again the experiment wouldn't have come out as it did.
In the meantime, we are planning to make available the details we left out for the purposes of clarity and brevity. That handout/email should be ready this coming week if anyone here is interested. -- E. Brad