Audio Asylum Thread Printer
In Reply to: RE: Boston Audio Society Strikes Again! posted by Charles Hansen on September 11, 2007 at 09:44:35
Mr. Hansen;
I respectfully suggest that you and anyone else who cares to comment on our paper actually read it first.
We searched diligently for source material, playback systems, and/or subjects that would turn up any audible difference between the SACD or DVD-A source and the same signal passed through a CD-quality "bottleneck". We established a reference system gain (with digital full scale at about 100 dB SPL) for which we reported our results. If we increased the gain of the system by 15 dB, used one particular recording with an extremely low background noise level (a list of sources is being prepared and will be made available to anyone who wants it), and looped the player through a bit of room tone, then anyone could hear the noise introduced by the CD codec. Played back at that level, the rest of the recording was quite a bit louder than life, since the small ensemble never reached anything close to 115 dB in the hall. This is all in the paper too.
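For readers following the numbers, here is the back-of-envelope arithmetic behind the high-gain condition. The 100 dB SPL reference and the 15 dB boost are from the post above; the -93 dBFS noise-floor figure is the textbook value for dithered 16-bit audio, my assumption rather than anything measured in the paper.

```python
# Back-of-envelope SPL arithmetic for the elevated-gain test.
# CD_FLOOR_DBFS is the approximate floor of TPDF-dithered 16-bit audio
# (an assumption; the paper may characterize its codec differently).
FULL_SCALE_SPL = 100.0   # reference gain: 0 dBFS plays at ~100 dB SPL
EXTRA_GAIN_DB = 15.0     # the elevated-gain condition
CD_FLOOR_DBFS = -93.0    # approx. dithered 16-bit noise floor

peak_spl = FULL_SCALE_SPL + EXTRA_GAIN_DB   # where full scale now lands
noise_spl = peak_spl + CD_FLOOR_DBFS        # where the CD floor now lands

print(f"full scale plays at {peak_spl:.0f} dB SPL")     # 115 dB SPL
print(f"CD noise floor sits near {noise_spl:.0f} dB SPL")  # 22 dB SPL
```

A floor near 22 dB SPL is audible only in a very quiet room, on material with almost no noise of its own, which matches the conditions described above.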
But that (and the extreme case, when the player was stopped and the gain was up) was the only time we or anyone else could hear the difference. We did an evening of tests on a system with audiophile credentials -- recent Quad ESLs, Conrad Johnson preamp, commensurate player and power amps, $600 cables -- in a purpose built room that was very quiet and did not degrade the excellent imaging of the Quads. We used the favorite discs and cuts of the owner and his audiophile friends and let them sit and listen any way they wanted. Still no correlation. Their recordings were all noisier than the CD link, as are 99.9% of all discs out there.
The above high-gain test was done first with an inexpensive Pioneer player, which was plenty quiet enough to reveal the difference; the test exposed a small but audible low-level nonlinearity in its left channel decoder. We tried a $2000 Sony player, which sounded clean, and wound up doing the great majority of our tests with a Yamaha DVD-1500.
We also wrote that our recordings were as a class the best-sounding commercial efforts we had heard anywhere. As it happens, virtually all could have been released on a CD (just the two-channel versions, obviously) without sounding different. Why that doesn't happen is another very interesting discussion, and is addressed briefly in the paper.
-- E. Brad Meyer
Follow Ups:
Interesting - I'll have to look at the paper. Any time I see something of this nature, with this type of conclusion, I have to wonder why so many engineers (recording, mixing, and/or mastering) part with hard-earned $$ for high-bit-rate digital equipment. Or cables for that matter, but that's another kettle of fish. Guys like Bob Ludwig, for example. Or any number of others, of course. Now, Bob clearly has a bigger budget to play with than most in his profession, so a devil's advocate could say that someone in a similar position might be collecting the latest and greatest gear simply to impress clients. But so many other guys out there are really just getting by, or operate in a high-risk stratum of their profession, and I would be hard pressed to believe that most or all of them would play that kind of game. What's left is that higher-bit-rate equipment, and other types of 'better' gear, must sound better to their ears. At least that's how it would seem to me.
I recall John Atkinson's remarks on a DBT study some years ago, to the effect of "what has really been proven is that these subjects couldn't hear the difference, not that differences did not exist." Or something like that. I'm probably misquoting. I would wonder if maybe that applies here. I will have to take a look at the paper, as you said, and see for myself.
Thanks,
Mike
Quads were a good choice for this kind of test.
You see, many mistook the study for a scientific investigation, and hence there was considerable derision when it was revealed by a member (who had read the full report) that there was scant information as to the test methodology and procedures, equipment employed (including little on the "bottleneck"), selection of test subjects, statistical analysis, and on and on.
That would have been expected, of course, had it been research with pretensions to serious science. Now, however, we find that the study was at least in part informed by the results of a listening session at an audiophile's home with his audiophile buddies in attendance as listeners. That, and some specialized tests where the playback levels were such that one hopes the participants left without suffering hearing damage; and who knows, perhaps even further procedures and tests not described in any great detail?
Much ado about little it now seems, at least for those who were mystified at what appeared to be obvious omissions in the report of a serious research project/investigation.
It seems to me that the confusion could well have been avoided had the report included at least as much detail as you've now provided. Then all would clearly have seen that it wasn't an attempt at serious science.
We hope our perceptions ain't slippin
But we swear to God we seen Lou Reed
Cow tippin
"many mistakenly mistook the study as a scientific investigation"
I'm not sure whether you're deliberately misunderstanding the situation, just to be provocative. If so, I shouldn't dignify your post with a response. From your post, I don't think you have read the paper, in which case I really shouldn't waste my time answering you. At any rate I'm going to assume that at least some people here would like to hear a few more details.
Of course this is a scientific paper; it's been published in a refereed journal, where people of demonstrated competence pick it apart and demand whatever they think it needs before they allow it to go to publication. The methodology is sound and the work has stood up to the challenges the reviewers posed, which were few, because we knew what we were doing. They wanted a bit more data reduction, which we supplied, and they otherwise pretty much appreciated our thoroughness.
Subjective publications become obsessed with specific makes and models of equipment used, but in a serious test you can use what is known to be competently designed equipment and everyone understands that your experiment is good. It hasn't been compromised just because we failed to conform to some tweako fashion of the month. The A/D/A link is not a secret; it was an HHB CDR 850, a very highly regarded pro CD recorder that some really fussy engineers have used as their main A/D for acoustic sessions. What you seem not to get is that if we hadn't used a good deck, our subjects would have spotted the difference between it and the high-bit source, which we gave them every opportunity to do successfully. That they didn't means that the codec is good enough.
The audiophile test session you denigrated was part of our experiment precisely because the system conformed completely to standards espoused in the subjective journals. Professional studios rarely use electrostatic loudspeakers because they won't play loud enough and are not sufficiently reliable for professional use. But it was a near certainty that someone in that part of the industry would claim that with a "real audiophile system" the differences would have been obvious. So we found such a system and gave its owner and his friends a chance. We conducted that test with the same rigor as the others; levels of the two signals were matched within 0.1 dB at 1 kHz, and then the subjects were asked to choose their best material and listen however they usually do, to maximize their aural acuity.
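For the curious, matching "within 0.1 dB at 1 kHz" comes down to comparing RMS levels of a reference tone through each chain. A minimal sketch of that comparison (the tone generator, sample rate, and amplitudes here are illustrative stand-ins, not figures from the paper):

```python
import numpy as np

FS = 48_000  # sample rate for this sketch (arbitrary choice)

def rms_db(x):
    """RMS level in dB relative to digital full scale (1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def tone(freq_hz, amp, seconds=1.0, fs=FS):
    """A plain sine test tone at the given amplitude."""
    t = np.arange(int(seconds * fs)) / fs
    return amp * np.sin(2 * np.pi * freq_hz * t)

# The same 1 kHz tone captured through two playback chains; the second
# chain is deliberately set 0.05 dB hot -- inside the 0.1 dB window.
chain_a = tone(1000, 0.5)
chain_b = tone(1000, 0.5 * 10 ** (0.05 / 20))

mismatch = rms_db(chain_b) - rms_db(chain_a)
print(f"level mismatch: {mismatch:+.3f} dB")  # ~ +0.050 dB
```

In practice the two chains would be measured with a voltmeter or analyzer rather than in software, but the dB arithmetic is the same.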
I'm not sure why you think our doing that somehow compromises the test (if in fact you do), but I would guess that almost everyone else here understands that it strengthens our conclusions significantly.
Are our results unassailable? Nope. If someone comes up with a cogent objection or spots a flaw in our procedures or conclusions, we or someone will investigate further. The nature of this process is that nothing is final. There could be someone out there who can hear these differences on music at normal levels, in which case we'd love to have him or her prove it, and if possible teach us to hear the difference as well. But we tried hard to find such a person, and failed, so far. For now, if someone disagrees with our results, it falls to that person to do another experiment and prove the contrary case. -- E. Brad Meyer
"Professional studios rarely use electrostatic loudspeakers because they won't play loud enough and are not sufficiently reliable for professional use."
Sorry, but how loud is "loud enough" and how reliable is "sufficiently reliable for professional use"?? I have three pairs of Acoustats, the oldest of which is more than 20 years old, and all are capable of nearly 115 dB with sufficiently powerful amplification. They can play at levels over 100 dB all day long and simply don't die. No need to refoam a woofer or worry about a burned voice coil. Now I know that there are SOME electrostats that are fragile, but the truth is that not all are that way, nor is it an inherent trait of being an electrostatic speaker.
This brings up another question though, why exactly is it necessary for a professional studio to listen to music at such elevated levels?? Is it to hear all the details in the recording?? If so then the speakers they are using are not good and they would be better served using something with a high resolution so that they can listen at more reasonable levels and still get all the information they require.
Philips once used to use Audiostatic loudspeakers for mastering their classical recordings and they made some very fine sounding recordings during this period.
"We conducted that test with the same rigor as the others; levels of the two signals were matched within 0.1 dB at 1 kHz"
Why not with a broadband source like pink noise?? I have done level-matched preamp tests, and I found that using pink noise to set the SPL level for each preamp (within 0.3 dB in my test) worked quite nicely, and would perhaps introduce less bias if one or the other source is somehow not flat and 1 kHz gives a significantly different level between test units.
Okay, okay. I should have known better than to cast aspersions on ESLs around here.
Yes, many of them can play loud enough for me, but a peak level of 115 dB SPL at the monitoring chair does not, I'm afraid, come close to many people's requirements (nor is that level sustainable by any ESL I know down into the chest-pounding dance-club bass range; YMMV). Sometimes it's necessary to reveal how much skin there is in a kick-drum sound to a room full of noisy, pharmaceutically impaired musicians.
You say no one should need levels like this. You may also say they do much hearing damage, especially to the poor mixing engineers who are in there all day. If you said those things I would agree, but it wouldn't change the nature of the market.
You asked about level matching with pink noise. It's necessary to do this to less than 0.1 dB, which is much harder with a time-variant source. The pitfall you cite with non-flat devices certainly exists. But, partly because it's fast, easy and repeatable, my tendency has always been to use 1 kHz and let the rest of the spectrum do what it will. If the device is non-flat by more than a couple of tenths of a dB over a couple of octaves or more, you're gonna hear it whatever you decide about the level match. That was very true for my power amp test from 1991, as you can see from the graphs. It's at http://www.soundandvisionmag.com/features/651/the-ampspeaker-interface.html
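The time-variance point can be illustrated numerically: short-window RMS readings of a steady 1 kHz tone are essentially identical from window to window, while pink-noise readings wander by a sizable fraction of a dB, which makes a sub-0.1 dB match much harder to verify. A rough sketch, assuming an FFT-shaped pink-noise generator of my own (nothing here is from the actual test):

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 48_000

def pink_noise(n, fs=FS):
    """Approximate pink (1/f power) noise by spectrally shaping white noise."""
    spectrum = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, 1 / fs)
    f[0] = f[1]                      # dodge the divide-by-zero at DC
    spectrum /= np.sqrt(f)           # 1/f power -> 1/sqrt(f) amplitude
    x = np.fft.irfft(spectrum, n)
    return x / np.sqrt(np.mean(x ** 2))   # normalize to unit RMS

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Ten seconds of each source, measured over 100 ms windows.
win = FS // 10
sine = np.sin(2 * np.pi * 1000 * np.arange(FS * 10) / FS)
pink = pink_noise(FS * 10)

sine_levels = [rms_db(sine[i:i + win]) for i in range(0, len(sine), win)]
pink_levels = [rms_db(pink[i:i + win]) for i in range(0, len(pink), win)]

sine_spread = max(sine_levels) - min(sine_levels)
pink_spread = max(pink_levels) - min(pink_levels)
print(f"1 kHz tone window-to-window spread: {sine_spread:.6f} dB")
print(f"pink noise window-to-window spread: {pink_spread:.2f} dB")
```

Longer averaging windows shrink the pink-noise spread, which is why broadband matching takes more patience than a steady tone.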
-- E. Brad
Good Lord, can you really be serious?
"Of course this is a scientific paper; it's been published in a refereed journal, where people of demonstrated competence pick it apart and demand whatever they think it needs before they allow it to go to publication. The methodology is sound and the work has stood up to the challenges the reviewers posed, which were few, because we knew what we were doing. They wanted a bit more data reduction, which we supplied, and they otherwise pretty much appreciated our thoroughness."
What, are we to suppose that you gave them an extensive verbal description of your methodology, procedures, detailed results... the whole shebang! What then, should we suppose they said: "Great, look, write up a short summary and we'll publish it in our scientific journal... Oh! Don't forget the conclusions, but if you want to leave off all the boring detail, No Problemo! We're already fully satisfied on those points and there's no need to bore the poor readers to death."
Did it go down something like that? Surely it must have, because a member here reported after reading the full report that "... the test description and analysis is surprisingly thin. No equipment readout, no results breakdown by listener or location or whatnot, no detailed description of listening venues, no musical selections. No null hypothesis, no description of type I/II error."
Which brings up an interesting question: how exactly would a reader come up with "a cogent objection" or spot "a flaw in our procedures or conclusions"?
I mean really! Is this a reflection on what passes for "science" for the folks of "demonstrated competence" at the "scientific" journal in question?
"a memeber [sic] here reported after reading the full report that "... the test description and analysis is surprisingly thin. No equipment readout, no results breakdown by listener or location or whatnot, no detailed description of listening venues, no musical selections. No null hypothesis, no description of type I/II error."
Okay, again you read someone else's opinion, accepted it as gospel, and insist you know what you're talking about on that basis. You seem determined to critique the paper without reading or really thinking about it, so I'm about done with you.
Each of those criticisms, as it happens, is either untrue or irrelevant. The exact equipment doesn't matter because it was well chosen and was good enough for the test (we're preparing a more specific list, to be made available to those who have read the paper and wish to know). There were several venues, all chosen for the excellence of the room; one important criterion is that the background noise must be extremely low for enough detail to be heard, and we measured and reported this.
There is little point in burdening the reader with individual listener data -- though we checked the high-frequency hearing limit of most subjects, since that seemed possibly relevant -- when not one person could hear differences with music at normal levels. Most subjects listened from the sweet spot in our main system, of which there is a photograph in the paper, but again no one, sitting anywhere, passed the test.
A list of musical selections will likewise be made available; we submitted one with the paper but it was not published. The null hypothesis -- that there was no detectable difference between the high-bit audio and the same signal passed through our codec -- was obvious from the paper. Type II error (Have you read Leventhal's paper? Somehow, I doubt it.) is relevant only when there are results that show positive correlation, but not enough of one to meet the 95% confidence limit. We had no such data. No one even came close.
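Since the null hypothesis, the 95% confidence criterion, and Type II error all came up: in a two-way forced-choice listening test the criterion is a one-sided binomial tail, and Type II error is about the test's power to catch a real but weak discriminator. A small sketch of both calculations (the trial counts below are illustrative, not the paper's):

```python
from math import comb

def p_value(correct, trials):
    """One-sided binomial p-value: the chance of getting at least
    `correct` of `trials` two-way forced-choice trials right by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def power(true_p, trials, alpha=0.05):
    """Probability that a listener who is genuinely right on each trial
    with probability `true_p` scores well enough to meet the criterion.
    (1 - power is the Type II error rate Leventhal wrote about.)"""
    k_crit = next(k for k in range(trials + 1) if p_value(k, trials) <= alpha)
    return sum(comb(trials, k) * true_p ** k * (1 - true_p) ** (trials - k)
               for k in range(k_crit, trials + 1))

print(p_value(60, 100))  # ~0.028 -- meets the 95% confidence criterion
print(p_value(55, 100))  # ~0.184 -- does not
print(power(0.60, 100))  # how often a weak (60%) discriminator is caught
```

Leventhal's point is that with few trials, power against a weak discriminator is low; Meyer's reply is that it only bites when scores cluster above chance without reaching the criterion, which he says did not happen.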
"I mean really! Is this a reflection on what passes for "science" for the folks of "demonstrated competence" at the "scientific" journal in question?"
The idea that the people who actually do the work in the audio industry are all fools, who neither know what they're doing nor understand anything about scientific procedure, is bandied about rather easily around here, I notice. Saying such things just makes you look bad, a job I leave in your competent hands. -- E. Brad Meyer
"The exact equipment doesn't matter because it was well chosen and was good enough for the test "
You are right, BUT who decides that the equipment was well chosen and good enough for the test?? You, or the reviewers of your paper? The reviewers should be allowed to determine this. Your job is full disclosure in the paper, not a pre-decided "it was good enough". When I publish papers in scientific journals I disclose ALL the instrumentation we are using for our measurements. We also show examples of those measurements so that people can see that the instrument is operating without bias and that the data delivered meets important criteria.
An example: I regularly make measurements in the lab on trace levels of organic compounds. I have three different mass spectrometers in the laboratory that I could use to make those measurements. One of them though has a sensitivity about 100 times greater than the other two.
If I were, for whatever reason, to use one of the lower sensitivity instruments I could very easily state that most of the compounds that I KNOW to be there are not there because I cannot detect them on that instrument.
If I were to publish a paper saying that we cannot detect those compounds because they are not there, but state in the methods section that I am using merely a Brand X mass spectrometer with relatively high detection limits, then my peers would likely ask, "Why the hell didn't you use a more sensitive instrument, because we KNOW that the compounds are present at lower levels than you could measure?" If I didn't disclose the mass spectrometer in the paper for fear of getting such a question, then the first question would be, "What type of mass spectrometer did you use, and is it good enough?"
That your peers were apparently unconcerned about your failure to disclose the equipment you used for your test says a lot about how lax they were in their duties as reviewers, OR there is an unreasonable bias in the Audio Engineering Society that the audio equipment used is not important for listening tests!! What rubbish, because in other scientific fields the equipment used to run a test is VERY important. If I were to use a mass spectrometer from the 1960s, there might be a lot of eyebrows raised when that paper is submitted for publication. It seems that your peers were only concerned that your PROCEDURE was correct, and not the meat of how it was actually conducted and with what equipment, which, despite what you think, is highly relevant. I have a Ph.D. in analytical chemistry with many years of training and experience in making analytical measurements and interpreting data, and I can tell you that in chemistry or audio it's just as relevant.
The whole point of audiophile gear is that the exact equipment DOES matter and that it will introduce significant bias if not chosen carefully. I applaud your use of electrostatic speakers, but what amp was used?? This is important because, as you are surely aware, electrostatic speakers present a highly unusual load in that they are largely reactive in nature, often with wild impedance curves, and can drive many amps into fits of oscillation or just plain bad performance. Why don't you publish the equipment list and let US decide if it was a good choice or not?? It is not like using lab equipment, where as long as the voltage is right it's a good power supply (we would still include it in the methods section of a paper, though).
I've already discussed a lot of this with David Moran on HydrogenAudio (see link).
People who are using my complaint to justify attacking the paper as a whole, who haven't read the paper themselves, definitely don't know how I feel about it. I don't have a problem with the conclusion at all. And for that to be true, ultimately, I don't have a problem with the test setup. I'll probably never be in as good of a position to evaluate the value of high res music as you were with your test setups. It's a result that, even if I were to take your setup with a grain of salt, is still a test well beyond my means both financially and in terms of my listening experience. So I trust it.
Still, I understand that much of the data was omitted either because the editors thought it was not pertinent (i.e., the music selections), or because it was perceived that no rational debate on the data is possible. The former can't be helped, but the latter seemed like a somewhat cavalier attitude to take, and one that confused me. Even though, as some others have pointed out, it's most likely true.
Like I said to David: "Trust us" is not a very reasonable argument to use in a technical paper. Clearly the editors disagreed, though, and I don't think one must resort to conspiracy theories to explain that. I'm still a little confused as to how much detail is acceptable for peer review, versus how much should ideally exist in a paper for the purposes of test reproduction and establishing trust in the test procedure.
Nevertheless: Thank you very much for doing this test!
- http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=57406&view=findpost&p=517818 (Open in New Window)
Initially it was expressed in:
http://www.audioasylum.com/forums/prophead/messages/3/37245.html
See, it's not about "omission of data"!
Then that was expanded upon in:
Howdy
Ironically one of the papers in their references doesn't seem to have these problems, it gives much more detail about equipment used, music selections, details of the test protocol, some demographics of the test subjects, a serious digression into the four outliers who could hear a clear difference, etc.:
D. Blech and M. Yang, “DVD-Audio versus SACD: Perceptual Discrimination of Digital Coding Formats,” presented at the 116th Convention of the Audio Engineering Society, Berlin, Germany, 2004 May 8–11, convention paper 6086
I certainly don't know what the differing constraints are for being published by the AES in these two different fora, but after reading the paper which is the subject of this thread, reading the paper cited above was a breath of fresh air.
-Ted
One is once again reminded of Todd Krieger's inimitable phrase, "token peers".
Why, I've even gotten E. Brad to treat me high-handedly, in that connection:
"This statement was about whatever should or should not
be written at this point in a particular refereed journal. Unless you're
planning to write for it, which something tells me you aren't, it doesn't
apply to you."
Good to know.
I'll sign off --
clark, AES Life Member
"Sometimes experimenters may make systematic errors during their experiments, unconsciously veer from the scientific method (Pathological science) for various reasons, or, in rare cases, deliberately falsify their results. Consequently, it is a common practice for other scientists to attempt to repeat the experiments in order to duplicate the results, thus further validating the hypothesis." (see link)
It seems clear that documentation in this case was lacking and hence it would not be possible to replicate the experiment based upon the published paper. This is a sad statement for the documentation of an empirical investigation published in a peer reviewed scientific journal.
It would be entirely futile to argue this point, as the demands of the scientific method clearly were not met, and frankly it matters little what I, you, or the editors feel about it. The scientific method makes no allowance for lowering standards regardless of whether rational debate may or may not be possible, and it certainly makes no allowance for "Trust us" assertions!
I agree. Full disclosure of equipment used and methods is a mandatory, standard part of all scientific papers. It would be like me publishing a paper on detection limits in mass spectrometry for some compound and then not telling readers what kind of mass spectrometer I was using... instead simply saying that it was a properly designed and functioning unit. I can assure you lots of questions would be forthcoming from the reviewers if I had omitted that critical piece of information. Also, omission of sample handling would raise a few eyebrows!
You're reasoning with a lynch mob. These are not people interested in comparing, testing, improving, and accumulating knowledge. It's a fraternity enamored with rituals and rites; a club, rather, or a mutual affirmation society, not a site for openly debating issues in electrical engineering, acoustics, and the physiology of perception, as one might otherwise be misled to think. TL
"You're reasoning with a lynch mob."
Well, when anyone decides the paper is no good without reading it, I have to agree there's no point in arguing with that person.
We tried to present the data necessary to prove our result -- that if high-bit audio sounds better, the extra bits are not responsible. If the equipment we used was defective in any audible way, the subjects would have heard the difference, so any errors of the type we are alleged to have made would have led to the opposite outcome. That's why, for example, the gain stage that we used to match the levels of the CD link to the original was in series with the CD link. When subjects heard the high-bit audio, it wasn't in the circuit. So if it had an audible flaw, again the experiment wouldn't have come out as it did.
In the meantime, we are planning to make available the details we left out for the purposes of clarity and brevity. That handout/email should be ready this coming week if anyone here is interested. -- E. Brad
Just as it was done -- and to the hilt --
that house of cards that Charles built,
and John came all the way to paint
(now ain't that man a saintly saint!)...
Clarkjohnsen rushed up with the roses.
Some fires, yes, but with their hoses
the younger boys delivered doses of their famed
and sour salvos, and lo! they thought once mo’ they’d tamed
that nasty nature (oh so, um, unyielding!) --
Was safe again what they were shielding:
those labels that they do like wielding...
(Do I compare to Henry Fielding?)
TL
Um... I'm not sure quite what *all* of this means, but it looks as though you may, in the course of writing it, have called me "young".
If that's true -- THANKS! It's been a long time since anyone mistook me for anything but the geezer I am. -- EBM
Sorry for all the obscurity. It was quite modestly meant as a mockery of the kind of mentality that gets people to jump at the chance of launching a 258-post onslaught on an article that no more than one person has read. Thus also, "younger boys" was basically nothing more than a reference to the acolytes following their opinion leaders named earlier, so I'm afraid I never meant to include you in that category...
The last line was a semiprivate joke meant to schematically connect with a recent string of situational humor by poster Richard BassNut Greene.
Thanks for supplying all that additional information about your interesting and quite pioneering experiments (and also of those to which J. Atkinson has given his own interpretation). You guys are the hard workers. Moreover, this is an issue that has been on my mind and I'm curious to learn more.
TL
"It was quite modestly meant as a mockery of the kind of mentality that gets people to jump at the chance of launching a 258-post onslaught on an article that no more than one person has read."
Now THAT I can really appreciate.
"I am afraid I never meant to include you in that [young] category..."
Ah, I knew it was too good to be true.
Meanwhile, if there's anything else you're curious about, ask away. -- E Brad
"It was quite modestly meant as a mockery of the kind of mentality that gets people to jump at the chance of launching a 258-post onslaught on an article that no more than one person has read."
Oh yes! I remember a 'discussion' a few years ago about an article by Tom Nousaine in Stereo Review describing a Geek and Tweak systems test which at most two or three here had actually read. It was truly pitiful!
I don't think E. Brad minded being called "young" any more than I would!
-
"It pertains to all men to know themselves and to be temperate."
---Heraclitus of Ephesus (trans. Wheelwright)
One tries to pre-empt any possibility of open, unprejudiced debate if one can't afford such. As you point out, they like doing that sort of thing. I'm pretty convinced there have been more than the two or three occasions when I've seen John Atkinson here "accidentally" slip out a wholly unsubstantiated rumour (or not even a rumour but an insinuation, claimed to be something that people "in the know" had told him "confidentially," which privilege he had promised "to respect," and so forth -- all that crap) about his competitors in the business, only to backtrack just enough a couple of days later when pressed on the issue, with vocal regrets that, quite unfortunately, for the time being he was not in a position to speak more about the subject (back up his claims, in other words) due to some circumstance or another, like the advice of his lawyers (clearly good advice, if that's what he got), etc., etc.
Oh well, the story was already out and nothing to do about it now!
(And yes I can paste the stuff here if anyone's interested.)
So, nothing new if Charles Hansen and his disciples now opt for the same strategy. Only I can't see why he feels he should be so worried (apart from being a manufacturer of SACD-capable players). clarkjohnsen everyone already knows; that's his sole mode of operation, but I don't take him seriously, and it's very improbable that too many others do anyway. (Strike that; I just saw another post by someone claiming to do so. Everything's possible, then.)
Yet it doesn't cease to astonish me. It's just not how you were told at primary school that we do these things.
TL