Audio Asylum Thread Printer
In Reply to: RE: You need to get a dose of reality posted by BigguyinATL on May 18, 2012 at 10:07:02
...this is more blind leading the blind. They believe the venue's acoustic properties are encoded in the recording and their stereo magically RECREATES the entire hall IN delicious 3D, as if there were some sort of impulse response convolution going on. If you can take some audiophiles, put them in a room, have them listen to recordings BLIND, and they can tell me what hall the recordings were made in (if they were made in a hall at all), I will print this post off on 20 pound bond paper and eat it. You'll never see such an exhibit. They all are sitting there reading the liner notes - "Ah yes, indubitably, this is the Sydney Opera House... I knew it! I recall the acoustics very clearly... haw haw poo poo. Earlier that evening we had scallops prepared by the famous chef..."
Bla bla bla bla.
The closest thing stereo can do to "recreate" an "event" is the properly HRTF-equalized binaural recording, which is best realized with headphones rather than loudspeakers in a room. Loudspeakers complicate the situation too much with their unique acoustic response (phase AND amplitude) and polar response, which is affected by room boundaries, geometry, listening distances, etc.
A $200 pair of headphones can reproduce the ambient info of a binaural recording better than even a $100,000 pair of loudspeakers so long as HRTF equalization is properly employed.
"If you can't hear (read: imagine) what I am hearing (read: imagining) then your speakers are simply not revealing enough..." is what everything written on this entire site boils down to. It's how all arguments begin and end. Remember this when you are considering how much time you spend debating here. The guy you're debating with right now, for example, is the audiophile equivalent of Jo Jo's Psychic Alliance.
Always consider the source, grand-daddy always used to say...
Cheers,
Presto
Follow Ups:
Your words echo a lot of audiophiles, who apparently have never had the opportunity to hear a quality system. Pity.
Ah, the famous "You've never heard a quality system" aka "your system is not resolving enough" routine.
Like I said. Begins and ends with this.
Geoff's right. For some people even the best recording and a matching "resolving" system won't matter.
"That willing suspension of disbelief for the moment, which constitutes poetic faith."
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
...acoustic "information" of the venue is captured in such a way that it can be recreated is wrong.
Look, imaging is wonderful. It's fun. It's neato. It's up, down, back, side to side; hell, last night I had sounds coming from beside me. Must be some phase-related Q-sound type effect. Damned impressive.
But the physics is the physics, Tony. You're grounded in physics, are you not? Look, if someone is using two mics then they are getting the closest thing to a binaural recording as you can get. Why the fuss about binaural recordings? If the mics don't "hear" the venue (and of course the sounds in it) the way humans do, then it's not going to be anywhere near accurate spatially. Yes, you get the impression there are performers in front of you and off to the left and off to the right and towards the back. These are a result of relative levels between left and right channels, reflections off the roof, phase distortion from the speakers, interaction of speaker polar patterns determined by speaker spacing... distance of speakers to back wall.
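The "relative levels between left and right channels" mechanism is exactly what amplitude panning exploits. A minimal sketch using the standard constant-power pan law; the function name and position mapping are illustrative, not taken from any particular mixer:

```python
import math

def constant_power_pan(pos):
    """Constant-power pan law.

    pos is in [-1.0, 1.0]: -1 = hard left, 0 = center, +1 = hard right.
    Returns (left_gain, right_gain) with L^2 + R^2 == 1 everywhere,
    so perceived loudness stays constant as the phantom image moves.
    """
    theta = (pos + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return math.cos(theta), math.sin(theta)

# Center position: both channels at ~0.707 (-3 dB). A "performer in the
# middle" on a mixed recording is often nothing more than this level trick.
left, right = constant_power_pan(0.0)
```

The point being: a phantom image placed this way carries no venue information at all, only an interchannel level ratio.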
If you can change your soundstage by purchasing new speakers or just moving them about the room, how can one then say what accurate is? If the mic-to-mic distance on a two-mic soundstage changes soundstage attributes, then what is the "correct" distance to capture the venue's acoustics so they are accurately conveyed? The answer is "you can't". The recreating is a nifty simulation, some more real and convincing than others but a simulation nonetheless.
Look, I am all for "tricking the listener" into FEELING like he is there. I get it all the time. Wonderful effect caused by artifacts, delays, phase shifts and all kinds of constructive and destructive interference. But to say "the venue acoustics are captured on the recording and if you have a RESOLVING ENOUGH SYSTEM you will magically extract this wonderful information" is a step too far.
There are tricks to recreating room reverb times, and they have to do with capturing an impulse of the space and convolving that impulse with the music material. Assuming the guy will use cans, the time-domain info needed to SIMULATE the acoustics of the venue is indeed captured. I say simulate because when you're listening to cans in an 8 x 10 room you're not in a cathedral, so obviously getting cathedral sound is indeed a simulation. Now if he's NOT using cans, he has two additional problems: he's got his OWN room acoustics, and he's got speaker attributes, which together give you the "speaker-room" equation. To use speakers in a room and impulse response convolution to simulate a space, you would first need to remove the amplitude and time-domain errors of the speaker/room combination - one could create a single impulse that removes the listening room and adds the listening space. I'm not a huge convolution fan so I never got quite that far. Using cans will obviously be much easier, which is probably why binaural recording fans more often seem to skip speakers.
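The impulse-response convolution idea above can be sketched in a few lines. This is a toy illustration with made-up numbers, not any particular convolver: each tap of the hall's impulse response becomes a scaled, delayed echo of the dry signal.

```python
def convolve(signal, impulse):
    """Direct-form discrete convolution: every tap of the impulse
    response contributes a scaled, delayed copy of the input."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

# Made-up "hall" impulse response: direct sound plus two decaying echoes.
hall_ir = [1.0, 0.0, 0.0, 0.5, 0.0, 0.25]
click = [1.0, 0.0, 0.0, 0.0]      # a dry click
wet = convolve(click, hall_ir)    # the click acquires the hall's echo pattern
```

Real convolution reverbs do exactly this, just with measured impulse responses tens of thousands of samples long and FFT-based math for speed.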
"Transported back to the original event". Maybe emotionally, but not acoustically. Believe, guffaw, point and laugh all you want. The emperor has no clothes again. Recording methods and speaker/room interaction obfuscate "room" info, if what is there on the recording is even worth a lick "spatially" in the first place. Most concerts I've seen have mics on instrument groups as well as mics for individual instruments. Now c'mon - tell me this recording setup was done with capturing the venue in mind. Sure, you have some ECHO captured, but when you mix all these mics what you have is a bunch of echoes whose time-domain info is now completely irrelevant. You have multiple locations from which you are recording - the spatial game is over. Neato effect? Still there. Acoustics of venue captured? Nuh-uh.
But hey, if you believe your stereo is a transporter beam that takes you back to the hall, then wow, what the hell are you doing typing on here?
Cheers,
Presto
"The idea that the ...acoustic "information" of the venue is captured in such a way that it can be recreated is wrong."
A large amount of information about the acoustics of the original recording venue can be captured by two microphones. There are certain spatial symmetries that can't be resolved, but in general if one puts a source of impulses at various places in the sound field the recording will change and in such a way that it is possible (e.g. by a computer) to recognize points from these patterns. If sufficient additional information is provided, e.g. a calibration grid, then it will be possible to locate the sound sources in real space. Similarly, if the walls are moved the patterns will also change, making it possible in principle to recognize hall acoustics. Left and right information is obviously present, but so is depth (it even appears in mono). There is also height information because of reflections off the floor or ceiling.
Note that I am talking solely about information that's on the recording, not whether (or how) it can be "decoded" by the human ear/brain/mind system. The sonic patterns at one's head when a stereo is played are not the same as those in a seat at a live concert, so they will have to be decoded differently. The ability to "hear" microphone patterns on recordings, for example, is not something that an untrained listener can do, but an experienced recording engineer can do this with a good playback system.
The way you can tell if a system is "accurate" is by playing a large corpus of reference recordings. This is the way that mastering engineers fine-tune their systems, and it is something that cannot be done by measurements alone, although measurements play an essential role in the setup process. Recording and playback of music are an art as well as a science.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Tony:
I don't disagree with what you're saying in your last post. Let me clarify:
Recreate Soundstage / venue acoustics: not correct
*Simulate* venue acoustics: possibly, to some extent
And yes, I agree that 2-mic methods are going to give one a chance while mixed multi-mic recordings are just a facsimile. That said, the two mics and how they are positioned will greatly affect the perceived "stage". Even so, two mics x feet apart are going to give an entirely different effect than a stereo mic, or two mics placed in close proximity but at different angles.
Even with 2-mics there are a number of different methods.
To say that a system is "set up correctly to play two-mic recordings properly" is therefore a really big stretch. Sure, you can play with placement, toe-in and distance to the back wall. But "correct"? I think reviewers and many 'philes have crossed the line into dreamland once again.
Cheers,
Presto
Agree, "correct" doesn't apply to sound stage, as it is an illusion that depends on the listener. "Preferred" would be a better word.
However, when it comes to tonal balance, e.g. high frequency roll-off or the lack thereof, there is more of a natural reference, at least for acoustic music. The perceived tonal balance should correspond to the perceived tonal balance at some seat at a live performance, and the seat in question has to be somewhat plausible in light of the amount of reverberation and volume. Unfortunately, recordings aren't made to a standardized playback response, and if they are miked, mixed and/or EQed to sound good on a mastering system with one high frequency roll-off, they won't sound so good with a different one. (I presume that the reference system used to monitor the Mercury Living Presence recordings was somewhat rolled off, based on the brightness of these recordings compared, say, to Lewis Layton's work on RCA.)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Tony:
Although there is not an overall "EQ" transfer function that would please everyone, most people seem to like a downward tilt over the entire range - this could well be to compensate for most people simply not finding a perfectly flat response natural sounding. Myself, I like "flat", but after doing baffle step compensation and tweaking things like high-frequency shelf filters and the tweeter level pad (voicing, you could call it) the response is anything but flat.
In a perfect world, the recording engineers would step into a room next to the studio - an audiophile-sized listening room with two speakers and a chair set up with audiophile placement. If they heard their mixes on this "system next door" I am sure the number of decent recordings would go up dramatically. Trouble is, they are EQ'ing to studio monitors that are flat, but they are listening nearfield. They might switch from nearfield monitors to larger house monitors, but in the end, are they ever mixing for systems like the ones we as 'philes assemble? And are not many studio rooms on the dead side, which could be a reason why some recordings are excellent except for a sizzling hot high end?
I have recording equipment here... you just gave me an idea...
Cheers,
Presto
The reason why people prefer slightly rolled-off response is that the studio monitors used have similar response. Most CDs (including classical and probably even some so-called "purist audiophile" recordings) are EQ'd at some point in the "mastering stage". (The correct term for this stage is "pre-mastering", as the actual mastering takes place at the manufacturing plant.) This EQ is needed because the earlier stages of processing produced imbalanced results, either due to accidents or because errors were deliberately made to offset errors in the inferior monitoring and room acoustics at the recording venue or mixing stage.
There are some very subtle and complex relationships between equalization and sound stage. After making lots of adjustments on the electronic crossovers of my Focal satellites and subwoofer I was still left with a few peaks due to room modes, at 40 Hz and 127 Hz, and a small peak at 781 Hz. After living with these for the past two months I decided yesterday to see what would happen if I took them out with a parametric equalizer. As expected, a few recordings that were bass heavy because of peaks at similar frequencies were tonally improved. This was to be expected, but what was much more surprising is that the sound stage improved greatly in depth: my mind was no longer cuing in on the room resonances of my small listening room and could hear past them to more of the ambiance on the recordings.
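The kind of parametric cuts described above can be sketched with the textbook peaking biquad from the widely used RBJ audio-EQ cookbook. The -6 dB depth and Q of 4 below are made-up illustrative values, not the settings actually used on the Focal system:

```python
import math, cmath

def peaking_eq(fs, f0, gain_db, q):
    """Peaking biquad per the RBJ audio-EQ cookbook.
    Negative gain_db cuts a narrow band centered at f0 Hz."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b = [1.0 + alpha * a_lin, -2.0 * math.cos(w0), 1.0 - alpha * a_lin]
    a = [1.0 + alpha / a_lin, -2.0 * math.cos(w0), 1.0 - alpha / a_lin]
    a0 = a[0]
    return [x / a0 for x in b], [x / a0 for x in a]

def gain_db_at(b, a, fs, f):
    """Magnitude response of the biquad at frequency f, in dB."""
    z1 = cmath.exp(-2j * math.pi * f / fs)  # z^-1 on the unit circle
    h = (b[0] + b[1] * z1 + b[2] * z1 * z1) / (a[0] + a[1] * z1 + a[2] * z1 * z1)
    return 20.0 * math.log10(abs(h))

# Illustrative cuts at the three room-mode peaks mentioned in the post.
fs = 44100
mode_filters = [peaking_eq(fs, f0, -6.0, 4.0) for f0 in (40.0, 127.0, 781.0)]
```

A filter like this is minimum phase, which matters for the discussion that follows: it can only undo problems that are themselves minimum phase.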
To do this EQ I had to use the iZotope parametric equalizer that comes with Soundforge, a pain in the ass process that takes a minute to process each recording to be played. Unfortunately, this software is tied to Soundforge and is not usable as a VST plugin that I can use with my player software such as cPlay.
Do you know of any good parametric equalizers that run as VST plug ins? Are you still recommending the one from aixcoustic.com?
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
The ReaEQ plugin from the Reaper folks. Works at 64-bit precision, and is minimum phase. There is also ReaFIR, a linear-phase para-graphic (meaning you can draw your own curve if you aren't trying to be precise at the moment).
The plugins are free. Reaper has a package for either 64 bit or 32 bit hosts. They are not pretty, but are extremely effective. I have worked with some of the expensive plugins, and always come back to these two.
Give me Ambiguity or give me something else!
Tony:
Wow, that was a while ago "AIX Acoustic". Good memory!
I am running DSP crossovers almost exclusively now, except when I pop in my passive-crossover based reference monitors to see how far I've wandered off the beaten path. ;) As such, I can do pretty much any equalization imaginable right in the crossover.
I might have to listen to the AIX acoustic crossover again - it's been years!
Cheers,
Presto
Not memory on my part. AA search. :-)
I got it to work and set it up for a similar response to what I was getting out of the iZotope parametric EQ in Soundforge. There were a few differences, however, e.g. "bandwidth" is specified instead of Q. Also, I am suspicious of the AIX plugin because its "flat" position had about 0.5 dB of gain. Not something any serious DSP expert would ever do.
I need to do some more tests to see if this gain problem is the only problem. At least it interfaces with Soundforge and cPlay.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Thanks for bringing that to my attention Tony. Some basic testing (in versus out) is called for here. Simply recording the output of the plugin in the digital domain should do it... I'll see what version of this EQ I have.
Cheers,
Presto
I am no longer seeing this difference in gain, so perhaps the problem only appears in certain conditions. Or perhaps it could have been "cockpit error" on my part when I first started playing around with the program.
I've got the EQ running as a VST plugin under cPlay and this is definitely improving my sound, cleaning up some muddy bass on some recordings as well as improving imaging. I've done some in/out tests and the amplitude response appears to be more or less as displayed on the graphs. I've looked at the impulse response and there is no pre-ringing; it looks like minimum phase. I didn't observe distortion products on some sine wave tests, but this was not done at high precision, so there could be non-linear results at -130 dB that I might have missed.
My next step will be to try some time correction as well, but this is much more complex and I'm not sure what programs are available for testing that aren't expensive.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
I think you're on the right track. Remember, when CDs were introduced they "had to" sound better than the LPs they were targeted to replace, and an easy way was to make a brighter recording that anyone would notice as "more extended". Get a copy of Michael Hedges' Breakfast in the Field and hear an early, very clear but very bright recording.
Part B, I think, is that harmonic distortion normally falls to the high side of the fundamental producing it, so in the case of loudspeakers one finds that the harder they are driven, the brighter they sound, until eventually they sound bad. As the size of loudspeakers has fallen and power handling has climbed, and lacking a standard measure of linearity or usable loudness, what we have now are speakers that are often less signal-faithful than in the old days.
An Altec horn, for example, had essentially no power compression, because if you drove it that hard the wire came off the voice coil at ~125-150 C; nowadays some voice-coil adhesives will tolerate >350 C (Rdc doubles at about 230 C).
Also, I do think we "hear through" obstructions without being conscious of it, and so if you remove a cue that the speakers or room add, what you hear is a more faithful image, even though you were previously unaware of that same cue.
I would offer that for the most part, if you can measure and correct your speakers based on outdoor / anechoic measurement, then you can be pretty sure that what you're doing will fix both magnitude and phase simultaneously.
To the degree that what you're trying to EQ is caused by a delayed signal (reflection) combining with the direct sound, you can't really fix it, as it is not a minimum-phase problem. In the old days, they said only cut peaks and bumps, and NEVER try to fill a sharp deep notch, as that is the signature of a comb filter (caused by a delayed signal).
While the miracle of DSP and incomplete explanation or limited measurement resolution will make it appear you can "fix everything", the absolute best one can do is fix it at the one specific spot where the measurement was taken, often making it worse everywhere else.
The "fixed" location is limited to about a quarter wavelength in size at the highest frequency being corrected, so it is truly futile (acoustically) to fix it with DSP when the listening area is much larger than the wavelength. At 20 kHz, the wavelength is about 5/8 inch.
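The quarter-wavelength limit above is just arithmetic. A small sketch, taking the speed of sound as roughly 13,500 inches per second:

```python
def correction_zone_in(freq_hz, c_in_per_s=13500.0):
    """Rough size (inches) of the region where a single-point DSP
    correction stays valid: about a quarter wavelength at freq_hz."""
    return c_in_per_s / freq_hz / 4.0

# At 20 kHz the wavelength is ~0.68 in, so the corrected "spot" is well
# under a quarter inch across - far smaller than any listening seat.
zone_20k = correction_zone_in(20000.0)
```

The same formula shows why correction works better at bass frequencies: at 100 Hz the valid zone is a couple of feet across, which can actually cover a listening chair.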
Fixing the source is the best way, I think. Can't help on a plugin, but I would offer that LSPcad can emulate a number of speaker controllers which have parametrics; the downside is most DSP units are somewhat different when you call for a given filter set or alignment.
You can also listen to music through that alignment if you want. For work, I take the actual unit and measure / adjust the eq until it overlays on the transfer function or response I need.
Also, you could save the impulse response for the correction and convolve it with the music; the "Gratisvolver" at Cat acoustics is free anyway. That also appears to be a way of transferring "what a speaker sounds like", as the speaker's impulse response can be convolved with music too - sort of a software way of doing the loudspeaker generation-loss recordings we do at work.
Best,
Tom
Get a copy of Michael Hedges' Breakfast in the Field and hear an early, very clear but very bright recording.
The original analog version from 1981 or the later CD copy? I have both. In this case, I think the "villain" is the minimal processing done on the recording. Your post piqued my interest because I find that overall, the original analog copy is an incredibly natural sounding recording - albeit a touch bright (an easy thing to cure). On "The Happy Couple", you can so easily visualize his hands moving in his inimitable way. I've had the good fortune to see Hedges live three times before his untimely accident.
In another post, you mention the KEF Blade. I was in the Bay Area earlier this year and noticed a hi-fi shop in downtown San Francisco near the Ruth's Chris where my wife and I had dinner. Since it had literally been years since I've set foot in an audio dealer, we walked over. They graciously played a couple of tracks on the Blades. As a coherency freak, I found them good in that respect. But the apparent image size was tiny and very directional. Not my cup of tea. :)
Hi
I have an early CD of it (and one of his others).
I like the recordings very much, but they are what I was talking about as to how the early CDs were voiced differently.
In that case, on speakers that measure flat it is a very bright recording - bright, I think, "so it sounded better".
I have not heard the Blade speakers myself, but they did talk about hearing the radiation shape, though they didn't provide any directivity measurements that I recall.
While I noticed the effect developing the Synergy Horn speakers for work, they are constant directivity and radiate as if they had one driver.
...they are constant directivity and radiate as if they had one driver.
Yes. Back to the Blades for a moment, they seemed quite coherent even if it was a coaxial mid/tweeter and two side firing woofers. It just suffered from shrunken image size.
I'm thinking that would not be the case with an SH-60. Maybe two would be better.
I agree, the recording is not bright. Same is true of all or most Windham Hill Recordings.
Edits: 05/27/12