Audio Asylum Thread Printer: Get a view of an entire thread on one page
In Reply to: RE: what is your most shocking wikileakesque revelations about the audio industry in the last 3 years? posted by Don Till on December 13, 2010 at 16:26:53
Yeah, it's difficult to adjust EQ on a recording-by-recording basis. There's a lot to be said for old-fashioned tone controls, which may have been coarse in their action but were at least able to render some recordings listenable by correcting massively out-of-kilter balance. As things stand, I use the (less than ideal) graphic equalizer in Foobar 2000 to make truly ghastly recordings listenable. Often this means pulling down the violin-screech range, which is easily done once you know the frequency range that's affected. But in the future, I'd like to see two capabilities: per-recording EQ settings that are remembered or stored in the file, like ReplayGain settings, and deconvolution, the technique that was used to remove the horn resonances from the Caruso recordings -- essentially comparing the tonal balance of a recording to a better recording of the same work and inferring the response abnormalities from it. Perhaps some day it will be possible to automatically analyze a recording on the basis of its instrumentation and compare that to a model of how such a recording should sound.
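The deconvolution idea can be sketched in a few lines. This is only a toy illustration, not the actual process used on the Caruso restorations: a synthetic "horn resonance" is applied to noise standing in for a recording, the abnormality is inferred by comparing magnitude spectra against the clean reference, and the inferred response is divided back out. The sample rate, resonance frequency, and all names here are made up for the sketch; a real restoration would compare smoothed long-term spectra of two different recordings of the same work.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8192
reference = rng.standard_normal(n)      # stand-in for a well-balanced recording

# Simulate a horn-resonance-style coloration: a broad boost near 2 kHz.
spectrum = np.fft.rfft(reference)
freqs = np.fft.rfftfreq(n, d=1 / 8000)  # assume an 8 kHz sample rate
coloration = 1 + 3 * np.exp(-((freqs - 2000) / 200) ** 2)
colored = np.fft.irfft(spectrum * coloration, n)

# Infer the response abnormality by comparing magnitude spectra.
mag_ref = np.abs(np.fft.rfft(reference)) + 1e-12
mag_col = np.abs(np.fft.rfft(colored)) + 1e-12
estimated = mag_col / mag_ref           # recovers the coloration curve

# Divide the estimated response back out ("deconvolve").
restored = np.fft.irfft(np.fft.rfft(colored) / estimated, n)
print(np.max(np.abs(restored - reference)))  # residual error, near zero
```

In this toy the recovery is essentially exact because the same source signal is used on both sides; with two different performances, only the averaged, smoothed spectra are comparable, which is why the result is an EQ curve rather than a perfect inverse.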
Follow Ups:
"But in the future, I'd like to see two capabilities: per-recording EQ settings that are remembered or stored in the file, like ReplayGain settings, and deconvolution, the technique that was used to remove the horn resonances from the Caruso recordings -- essentially comparing the tonal balance of a recording to a better recording of the same work and inferring the response abnormalities from it. Perhaps some day it will be possible to automatically analyze a recording on the basis of its instrumentation and compare that to a model of how such a recording should sound."
This type of EQ is what Andrew Rose claims to be using for his historical recordings.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
That is brilliant stuff! I'm having a hard time tearing myself away.
"Perhaps some day it will be possible to automatically analyze a recording on the basis of its instrumentation and compare that to a model of how such a recording should sound."
That's asking a lot, but I don't think it's out of the range of possibilities. At this point I'm thinking venue or spatial information could be much better served by digital. Using transfer functions, one could store information that can be used to map venue, seating, and microphone information into the playback environment. Given a recording from a venue, one could choose his desired seating position and expect a reasonable sonic representation of what it actually sounds like in the selected listening seat. Instead of just left/right channel information, raw microphone feeds could be included as well. The possibilities of digital, and of going beyond stereo, are endless.
The natural recordings clearly can benefit, but IMO these kinds of things really open up a new realm for studio recordings. Not to mention a whole variety of configurability and personalization for the end user.
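The transfer-function idea above amounts to capturing each venue/seat as an impulse response and convolving any feed with it at playback. A toy sketch, with an entirely made-up impulse response (real ones would be measured, per seat, in the actual hall):

```python
import numpy as np

fs = 8000
dry = np.zeros(fs // 2)
dry[0] = 1.0                          # a single click as the source signal

# Hypothetical impulse response for one seat: the direct sound plus a
# single wall reflection arriving 50 ms later at reduced level.
seat_ir = np.zeros(fs // 2)
seat_ir[0] = 1.0
seat_ir[int(0.050 * fs)] = 0.4

# Convolution with the seat's transfer function maps the dry feed to
# what that listening position would hear.
at_seat = np.convolve(dry, seat_ir)[: len(dry)]
print(at_seat[0], at_seat[int(0.050 * fs)])  # prints: 1.0 0.4
```

Choosing a different seat would just mean convolving with a different stored impulse response; with raw multi-microphone feeds, one response per feed per seat.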
I remember seeing that proposed -- predicted, actually -- back in the '70s. The essential idea was that a recording would contain a mathematical model of the hall. The technology now exists to do it; a couple of Japanese companies have made acoustical models of major concert halls. You could, as you imply, begin with two-microphone stereo and "fill in" the sound from the rest of the hall at the time of reproduction. (Something like that is already being done at a primitive level with surround releases that use synthesized reverberation. This, I think, can offer superior results insofar as the directionality of cardioid microphones makes it difficult to isolate the reverberant from the direct sound when making a recording. It's a case in which purist recording techniques are actually inferior.) Or, as you also point out, you could begin with a multitrack recording and do a more complete reconstruction of the hall acoustics.
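Synthesized reverberation of the kind mentioned in the parenthetical is classically built from Schroeder comb and allpass filters. A single feedback comb is the minimal building block (delay and feedback values below are arbitrary; a real hall simulator combines several combs and allpass stages tuned to the modeled space):

```python
import numpy as np

def comb_reverb(x, delay, feedback):
    """Feedback comb filter: y[n] = x[n] + feedback * y[n - delay].
    Produces an exponentially decaying train of echoes."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n]
        if n >= delay:
            y[n] += feedback * y[n - delay]
    return y

# Feed it an impulse to see the echo pattern it synthesizes.
impulse = np.zeros(2000)
impulse[0] = 1.0
tail = comb_reverb(impulse, delay=441, feedback=0.6)
# tail[0] = 1.0, tail[441] ~ 0.6, tail[882] ~ 0.36, decaying geometrically
```

The decay time is set by the delay and feedback together, which is how a stored "model of the hall" could be reduced to a handful of parameters in the simplest case.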
Ideally, I think, you'd want to use an array to accurately reconstruct the wavefront of the image coming from the front; surround-sound reproduction is, I think, less demanding, in the case of music reproduction anyway (movie and game effects are a different matter). As in the stereo case, the reproducing array could be fed from an array of microphones, or from a synthesized sound field generated from a multitrack recording. One fellow made a clever proposal to use the directionality of the front array to generate reflections off the other five surfaces, obviating the need for surround speakers and perhaps allowing a more accurate reconstruction.
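The front-array idea can be sketched with simple per-speaker delays and gains, the crudest delay-and-sum approximation of full wave field synthesis; the geometry, speaker count, and sample rate below are all invented for illustration:

```python
import numpy as np

c = 343.0                               # speed of sound, m/s
fs = 48000
speakers_x = np.linspace(-2.0, 2.0, 9)  # 9 speakers across the front wall
source = np.array([0.5, -3.0])          # virtual source 3 m behind the array

# Distance from the virtual source to each speaker sets its relative
# delay (wavefront curvature) and 1/r gain (wavefront amplitude).
dists = np.hypot(speakers_x - source[0], 0.0 - source[1])
delays = (dists - dists.min()) / c      # seconds, nearest speaker fires first
gains = dists.min() / dists             # normalized 1/r falloff

delay_samples = np.round(delays * fs).astype(int)
print(delay_samples)
print(np.round(gains, 2))
```

Driving each speaker with the source signal delayed and attenuated this way approximates the curved wavefront a listener would receive from a source behind the wall; a proper wave field synthesis driving function adds filtering and windowing on top of this.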