Audio Asylum Thread Printer: Get a view of an entire thread on one page
In Reply to: RE: what is your most shocking wikileakesque revelations about the audio industry in the last 3 years? posted by josh358 on December 13, 2010 at 10:17:01
First, both characterizations you make about what I believe are incorrect.
To the point, and in all honesty: you've been ignoring (not intentionally, I think) the most important aspect of my position throughout both threads of this conversation. Your responses have been reasonable, so I haven't pushed, but given the clarity of your responses I have to go right to the heart of the matter, which is why we have a disagreement.
"Go for best possible reproduction first. I know you don't like that approach, but in my experience, you can't nurse the best possible sound out of a bad reproducer no matter what the quality of a recording. This isn't an abstraction -- take a superb reproducer (and there are several in the high end audio market, although a surprising number fall short) and put a relatively unprocessed recording on it, and it will take you a lot closer to the original performance than anything else."
You are wrong: I do like that approach. However, any decent mid-fi stereo will take you closer to the original performance given an unprocessed recording.
The difference between the original performance and what you hear at playback is the result of a filter. That filter consists of the recording chain and the playback system, including the environment/listening space.
As audiophiles we can do nothing except adjust our systems to give us what we perceive as the best possible performance.
Unfortunately, using unprocessed recordings to evaluate hi-fi performance runs into a number of obstacles.
1.) One needs to assemble a comprehensive test set of unprocessed recordings capable of completely characterizing hi-fi performance.
2.) One needs to be capable of quantifying the results of listening to those recordings.
If someone is capable of doing #1 and #2 correctly, all recordings will benefit from components chosen by that methodology, even bright, forward ones (remember, you have to turn down the volume!).
If someone is NOT capable of doing #1 and #2 correctly, some recordings will benefit from components chosen by that methodology, but others will become even more alienated.
In fact, I would suggest that doing #1 and #2 incorrectly WOULD lead to system inaccuracies that make the test recordings sound preferable (more real and more live) compared to how they sound on the accurate system.
"(I know you don't agree, but it's no secret that poor recordings tend to be too bright and forward, and if you compensate for that by tilting the balance you'll compromise the good ones).
But you are wrong: I do agree. The point I am trying to make is that there is no guarantee that, by using unprocessed recordings, one is not being fooled by subtle (or not so subtle) colorations that tend to make such recordings sound more real or more live. Also, if using such a methodology is alienating other recordings (again, remember my volume-adjustment comment), one is selecting components with colorations that favor such recordings.
In a nutshell: surely I can't disagree that rolling off high frequencies and limiting low-end extension will reduce the goodness of many high-quality recordings in order to facilitate reasonable playback of the world of recorded works. It seems like a small price to pay. On the other hand, I find any coloration that increases the goodness of some recordings while making others sound bad or unlistenable to be simply intolerable.
Follow Ups:
> But you are wrong: I do agree. The point I am trying to make is that there is no guarantee that, by using unprocessed recordings, one is not being fooled by subtle (or not so subtle) colorations that tend to make such recordings sound more real or more live. Also, if using such a methodology is alienating other recordings (again, remember my volume-adjustment comment), one is selecting components with colorations that favor such recordings. <
In my experience, unprocessed recordings have less coloration. That's both a practical judgment -- they sound more natural when I listen to them -- and a matter of engineering: a relatively flat pair of microphones located at a distance from the performers produces a signal that is closer to the sound field at the listener's position than multiple microphones that are located unnaturally close to the instruments. And that's before cowboy producers start mucking with the EQ! It's almost impossible to make a convincing recording of a large ensemble with a multitude of microphones, though I've heard some pleasing ones.
One well-known example of this is the screechy violin effect, which is a consequence of miking the string section up close and from above: the violin is strongly directional at certain frequencies, and if you capture only the beam that's aimed straight up, the instruments sound screechy and hard.
> In a nutshell: surely I can't disagree that rolling off high frequencies and limiting low-end extension will reduce the goodness of many high-quality recordings in order to facilitate reasonable playback of the world of recorded works. It seems like a small price to pay. On the other hand, I find any coloration that increases the goodness of some recordings while making others sound bad or unlistenable to be simply intolerable. <
I certainly agree with that last. In general, I find that brightness or peakiness is much more offensive to the ear than recessive sound or sound with suckouts. For some reason, the bright sound is fatiguing.
As to rolling off the highs, well, I don't have an answer for that, but my personal inclination is to go for something that serves the main body of recordings, which, after all, are most of what I listen to. But I'd rather do it in EQ than in the speakers. There was a time when EQ circuits could be sonically deleterious, but once you've made the transition to all digital, as I have, you can use digital EQ, which can actually do a better job of correcting tonal balance than a speaker can, and can be adjusted to accommodate various recordings and scenarios (stereo vs. multichannel, small hall vs. large one, live room vs. dead one, etc.).
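For the curious, the kind of digital EQ described here can be sketched with a standard peaking filter. This is a minimal illustration assuming NumPy/SciPy and the well-known RBJ "Audio EQ Cookbook" biquad; the 3.5 kHz center frequency is just an example cut in the presence region, not a prescription:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ-cookbook peaking biquad: boost or cut gain_db around f0."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

fs = 44100
b, a = peaking_eq(fs, 3500.0, -6.0, q=1.0)   # cut 6 dB around 3.5 kHz

t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 3500.0 * t)        # test tone at the cut frequency
out = lfilter(b, a, tone)

# Measure steady-state attenuation (skip the filter's brief transient)
atten_db = 20 * np.log10(out[fs // 2:].max() / tone[fs // 2:].max())
print(round(atten_db, 1))                    # → -6.0
```

The same coefficients could be recomputed on the fly for each scenario, which is what makes the digital version so much more flexible than a fixed voicing built into a speaker.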
"As to rolling off the highs, well, I don't have an answer for that, but my personal inclination is to go for something that serves the main body of recordings, which, after all, are most of what I listen to. But I'd rather do it in EQ than in the speakers. There was a time when EQ circuits could be sonically deleterious, but once you've made the transition to all digital, as I have, you can use digital EQ, which can actually do a better job of correcting tonal balance than a speaker can, and can be adjusted to accommodate various recordings and scenarios (stereo vs. multichannel, small hall vs. large one, live room vs. dead one, etc.)."
I don't have the skills or experience to accurately EQ playback; surely some do, and some don't but think they do. It's all good and tasty stuff, but until the recording contains information on how these adjustments are supposed to be made, all bets are off. With feedback we can compare the media content to what is arriving at the listener's ears and make adjustments as needed to correct for system (including room) influences. If we knew more about the content of each channel we could do more with it; in fact, we could do best fits for the actual playback environment and optimize the results to get the best performance based on x number of speakers and how they are configured in the playback space -- all of it based on recorded content!
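The compare-and-correct loop described above can be sketched in miniature. This is a toy assuming NumPy; the 4-tap "room" is a made-up coloration standing in for a measured room/system response:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "room + system" coloration, standing in for a measured response
room_ir = np.array([1.0, 0.4, 0.2, 0.1])

media = rng.standard_normal(4096)       # the known media content
at_ear = np.convolve(media, room_ir)    # what a mic at the listening seat hears

# Feedback step: estimate the system's transfer function by comparing the
# known content to the captured sound (spectral division), then undo it
n = 8192                                # FFT size >= len(at_ear)
H = np.fft.rfft(at_ear, n) / np.fft.rfft(media, n)
corrected = np.fft.irfft(np.fft.rfft(at_ear, n) / H, n)[:4096]

print(np.allclose(corrected, media))    # → True: room influence removed
```

In practice one would invert only a smoothed version of the measured response rather than every wiggle, but the comparison step is exactly this: known content in, measured sound out, correction inferred from the difference.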
I think another benefit of digital is that the "quality" of playback gear, at least the non-digital parts, is going to become less important. Of course robustness will be key, but gear that sounds dreadful in a non-digital system may really shine once it's put into a feedback-based system configured on the parameters of the recording.
But for now my main rigs are going to remain minimalist and mostly analog, save for CD players and tuners. I am currently importing CDs and LPs onto a hard drive and have been converting my portables and car stereos to source from the iPod, and we (my wife and I) are going to put a new video system in the living room as part of our home remodeling. That system is likely to contain a dedicated music server, so I will have a purely digital-based system soon!
But until the format gives us more information, I'm going to let the pros adjust the EQ and continue to make my own adjustments by careful selection of gear.
Yeah, it's difficult to adjust EQ on a recording-by-recording basis. There's a lot to be said for old-fashioned tone controls, which may have been coarse in their action but were at least able to render some recordings listenable by correcting massively out-of-kilter balance. As things stand, I use the (less than ideal) graphic equalizer in foobar2000 to make truly ghastly recordings listenable. Often this means pulling down the violin-screech range, which is easily done once you know the frequency range that's affected. But in the future, I'd like to see two capabilities: per-recording EQ settings that are remembered or stored in the file, like ReplayGain values, and deconvolution, the technique that was used to remove the horn resonances from the Caruso recordings -- essentially comparing the tonal balance of a recording to a better recording of the same work and inferring the response abnormalities from it. Perhaps some day it will be possible to automatically analyze a recording on the basis of the instrumentation and compare it to a model of how such a recording should sound.
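The first capability, per-recording EQ remembered the way ReplayGain values are, could be as simple as a sidecar file next to each track. Everything here (the `.eqjson` extension, the field names) is invented for illustration; a real player would more likely use the file's own tag format:

```python
import json, os, tempfile

def save_eq(track_path, bands):
    """Remember per-recording EQ bands in a sidecar file next to the track."""
    with open(track_path + ".eqjson", "w") as f:
        json.dump({"eq_bands": bands}, f)

def load_eq(track_path):
    """Recall the stored bands; an absent sidecar means 'leave it flat'."""
    try:
        with open(track_path + ".eqjson") as f:
            return json.load(f)["eq_bands"]
    except FileNotFoundError:
        return []

track = os.path.join(tempfile.gettempdir(), "mahler2.flac")

# Pull down the violin-screech range for this one ghastly recording
save_eq(track, [{"freq_hz": 3500, "gain_db": -4.0, "q": 1.4}])
print(load_eq(track))   # → [{'freq_hz': 3500, 'gain_db': -4.0, 'q': 1.4}]
```

The player would apply the stored bands at playback time and leave everything else flat, so one bad recording's correction never colors the rest of the library.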
"But in the future, I'd like to see two capabilities: per-recording EQ settings that are remembered or stored in the file, like ReplayGain values, and deconvolution, the technique that was used to remove the horn resonances from the Caruso recordings -- essentially comparing the tonal balance of a recording to a better recording of the same work and inferring the response abnormalities from it. Perhaps some day it will be possible to automatically analyze a recording on the basis of the instrumentation and compare it to a model of how such a recording should sound."
This type of EQ is what Andrew Rose claims to be using for his historical recordings.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
That is brilliant stuff! I'm having a hard time tearing myself away.
"Perhaps some day it will be possible to analyze automatically a recording on the basis of the instrumentation, and compare that to a model of how such a recording should sound."
That's asking a lot, but I don't think it's out of the range of possibilities. At this point I'm thinking venue or spatial information could be much better served by digital. Using transfer functions, one could store information that maps venue, seating and microphone information into the playback environment. Given a recording from a venue, one could choose a desired seating position and expect a reasonable sonic representation of what it actually sounds like in that seat. Instead of just left/right channel information, raw microphone feeds could be included as well. The possibilities of digital and beyond-stereo are endless.
The natural recordings clearly can benefit but IMO these kinds of things really open up a new realm for studio recordings. Not to mention a whole variety of configurability and personalization for the end user.
I remember seeing that proposed -- predicted, actually -- back in the '70s. The essential idea was that a recording would contain a mathematical model of the hall. The technology now exists to do it; a couple of Japanese companies have made acoustical models of major concert halls. You could, as you imply, begin with two-microphone stereo and "fill in" the sound from the rest of the hall at the time of reproduction. (Something like that is already being done at a primitive level with surround releases that use synthesized reverberation. This, I think, can offer superior results insofar as the directionality of cardioid microphones makes it difficult to isolate the reverberant from the direct sound when making a recording. It's a case in which purist recording techniques are actually inferior.) Or, as you also point out, you could begin with a multitrack recording and do a more complete reconstruction of the hall acoustics.
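Mechanically, "filling in the hall at reproduction time" amounts to convolving the recording with an impulse response for a chosen seat. A toy sketch assuming NumPy; the decaying noise tail is a crude stand-in for a measured or modeled hall impulse response:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 8000

# Crude per-seat hall model: direct sound plus a decaying diffuse tail.
# A real system would store measured/modeled IRs for many seats and let
# the listener pick one, which is how "choose your seat" playback works.
n_ir = fs                                   # 1 second of reverberation
hall_ir = np.zeros(n_ir)
hall_ir[0] = 1.0                            # direct sound
tail = rng.standard_normal(n_ir - 200)      # tail starts after 25 ms
hall_ir[200:] = 0.1 * tail * np.exp(-3.0 * np.arange(n_ir - 200) / fs)

dry = rng.standard_normal(fs // 2)          # stand-in for a close-miked track
wet = np.convolve(dry, hall_ir)             # the hall, added at playback

print(len(wet) - len(dry))                  # → 7999: the reverb tail rings on
```

Swapping in a different `hall_ir` changes the venue or the seat without touching the recording itself, which is exactly the appeal of storing the hall model separately from the program material.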
Ideally, I think, you'd want to use an array to accurately reconstruct the wavefront of the image coming from the front; surround-sound reproduction is, I think, less demanding, in the case of music reproduction anyway (movie and game effects are a different matter). As in the stereo case, the reproducing array could be fed with an array of microphones, or with a synthesized sound field generated from a multitrack recording. One fellow made a clever proposal to use the directionality of the front array to generate reflections from the other five surfaces, obviating the need for surround speakers and perhaps allowing a more accurate reconstruction.