Audio Asylum Thread Printer
In Reply to: RE: speak for yourself! posted by Ralph on April 26, 2016 at 08:20:54
You are wrong.
It amazes me sometimes when people try to tell me I'm wrong about things they've not even tried themselves.
Get a recorder and start recording. Release a few of your recordings- put them on LP and CD. Then see if you can still tell me what you are saying now.
"It amazes me sometimes when people try to tell me I'm wrong about things they've not even tried themselves."
Do you even know what I am talking about?
"Get a recorder and start recording. Release a few of your recordings- put them on LP and CD. Then see if you can still tell me what you are saying now."
Apparently you don't even know what is being discussed here. One does not need to do any recording to understand the nature of live acoustic sound and the human perception of it. The fact is that in real life, aural stimuli alone do not give our brains sufficient information to localize sound sources. Visual cues give us more information on the localization of sound sources. This is a well-proven scientific fact. Your personal anecdotes about what you believe you are perceiving don't refute that.
Your recordings are utterly irrelevant since we are not talking about recording and playback.
Here is a link to a little science-based information on the subject.
And here are some key points.
"localizing a sound source is a highly complex, computational process that takes place within the brain. Because auditory space cannot be mapped onto the cochlea in the inner ear in the same way, the direction of a sound source has to be inferred from acoustical cues generated by the interaction of sound waves with the head and external ears (Blauert 1997; King et al. 2001). The separation of the ears on either side of the head is key to this, as sounds originating from a source located to one side of the head will arrive at each ear at slightly different times. Moreover, by shadowing the far ear from the sound source, the head produces a difference in amplitude level at the two ears. The level of the sound is also altered by the direction-specific filtering by the external ears, giving rise to spectral localization cues.
By themselves, each of these spatial cues is potentially ambiguous and is informative only for certain types of sound and regions of space. Thus, interaural time differences are used for localizing low-frequency sounds (less than approx. 1500 Hz in humans), whereas interaural level differences are more important at high frequencies. For narrow-band sounds, both binaural cues are spatially ambiguous, since the same cue value can arise from multiple directions known as 'cones of confusion' (Blauert 1997; King et al. 2001). Similarly, spectral cues are ineffective unless the sound has a broad frequency content (Butler 1986), in which case they enable the front-back confusions in the binaural cues to be resolved. Spectral cues also provide the basis for localization in the vertical plane and even allow some capacity to localize sounds in azimuth using one ear alone (Slattery & Middlebrooks 1994; Van Wanrooij & Van Opstal 2004). However, because this involves detection of the peaks and notches imposed on the sound spectrum by the external ear, the sound must have a sufficiently high amplitude for these features to be discerned (Su & Recanzone 2001). Moreover, under monaural conditions, these cues no longer provide reliable spatial information if there is uncertainty in the stimulus spectrum (Wightman & Kistler 1997). In everyday listening conditions, auditory localization cues are also likely to be distorted by echoes or other sounds. Accurate localization can therefore be achieved and maintained only if the information provided by the different cues is combined appropriately within the brain."
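The interaural time difference described in the quote above can be put in rough numbers. A minimal sketch, using the simplified Woodworth spherical-head model (the head radius and the model itself are textbook assumptions, not anything from this thread):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at ~20 C
HEAD_RADIUS = 0.0875    # m, a commonly assumed average head radius

def itd_woodworth(azimuth_deg: float) -> float:
    """Interaural time difference in seconds for a source at the given
    azimuth (0 = straight ahead, 90 = directly to one side), using the
    simplified Woodworth model: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly to one side produces only ~0.66 ms of delay;
# a source straight ahead produces none, which is why binaural cues
# alone leave whole "cones of confusion" unresolved.
print(f"{itd_woodworth(90) * 1e6:.0f} us")  # ~656 us
print(f"{itd_woodworth(0) * 1e6:.0f} us")   # 0 us
```

The sub-millisecond scale of these delays is the point of the quoted passage: the brain has to infer direction from very small, individually ambiguous cues, which is why it also leans on vision and head movement.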
"Accurate auditory localization relies on non-acoustic factors too. Because the coordinates of auditory space are centred on the head and ears, information must be provided by the vestibular and proprioceptive senses about the orientation and motion of these structures (Goossens & Van Opstal 1999; Vliegen et al. 2004). Moreover, a congruent representation of the external world has to be provided by the different senses, so that the objects registered by more than one modality can be reliably localized and identified. In the case of vision and hearing, this means that activation of a specific region of the retina corresponds to a particular combination of monaural and binaural localization cues values."
"Although sound sources can obviously be localized on the basis of auditory cues alone, localization accuracy improves if the target is also visible to the subject (Shelton & Searle 1980; Stein et al. 1989). This is an example of a more general phenomenon by which the central nervous system can combine inputs across the senses to enhance the detection, localization and discrimination of stimuli and speed up reactions to them."
"visual localization is normally more accurate than sound localization"
Thanks for the text.
I still recommend you do as I suggested. You will find that without seeing the performers, you can still tell where everyone is. A recording I did many years ago was a rather large production. The producer asked me what I thought of the chorus. I told her the exact location of a soprano that was singing a bit too loud, even though I could not see her. She knew exactly which one I was talking about.
I've played in a number of orchestras and recorded them. I have noticed that the better recording engineers are very sensitive to the placement of musicians- this is crucial to placing the mics properly for a good recording, which should be a documentation of the musical event, heard from a good perspective that a human in the same location would hear.
Go to a live concert and get a good seat. Close your eyes. Notice that there is indeed a soundstage. Turn your head to sort out where the sounds are coming from if you've any doubt. I do this often at shows to see how a real life performance stands up to what we are doing with recordings and also playback as I am an equipment manufacturer too and am very interested in image location. A good seat will give you lots of image location information- just ask a blind person!
While I regard the information that you quoted as real, your extrapolation is not.
> > Go to a live concert and get a good seat.> >
I've done that. Quite a bit, actually.
> > Close your eyes. Notice that there is indeed a soundstage.> >
Done that too. Of course, I never said there was no soundstage, so I'm not sure what your point is.
> > Turn your head to sort out where the sounds are coming from if you've any doubt.> >
Turn my head? Really? Why should I need to turn my head? What happens when I turn my head when I am listening to two-channel stereo? Think about that. You don't need to turn your head to get some very precise imaging with two-channel stereo. If the real thing is as precise, why would I need to turn my head?
> > I do this often at shows to see how a real life performance stands up to what we are doing with recordings and also playback as I am an equipment manufacturer too and am very interested in image location.> >
Of course, we all do it at live concerts. We turn our heads, we use our eyes. We use our brains. That is why our ability to locate sound sources is as precise as it is: we do things we can't do while listening to two-channel stereo. And that is why two-channel stereo is actually more precise in its aural imaging than real life without all those other tools we use to locate sound sources. It has to compensate to be the equal of the overall live experience. But that overall live experience has a lot more than what we get from two-channel stereo.
> > A good seat will give you lots of image location information- just ask a blind person!> >
I really don't need more anecdotal evidence on that subject. The science on the subject is more than good enough. And here is what it says on the subject...
"Accurate auditory localization relies on non-acoustic factors too. Because the coordinates of auditory space are centred on the head and ears, information must be provided by the vestibular and proprioceptive senses about the orientation and motion of these structures (Goossens & Van Opstal 1999; Vliegen et al. 2004). Moreover, a congruent representation of the external world has to be provided by the different senses, so that the objects registered by more than one modality can be reliably localized and identified. In the case of vision and hearing, this means that activation of a specific region of the retina corresponds to a particular combination of monaural and binaural localization cues values."
"Although sound sources can obviously be localized on the basis of auditory cues alone, localization accuracy improves if the target is also visible to the subject (Shelton & Searle 1980; Stein et al. 1989). This is an example of a more general phenomenon by which the central nervous system can combine inputs across the senses to enhance the detection, localization and discrimination of stimuli and speed up reactions to them."
"visual localization is normally more accurate than sound localization"
> > While I regard the information that you quoted as real, your extrapolation is not.> >
What extrapolation? Those are direct quotes.
Extrapolation below.
You most definitely can't hear the specific positioning of most of the musicians from front row center. The only specificity you get in aural imaging from front row center would be from soloists near the front and center.
(emphasis added). Anecdotal or not, my experience and that of others belies the statement above. I agree with the quotes you have provided, but I simply don't agree with the extrapolation of yours above that started this; no amount of discussion is likely to change that.
You are right. No discussion will ever settle this. If you ever want to put it to the test, should the opportunity arise, I'd be willing to bet money you could not accurately locate individual instruments in an orchestra by sound alone, other than the ones I mentioned. Unless you have been led into a concert hall with a blindfold on, all of your experience on the matter has been aided by seeing the positioning of the musicians.
So why does SS gear reproduce recorded images better than toobs? Is it the wider bandwidth?
Edits: 04/27/16
Although this really isn't a tubes/transistor thread (like there's a shortage of those or something...).
Soundstage, generally speaking, has traditionally been a strength of tube amps over solid state.
One thing about this is that tubes seem to reproduce more low-level detail- and ambient information is low-level detail. When that low-level detail is removed, the result can be that the images stand out in starker contrast, which is because they do. However, once you get used to the added ambient information surrounding each musical image, you realize that the tube amps are making a more 3D soundstage, instead of one with cardboard paste-ups.
I am exaggerating here of course- and struggling with the best way to explain what I hear.
Tube amps don't by definition have less bandwidth. The limiting factor in most of them is the output transformer- the tubes themselves have no trouble making bandwidth (if they did, television would not have been possible in the 1950s and '60s). Our amps have no output transformer and so have much more bandwidth- both low and high.
Ralph :)
Apologies for egging you on like that, I couldn't resist :) And yes, toobies tend to do the image and halo around instruments better than most; good SS does it too, Spectral for example.