Audio Asylum Thread Printer
In Reply to: Re: Some morsels posted by Middleground on November 12, 2002 at 10:43:15:
Let's say you have two coincident pure sine wave tones generated simultaneously - a 30,000 Hz signal and a 31,000 Hz signal. There will actually be four tones produced - the original two, plus (at greatly reduced volume) a tone that is the sum of the two (30,000 Hz + 31,000 Hz = 61,000 Hz), plus (at greatly reduced volume) a tone that is the difference of the two (31,000 Hz - 30,000 Hz = 1,000 Hz). That difference signal is called a "beat". This is at least one way in which harmonics above the normal frequency range of human hearing can affect the audible range.
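Not part of the original post, but the beat described above falls out of a basic trig identity, and you can check it numerically. This is a minimal Python sketch (the sample rate and duration are my own assumptions, chosen so both tones and their 61 kHz sum are representable):

```python
import numpy as np

# Two pure tones from the example: 30 kHz and 31 kHz,
# sampled well above the Nyquist rate for the 61 kHz sum frequency.
fs = 200_000                      # sample rate, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal
f1, f2 = 30_000, 31_000

mixed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Sum-to-product identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2),
# i.e. a 30,500 Hz carrier whose amplitude warbles at (f2 - f1)/2 = 500 Hz.
# The envelope peaks twice per cycle, so the perceived beat rate is
# f2 - f1 = 1,000 Hz.
envelope_form = (2 * np.sin(2 * np.pi * (f1 + f2) / 2 * t)
                   * np.cos(2 * np.pi * (f2 - f1) / 2 * t))

assert np.allclose(mixed, envelope_form)
print("beat (difference) frequency:", f2 - f1, "Hz")
print("sum frequency:", f1 + f2, "Hz")
```

Note the identity says the linear mix is an amplitude-modulated carrier; an actual separate sum or difference *tone* only appears when something nonlinear (the ear, an amplifier) processes the mix.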
Whether or not this translates into a requirement for playback systems with response well into the ultrasonic range, I cannot say. Theoretically the audible beat frequencies would have been captured in the original recording, and so would not have to be re-created by the physical interaction of ultrasonic frequencies. Personally I think bandwidth well into the ultrasonic range is desirable, but probably not cost-effective to pursue.
Below is a link to a site that explains beats much better than I did.
Follow Ups:
Further to what I did not find in the article: these beat frequencies are not heard as a distinct difference tone, but are sensed as warbling beats as the phase relationship alternates between constructive and destructive interference.
As such, any reinforcement of the fundamental through beat frequencies appears to be a function of our ability to process phase information very acutely. Also, beat frequencies can arise between tones produced by different instruments, such as a 440 Hz tone on a piano and a 450 Hz tone from a guitar. The beat is sensed as a warbling at 10 beats per second. I understand that if 440 Hz is played in one ear and 450 Hz in the other, the brain will create a 10 Hz tone; this is called a binaural beat. It is interesting that the brain can create the fundamental of a tone's 44th and 45th harmonics, and do so through phase relationships. It is theorized by some that our ability to process extremely complex phase information is largely responsible for sound location detection. This can be taken a step further to help describe the importance of omnidirectional bass frequencies and proper phase relationships, and their contribution to the sense of space from our audio systems.
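Not from the original post, but the point that the 10 Hz warble is perceptual rather than physical can be checked with a spectrum. This Python sketch (sample rate and duration are my assumptions) mixes 440 Hz and 450 Hz tones and confirms the spectrum contains essentially no energy at 10 Hz - the beat is an amplitude envelope, not a tone in the air:

```python
import numpy as np

# 2 seconds of a 440 Hz + 450 Hz mix (the piano/guitar example).
fs = 8000                        # sample rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 450 * t)

spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

# Energy near 10 Hz versus energy at one of the two real tones.
near_10 = spectrum[(freqs > 5) & (freqs < 15)].max()
at_440 = spectrum[np.argmin(np.abs(freqs - 440))]

# The ratio is vanishingly small: the 10-per-second beat exists only
# as the loudness envelope our ears (or brain, binaurally) extract.
print("10 Hz energy relative to 440 Hz peak:", near_10 / at_440)
```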
Producing a 32 Hz tone will also produce a harmonic structure of 64, 96, 128, 160 Hz, etc. The amplitude relationships of the fundamental and harmonics will create their own unique set of phase/beat frequencies for a given space, which our brains interpret as that space. Now, if we record that tone and play it back through our audio system, what happens to the original amplitude and phase relationships? Our own listening room's resonant frequencies will change the amplitude relationships, and the electronics will do the same to both amplitude and phase relationships. On top of this, our brains are forced to do double spatial duty: not only interpreting the space of the recording, but also interpreting the low frequency information to give us a sense of our own listening room's spatial character.
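Not in the original post, but the two ideas above connect neatly: adjacent harmonics of a tone are always spaced by the fundamental, so the beat rate between any two neighbors "points back" at the fundamental - the same mechanism as the 440/450 Hz example. A tiny Python check:

```python
# Harmonic series of the 32 Hz example.
fundamental = 32  # Hz
harmonics = [fundamental * n for n in range(1, 6)]   # 32, 64, 96, 128, 160

# The spacing between consecutive harmonics is always the fundamental,
# so beats between neighboring harmonics warble at the fundamental's rate.
spacings = [b - a for a, b in zip(harmonics, harmonics[1:])]
print(harmonics)
print(spacings)
```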
I'm writing off the top of my head here, in somewhat uncharted waters for me, so I don't know if this is correct or not - but it sure explains why producing low frequencies properly (for what they naturally are) is deemed so damned difficult and expensive.