High Efficiency Speaker Asylum

Need speakers that can rock with just one watt? You found da place.


Need help on time alignment question.......


Posted on October 31, 2002 at 18:20:46
Norman Bates
Audiophile

Posts: 563
Joined: August 9, 2002
So using 6dB or 24dB crossovers, align the centers.

The drivers are aligned at the crossover point, but not at all frequencies?

I remember that with an 80Hz 24dB/octave active crossover there is 11ms of delay, and with 24dB/octave at 140Hz there is only 5-6ms of delay. There would be less delay using less steep crossover slopes or higher crossover frequencies.
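Those delay figures can be sanity-checked numerically. The sketch below assumes a 4th-order (24dB/octave) analog Butterworth low-pass; the exact numbers depend on the actual topology (Butterworth vs. Linkwitz-Riley, and whether the high-pass section's delay is counted too), so treat it as a rough check rather than the figures quoted above:

```python
import numpy as np
from scipy import signal

def lowpass_group_delay_ms(order, fc_hz, f_eval_hz):
    """Group delay (ms) of an analog Butterworth low-pass at f_eval_hz."""
    wc = 2 * np.pi * fc_hz
    b, a = signal.butter(order, wc, btype="low", analog=True)
    # Evaluate phase on a dense grid, then take -dphi/dw numerically.
    w = np.linspace(2 * np.pi * 1.0, 2 * np.pi * 2 * fc_hz, 20000)
    _, h = signal.freqs(b, a, worN=w)
    gd = -np.gradient(np.unwrap(np.angle(h)), w)  # seconds
    return 1000 * np.interp(2 * np.pi * f_eval_hz, w, gd)

print(lowpass_group_delay_ms(4, 80, 40))    # delay an octave below an 80Hz crossover
print(lowpass_group_delay_ms(4, 140, 70))   # higher crossover frequency -> less delay
```

As the post says, the lower crossover frequency carries proportionally more delay; for the same filter shape, delay scales as 1/fc.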

At the crossover, the phase of one driver goes up and the other goes down. Just below the crossover the two are out of phase, and the difference then approaches zero.

Help!

 

Phase, delays and offset baffle spacing, posted on October 31, 2002 at 21:57:32
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hello Norman!

You wrote:
[about adjacent speakers in a multi-way loudspeaker system]

>> The drivers are aligned at crossover point. But not at all frequencies?

That's right, they're not. Nor are they aligned at all listening positions, even if aligned on-axis at the crossover point. Even a single driver moves around in time, exhibiting movement and "jitter" of phase so one cannot expect all frequencies to be generated at the same time or same apparent position from a single electro-dynamic speaker motor. But a good design will minimize the really troublesome issues that can arise.

Check out the posts called "Phase, delays and offset baffle spacing" and "Hi Fi by Design." There are also a pair of documents available online that show the behaviour of various networks and various diaphragm placements in relation with one another. One is a crossover document that concentrates mainly on the electrical response of various crossover circuits, and the other is an offset document that focuses mainly on the issues surrounding diaphragm placement or baffle offset. This second paper shows very clearly what happens as speakers are moved in relation to one another, or as the listener moves relative to the speaker. Either movement changes the parallax between sound sources and the listening position, so they act similarly.

There have been lots of discussions about this issue here and on the π Speakers forum, so you might do a search on both forums for posts about "time alignment."

Take care!

Wayne Parham

 

Re: Need help on time alignment question......., posted on November 1, 2002 at 02:59:15
Tom Brennan
Audiophile

Posts: 5853
Joined: January 2, 2000
Norman---John Hilliard, Zeus of The Great Horn Gods who hurls thunderbolts from his Olympus over Southern California into homes and theaters throughout the world (and project manager on the Shearer Horn and designer of the VOTs), thought that physical alignment was worthwhile. It won't hurt. That's as technical as I'm willing to get on this.

 

Re: Need help on time alignment question......., posted on November 1, 2002 at 08:19:38
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi

"How to most favorably align your drivers and crossover" is perhaps one of the biggest cans of worms it is possible to open.
Unlike all other components in the recording and reproduction chain, a loudspeaker is NOT expected to preserve the waveshape of the input signal or the "time" information it contains.
It is argued that "you can't hear that," but I think that, much like low bass, where manufacturers say how far down you need to go is governed more by what their products can do than by what one can hear.
The "can you hear time?" debate aside, here is what one has.

If one takes a driver, places a microphone 1/64 inch away and then inputs a signal, one finds that there is a length of time between when the signal is applied and when the driver begins to make sound.
This delay is made of three things: first and largest is the low-pass filter which the coil inductance and resistance form; added to that is the delay caused by the sound propagation speed in the radiator; and last, the time delay caused by the 1/64 inch air path.
All drivers have delay, with the larger drivers having a longer delay (according to the VC inductance and resistance).
A big heavy 15-inch woofer may have a delay on the order of 3-5 ms or more, but curiously mass plays NO PART in the delay or speed of response, only VC inductance and resistance.
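The inductance/resistance part of that delay can be sketched as a first-order RL low-pass, whose group delay at DC is simply L/R. The Le and Re values below are hypothetical round numbers, not taken from any particular driver, and a real voice coil's "semi-inductance" behaves less ideally than this:

```python
import numpy as np

def rl_group_delay_ms(L_h, R_ohm, f_hz):
    """Group delay of the 1st-order low-pass formed by voice-coil
    inductance L and resistance R: tau(w) = (L/R) / (1 + (w*L/R)^2)."""
    tau0 = L_h / R_ohm
    w = 2 * np.pi * np.asarray(f_hz, dtype=float)
    return 1000 * tau0 / (1 + (w * tau0) ** 2)

# Hypothetical big-woofer values (illustrative only): Le = 3 mH, Re = 6 ohm
print(rl_group_delay_ms(3e-3, 6.0, 0))  # 0.5 ms at DC; falls off with frequency
```

Note that mass appears nowhere in this expression, which is the point being made: the electrical L/R time constant, not the moving mass, sets this component of the delay.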

A VC driver (in the region where it is well behaved) also has a "phase response", that is, "how many degrees away from the electrical signal is the acoustic output when all the fixed delays are removed".
Since a driver has two reactive elements (mass and compliance), it forms a 2-pole system, which confines the phase shift to within +90 and -90 degrees (as the equivalent circuit also shows).
For an acoustically small source like a woofer, the mid-band phase is around -90 degrees if the response is flat, and wanders to around +90 at each end of the range.
This makes it a little more difficult, because that -90 degree lag represents a different amount of time delay or distance at every frequency.
In time, this means that the frequencies produced are re-arranged in time compared to the input signal, emerging with the highest F's delayed the least and the lowest ones the most.
As a result, all direct-radiating woofers that have flat response are automatically eliminated from the possibility of preserving time, as the driver assigns a delay which is frequency dependent.
Richard Heyser was the first one I know of to identify all this (with respect to time), and his preferred way of seeing things was to think of the delay as the driver's acoustic position moving to the rear an equivalent distance.

An efficient horn and electrostatic drivers (when small) have an acoustic phase shift which is more desirable: a perfect horn and ESL have an acoustic phase of zero degrees, that is, there is only a fixed delay, not one dependent on frequency.
These drivers have an acoustic position which does not move around and are, as far as time goes, closer to preserving the input signal/waveshape (if they also have flat response).

A further complication is that all filters have delay as well; the total delay depends on filter order, and the slope of the delay depends on the filter's "Q".


To mate direct radiators, the best one can do is make them "align" in time so that while the result is not at zero degrees, at least there is no jump in acoustic phase or amplitude. With sine waves only, one can find a position every WL, or every 1/2 WL (if one flips polarity), as one moves the upper driver front to rear compared to the lower, where the signals add; all one needs is to be in phase.
If one wanted to retain the time information, there is only one of those locations which will result in the upper and lower portions of an interrupted or complex signal arriving at the same time (limited overall by the capacity of the drivers to preserve time).
That location will also give the best "energy vs. time," impulse, step response and so on, meaning it spreads the signal out in time the least compared to the other positions.
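The "a position every WL" idea can be sketched by summing two equal sources with a front-to-back offset. The 1600 Hz crossover frequency here is just an illustrative choice, and this steady-state sum is exactly why sine waves alone cannot distinguish the time-aligned position from the others:

```python
import numpy as np

C = 343.0  # speed of sound, m/s (room temperature)

def summed_level_db(f_hz, offset_m, flip_polarity=False):
    """Level of two equal sources summed at a far on-axis point,
    one delayed by the front-to-back offset."""
    phase = 2 * np.pi * f_hz * offset_m / C
    sign = -1.0 if flip_polarity else 1.0
    return 20 * np.log10(abs(1 + sign * np.exp(-1j * phase)))

f = 1600.0                  # hypothetical crossover frequency
wl = C / f                  # one wavelength, about 0.21 m
print(summed_level_db(f, 0.0))                          # in phase: +6 dB
print(summed_level_db(f, wl))                           # one WL back: +6 dB again
print(summed_level_db(f, wl / 2, flip_polarity=True))   # half WL plus a flip: +6 dB
```

All three positions sum identically for a steady sine, but only the zero-offset one delivers an interrupted or complex signal from both drivers at the same instant.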

Using a horn one can get closer to (my personal goal) the ideal of preserving time, as an efficient horn mid-band may have little, or at least much less, deviation from zero degrees, and so has an acoustic position which does not move as much or at all.
Now, offsetting the systems according to the fixed delays, one can usually find a combination of crossovers which allows the on-axis acoustic phase to have a seamless transition in time, phase and amplitude.

The problem now is that with separate horns there is a large undesirable interaction when the horns are producing the same tones (at a crossover), due to the large acoustic separation.
One way this shows up is if one moves to the side or up and down: if you move such that the path lengths between the upper and lower drivers change (with respect to your ears), there is a change in the response/phase.
For upper and lower drivers to add without any interference, they must be less than about 1/3 WL apart.
Unfortunately, for a horn to load ideally its mouth needs to be on the order of 1 WL in circumference, but if one had the HF horn coaxially in the LF mouth and used the HF horn down to its lowest practical F, one could still make a low-interference transition.

Our Unitys address this as well; they use only one horn. On these, the HF driver is most rearward, with the mid and low drivers (when used) mounted progressively forward.
That is done because as one moves forward on the horn, one finds the expansion rate slows (making it suitable for lower F's), and at the same time the front-to-back spacing also offsets the delays from the crossovers.
At the crossovers, the ranges are added within the 1/3 WL criterion, so there is no interference or position-related stuff out front.

A filter which is not commonly used but that is very useful is called an "allpass" filter; it has flat amplitude response but changing phase with F, i.e., a time delay.
By incorporating one of those in a crossover I am working on for an old product of ours, I have been able to get the acoustic phase to "around zero" from about 150 Hz to about 1600 Hz, which means that all 3 ranges of drivers have the same acoustic position in time and space, and that location moves little in that F range.
So far I can say that voices in particular sound "more real," but it is too early to say, as I am still working on a kink in the HF part.
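A first-order analog allpass, H(s) = (w0 - s)/(w0 + s), shows the basic behavior being described: unity magnitude everywhere, but phase that falls with frequency, which is a delay. The 500 Hz corner below is an arbitrary illustrative value, not the one in the crossover under discussion:

```python
import numpy as np
from scipy import signal

def first_order_allpass(f0_hz):
    """H(s) = (w0 - s) / (w0 + s): magnitude 1 at all frequencies,
    phase falling from 0 to -180 degrees, low-frequency group delay 2/w0."""
    w0 = 2 * np.pi * f0_hz
    return [-1.0, w0], [1.0, w0]   # numerator, denominator coefficients in s

b, a = first_order_allpass(500.0)
w = 2 * np.pi * np.logspace(1, 4, 500)   # 10 Hz to 10 kHz
_, h = signal.freqs(b, a, worN=w)
print(np.max(np.abs(np.abs(h) - 1.0)))   # magnitude deviation: essentially zero
tau0_ms = 1000 * 2 / (2 * np.pi * 500.0) # low-frequency delay, ~0.64 ms
print(tau0_ms)
```

Cascading sections like this lets a designer add frequency-shaped delay to one driver's feed without touching its amplitude response, which is how an allpass can pull the acoustic phase of a multi-way toward zero.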

If I were you or anyone else putting significant time and effort into building a set of speakers, I would get a copy of LspCAD, get a measuring system from which you can import measurements into LspCAD, and start fooling around.
It is imperative that you start with measurements of the drivers you're using, taken the way you're using them (as the enclosure etc. alters the results), and get familiar with everything. If you can, measure outside; that is a must for low frequencies.
Look at group delay; it is an indicator of position if one remembers that 0.88ms = 1 foot, and a step in group delay shows a different acoustic position.
Remember that while Butterworth, Bessel and so on are "normal" filters, there is an infinite variety available.
Remember it is the filter AND driver together which produce the responses you're adding in your listening room, and a program like LspCAD is a great way to deal with them.
Cheers,

Tom Danley

 

can the relative drop in level, posted on November 1, 2002 at 10:27:59
Sam P.


 
at the xover freq. when one driver is reversed be used to determine the final phase relationship between the two drivers? For example, my quasi-4 Pi Pro's measured output with both drivers wired positive is 76dB at 1570Hz, but only 60dB when the HF is reversed. The LP filter is 2nd-order BW, the HP is 3rd-order.
My previous SWAG was that they were around 90 degrees apart at the xover... if that were true, I don't think reversing the HF would give such a large dip... it seems the drivers in their normal polarity must be fairly close in phase at 1.6kHz after all.
Can the documented(?) result of a 16dB drop with reversed HF polarity be used to mathematically calculate the "normal" phase relationship present? Sam
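Sam's question has a closed-form answer if one assumes the two drivers contribute equal amplitude at the measurement frequency (a real measurement may not satisfy this, and unequal amplitudes limit how deep the reversed-polarity null can get):

```python
import numpy as np

def phase_from_level_drop_deg(drop_db):
    """Infer the phase difference between two EQUAL-amplitude drivers
    from the level change when one driver's polarity is reversed.
      normal:   |1 + e^{j*theta}| = 2*cos(theta/2)
      reversed: |1 - e^{j*theta}| = 2*sin(theta/2)
      drop_db  = 20*log10(cos(theta/2) / sin(theta/2))"""
    ratio = 10 ** (drop_db / 20.0)           # cot(theta/2)
    return np.degrees(2 * np.arctan(1.0 / ratio))

# Sam's measured 76 dB vs 60 dB: a 16 dB drop
print(phase_from_level_drop_deg(16.0))       # ~18 degrees
```

So under the equal-amplitude assumption, a 16dB drop works out to roughly 18 degrees, consistent with Sam's hunch that the drivers are nearly in phase; a true 90-degree separation would give a 0dB difference between the two polarities.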

 

Question, posted on November 1, 2002 at 21:56:23
Jon Risch
Bored Member

Posts: 6659
Joined: April 4, 2000
Contributor
  Since:
March 1, 1999
from another post of yours.

See:
http://www.AudioAsylum.com/forums/HUG/messages/35872.html



Jon Risch

 

John Hilliard and the IEEE, posted on November 1, 2002 at 23:49:09
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Tom!

Just for fun, I did a search of the High Efficiency forum for comments about John Hilliard, hoping to find references to that 1930s tap-dancing echo problem with the ~10 foot long horns and their attendant lengthy offset. There are a couple dozen posts about Eleanor Powell's shoes, and you're on about half of 'em. [smile]

A participant who calls himself "Traddles" even mentions the volume and page number of an IEEE Transactions publication that has Hilliard's notes in this regard. Seems he believed that the offset between subsystems should be limited to 3 feet, which corresponds to about 3ms.

It's kinda fun to look back through some of those old posts.

Wayne

 

Re: John Hilliard and the IEEE, posted on November 2, 2002 at 02:28:31
Tom Brennan
Audiophile

Posts: 5853
Joined: January 2, 2000
Wayne---Yeah, I love that story; it shows people who think this time-alignment stuff is a new-fangled thing that the Old-Timers were there long ago.

The results of the experiments showed inaudibility with delays of less than 3ms with a 500Hz crossover. Yet evidently this still rankled Hilliard, as one of his goals with the horn-reflex bass bin of the VOTs was to eliminate any offset.

 

Re: Question, posted on November 2, 2002 at 14:29:34
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi Jon

Sorry for the delay in reply.
I was contracted by Gary and another fellow to make transducers for an audio imaging system using ultrasonic heterodyning.
This is similar to the system that pops up every so often, most recently from Joe Pompei, the tech who worked with Gary "back then".
His comment was made at a lunch meeting. Being interested in "stereo hearing" myself, I asked a bunch of questions about the research he was doing. As I recall, his reference was to detecting a time difference between the ears, and he had to raise his sampling rate to 100 kHz before it had no impact on the results.
He is a nice guy and would probably respond if you wrote him; he was at Northwestern University, Evanston IL.

Here is a high frequency link which may be of interest.

http://www.cco.caltech.edu/~boyk/spectra/spectra.htm

Cheers,

Tom

 

this may be a stupid question but that's never stopped me before, posted on November 2, 2002 at 19:37:44
Wouldn't a single-driver horn have the fewest time-related distortion problems?

Could this be why single-driver fans are such fans? I have never heard one, I will state for the record.
Randy


It is only with the heart that one can see rightly. What is essential is invisible to the eye.
- Antoine De Saint-Exupery

 

Binaural delay and harmonic delay, posted on November 3, 2002 at 01:44:01
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi guys!

Just a quick note:

There's quite a difference between an ability to detect very small delays between ears and an ability to detect tiny delays of harmonics in relation to a fundamental. Said another way, phase between ears is quite a different matter than phase between a fundamental and its harmonics.

If I were to build a machine that needed to detect spatial information using binaural clues, it would use phase between sensor "ears." But it would not need to detect harmonic-to-fundamental relationships; in fact, that is something I would want a robot to disregard.

The reason I would want these things is that tiny differences in arrival times between binaural sensors give a clue as to the direction of the source. So that's an important ability, and the smaller the delay that is discernible, the better the resolution of direction that can be detected. That's why very small delays between ears are important, and discerning them is an important ability to develop.
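The scale of those interaural delays can be sketched with the simplest straight-line model. The 18 cm ear spacing is an assumed round number, and real heads add diffraction around the skull (the Woodworth model), so these are ballpark figures only:

```python
import numpy as np

C = 343.0           # speed of sound, m/s
EAR_SPACING = 0.18  # hypothetical head width in metres (illustrative)

def itd_us(angle_deg):
    """Interaural time difference in microseconds, simplified model:
    ITD = d * sin(theta) / c."""
    return 1e6 * EAR_SPACING * np.sin(np.radians(angle_deg)) / C

print(itd_us(90))   # source fully to one side: roughly 525 microseconds
print(itd_us(1))    # a 1-degree shift near centre: roughly 9 microseconds
```

Resolving source direction to a degree or so near the median plane therefore means resolving delays on the order of ten microseconds, which illustrates why sensitivity to very small between-ear delays matters.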

But tiny differences of harmonics relative to a fundamental are created by reflections in an environment. In a cluttered environment there are myriad reflections. That means that in "the real world" there are an infinite number of phase relationships that will develop, and if one were very sensitive to that, one would suffer from a form of sensory overload. So in this case, the ability one would develop would be to disregard such information, and to become insensitive to it.

Maybe that delves into psychoacoustics, maybe even a little bit into the evolution/creation (or creation by evolution) debate. But it's something to think about nonetheless. If the best machine algorithm would be to be very aware of sub-millisecond delays between binaural sensors but disregard such small delays of harmonics to their fundamental, one would think that biological machines would probably be pretty much made this way too.

Just a thought.

Wayne

 

Re: this may be a stupid question but that's never stopped me before, posted on November 3, 2002 at 11:32:43
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi

The answer is a qualified yes: a single point-source driver (one that is acoustically small compared to the WL being produced) that is operating below "breakup" will have a maximum acoustic phase deviation (output signal compared to input signal) of +90 to -90 degrees.
At and above breakup, this relationship is lost.
Mid-band, such a device would still have the ~90 degree lag, which prevents it from preserving time fully.
Since it is a single source, there are also no big lobes or elevated reflected sound levels in the room, as there aren't two sources interfering with each other.
On the other hand, with the exception of a headphone driver, it is not possible to make a single-driver loudspeaker that covers the 10-octave span and does it well. People often forget that our "audible range" (and I am not discussing <20Hz or >20kHz) is a fantastic bandwidth.
I mean, for those into radio, think of the difficulty in making a radio receiver that covered 20 MHz to 20,000 MHz.
How many frequency bands must it be divided into to get acceptable performance?


Personally, I imagine that people have differing amounts of sensitivity to various acoustic "problems".
In the case of distortion, it was found that there was about a 12 dB difference between those most sensitive to distortion and those least sensitive. Time or phase? I suspect it will be the same: different sensitivities depending on the person.
The fact that horns can "preserve time" in the mid-band but are often "less flat" than direct radiators may explain the popularity; there does seem to be a pretty distinct line between those who like horns and those who like small point sources.
Cheers,

Tom


 

Re: Binaural delay and harmonic delay, posted on November 3, 2002 at 13:08:06
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002

>> Just a quick note:
>>
>> There's quite a difference between an ability to detect very small delays between ears and an ability to detect tiny delays of
>> harmonics in relation to a fundamental. Said another way, phase between ears is quite a different matter than phase
>> between a fundamental and its harmonics.

Yes, the question that was asked was in what context did the researcher find that such a short period was detectable.
In my answer I should have said it took a 200 kHz sampling rate, not 100 kHz.

>> If I were to build a machine that needed to detect spatial information using binaural clues, it would use phase between
>> sensor "ears." But it would also not need to detect harmonic to fundamental relationships, in fact, that would be something
>> I would want a robot to disregard.

Why? All the information would be useful for identifying the source angle.


>> The reasons I would want these things are that tiny differences in arrival times between binaural sensors give a clue as to
>> direction of the source. So that's an important ability, and the smaller the delay that is discernable, the better the resolution
>> of direction that could be detected. So that's why very small delays between ears are important and discerning them is an
>> important ability to develop.
>>
>> But tiny differences in harmonics to a fundamental are created by reflections in an environment.

No, your assumption is not correct here: a reflection generally delays all components equally; there is no shifting of acoustic phase. The kind of phase shift a speaker causes is one where the "path length" is different for every frequency.

>> So in a cluttered environment, there are myriad reflections. That means that in "the real world" there are an infinite
>> number of phase relationships that will develop, and if one were to be very sensitive to that, then they would suffer from
>> a form of sensory overload. So in this case, the ability one would develop would be to disregard such information, and to
>> become insensitive to it.

On the contrary, one can develop excellent stereo hearing perception in real life, vastly more acute than with any recording/reproducer.
This is partly based on the fact that in real life, source sounds are not contaminated in time the way loudspeakers spread things out.
While loudspeakers can give an excellent stereo effect, they fall far short of real-life stereo hearing.


>> Maybe that delves into psychoacoustics, maybe even a little bit into the evolution/creation (or creation by evolution)
>> debate. But it's something to think about nonetheless. If the best machine algorithm would be to be very aware of
>> sub-millisecond delays between binaural sensors but disregard such small delays of harmonics to their fundamental, one
>> would think that biological machines would probably be pretty much made this way too.

Don't you think that if "the designer" went to the trouble to develop such a keen sense of time for stereo hearing, that skill would be available for mono too?
It is your assumption that it is not.
On the other hand, as you know, time is one of my areas of interest. This weekend I was able to try a prototype crossover on an old product of ours (the TD-1), a 3-way Unity design.
With this I was able to get the measured acoustic phase to be around zero degrees from 200 Hz to 4 kHz, with a rise to 360 degrees positive at 20 kHz (showing an advancing position of ~5/8 inch).
This means that in that range, all 3 sets of drivers are at the SAME point in time AND that position does not move as a function of frequency.
As it should when the acoustic phase is zero, in that frequency range the speaker reproduces a square wave quite nicely.

This is the first time I have heard a speaker which preserved time to this degree (able to reproduce a complex input waveshape over a significant bandwidth), and I would describe the effect like this: it is more than subtle; it makes voices and percussive things sound more real.
A-B'd against the same product with a normal xo, with the new one it is much harder to tell how far away the speaker is or how big the source is, if that makes sense; in exchange, more of the "space" of the recording environment fills the image.
Using it as one channel of a stereo pair, the effect is very noticeable.
At the moment my listening system is a pair of boxes called a Trik, originally designed as a loud hi-fi stage monitor (there is a pair on the stage for the Tonight Show). This box has a 60x60 Unity horn (4 mids and 1 compression driver) and four 8-inch woofers at the mouth.
In time, it is our best-sounding product, and it sounds darn good. I have a pair of those on end on top of subwoofers.
Using the prototype TD-1 on one side of the stereo simply makes the Trik sound inferior.
I am going to try to get a second TD-1 going this week; I am keen to hear a pair.

I have had a feeling all along that this would matter, and I hear it; the question is, do others hear it too? Others who are not emotionally tied to the work, that is. Time will tell.
Maybe I'll see if Irish Tom wants to take a drive north and see what he thinks about the sound "when waveshape is preserved".
Cheers,

Tom


 

Re: Binaural delay and harmonic delay, posted on November 3, 2002 at 14:07:37
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Tom!

I wrote:

"If I were to build a machine that needed to detect spatial information using binaural clues, it would use phase between
sensor "ears." But it would also not need to detect harmonic to fundamental relationships, in fact, that would be something
I would want a robot to disregard."

And you replied:

>> Why? all the information would be useful for identifying the
>> source angle.

No. The phase difference between one ear and the other will give direction by triangulation. But the phase difference of high-frequency components scattered from their fundamentals is an indication of clutter in the environment. So source-angle detection is actually made more difficult by scatter, and scatter is something that should be disregarded in determining the sound source. What the amount of phase scatter tells us is the amount of clutter in the environment.

So it stands to reason that for a creature to survive, the direction of a potential predator is much more important than the amount of clutter in the area. Maybe for a small creature that eats bugs, the "clutter" becomes important. But for us mammals, it is of secondary importance.

I wrote:

"The reasons I would want these things are that tiny differences in arrival times between binaural sensors give a clue as to
direction of the source. So that's an important ability, and the smaller the delay that is discernable, the better the resolution
of direction that could be detected. That's why very small delays between ears are important and discerning them is an
important ability to develop. But tiny differences in harmonics to a fundamental are created by reflections in an environment."

And you replied:

>> No, your assumption is not correct here, a reflection generally
>> delays all components, there is no shifting of acoustic phase here.

Nonsense!

A twig or branch reflects high frequencies, but low frequencies go right around it. A slightly larger surface reflects midrange but not bass or midbass. Only a very large surface like a rock face reflects all components of a sound source.

>> On the contrary, one can develop excellent stereo hearing
>> perception in real life, vastly more acute than on any
>> recording / reproducer.

When I wrote of "developing an ability" to use certain sound clues, I was referring to an evolutionary process that selects for the ability to determine a sound source by binaural clues, and also allows the creature to sense this in the presence of ground clutter without being confused by it.

>> This is partly based on the fact that in real life, source sounds
>> are not contaminated in time the way loudspeakers spread things
>> out. While a loudspeaker(s) can give an excellent stereo effect,
>> they fall far short of real life stereo hearing.

In real life, a sound source is scattered by the environment it is contained in, unless the sound is made in a wide open field. The biggest thing a person notices in an anechoic environment, such as a wide open field, is that the sound seems to have less high-frequency content because there is no reflected energy. If the environment is very cluttered or closed in, the reflections and scattering may be enough to make the sound cluttered. But in either case, binaural clues work pretty well, allowing you to locate the source of a sound.

Sometimes in very reflective and cluttered environments, the clues are too scattered and sound-source location becomes confused. But the fact that we are able to determine direction in cluttered environments at all tends to demonstrate the ability we have to ignore phase differences between harmonics and their fundamentals. Since it is rare that an environment is so cluttered as to confuse our sense of direction, this tends to show that we are pretty good at ignoring that kind of information and not being confused by it.

I suggest the sawtooth wave demonstration to illustrate this matter. Left peak or right peak: you can't tell the difference.

Take care!

Wayne

 

Thanks! (nt), posted on November 3, 2002 at 15:34:18
Jon Risch
Bored Member

Posts: 6659
Joined: April 4, 2000
Contributor
  Since:
March 1, 1999
.
Jon Risch

 

Re: Binaural delay and harmonic delay, posted on November 4, 2002 at 03:58:07
Tom Brennan
Audiophile

Posts: 5853
Joined: January 2, 2000
Tom---I'll probably get laid off this week; we're really cooking on this job and bringing it in ahead of schedule. I'll soon be back in the land of the living, for a while anyway.

 

localization, early spacial impression, and envelopment, posted on November 4, 2002 at 10:09:38
hancock


 
Hi Tom,

Just thought I'd add my two cents. I've been doing a lot of reading on the subject lately. If anybody else is interested, I recommend reading the recent work of David Griesinger, Ando (can't remember his first name), and Francis Rumsey. Our understanding of hearing has progressed quite a bit in the last 10 years. Some of the earlier work by Schroeder and Beranek is pretty interesting too. One should recognize, though, that there is still a fair amount of speculation in all of this...

Localization:
The ear is remarkably good at suppressing reflections when it comes to localization. Think about it: in a typical listening room, the direct sound coming from your speakers may be 12dB lower than the
reverberant level of your room. You have to add into that the reverb in the recording itself. And yet, we have no trouble at all sussing the localization of sounds (at least above 200Hz or so). That little fact never ceases to amaze me. The secret to that ability is that the brain determines localization on the rising edge of a sound. The good thing about reflections is that they ALWAYS arrive later than the direct sound (the shortest path between two points is a straight line). For wide-band sounds, the brain also averages over frequency. For percussive sounds--sounds with a distinct beginning, like most sounds in pop music or in Wayne's allusion to predator prey: most sounds that would alert one to a predator--very little time averaging is done and only reflections that arrive very closely to the beginning of a percussive sound will affect localization. For sounds with a less distinct beginning more time averaging is done.

Interestingly, we only have the ability to determine localization for one sound at a time--try locating two simultaneous sounds some time when you are listening at home. You can't do it. Vision has a lot to do with localization as well--if you expect a sound to come from a particular direction, it will.

Early Spatial Impression:
Early reflections do give one a sense of space. Some researchers (Griesinger) believe that the arrival of each individual early reflection is recognized distinctly by the brain, although the brain does not perceive them as distinct sounds as long as they arrive within 40ms or so of the direct sound (known as the Haas effect). A single reflection that arrives later than 40ms is perceived as an echo (a separate sound). Interestingly, a reflection at 40ms and a reflection at 80ms will combine with the direct sound and give a sense of space, but a single reflection at 80ms will be heard as a distinct echo with no sense of spaciousness. This fact leads me to believe Griesinger's conclusion that individual reflections are recognized distinctly from the direct sound, but they are suppressed by the brain as multipath.

Apparent Source Width and Envelopment:
A widening in Apparent Source Width and, in the extreme case of
widening, Envelopment are generated by fluctuations in the apparent source direction due to late reverberation. The key to good acoustics is how to generate these fluctuations. It is absolutely critical for getting good sound from your system. It took me a long time to understand it and it would take a long discussion to explain it so I won't try to do so completely but I will give some ideas to ponder...

Suppose you were listening in an anechoic space to sound coming from two loudspeakers, one directly in front of you and one directly to your side--the sound from side speaker is like a reflection. Suppose that both speakers are playing a 500 Hz sine wave. The two sine waves will add or interfere at the listener's ears depending on their relative phases. The key is that interference will be different at each ear. The sound coming from the speaker in front will have the same phase at each ear since the distances between the front speaker and left and right ears are the same. The sound coming from the side speaker, however, will have a different phase at each ear since one ear is farther away from the side speaker than the other. You therefore get a different sound intensity and relative phase at
each ear when the front and side sounds are combined. The net effect is that the apparent source of the sound shifts either to the left or to the right of the front speaker, depending on whether the two sine waves are adding or canceling.

The differences in sound intensity and phase at each ear are called the inter-aural intensity and time differences, respectively (iid and itd). Iid and itd depend on the relative phases of the direct and reflected sound, on the relative magnitudes of the direct and reflected sound, and on the angle of the reflection. A reflection causes maximum iid and itd when the reflected sound at the left ear is 180 degrees out of phase with the sound at the right ear. When this is the case, a reflected sound 10dB below the direct sound can cause a 3dB iid. Maximum interference at 2kHz is achieved by reflections coming from around 30 degrees from the front or back of the listener (around a 3" path length difference between the left and right ears). At 1kHz, maximum interference is achieved by reflections coming from a 60 degree angle (around a 6" difference). Below 1kHz, maximum interference is achieved by reflections coming from the side of the listener.
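John's interference argument can be sketched numerically. The following is a minimal free-field illustration of my own (the function name, the -10dB reflection level and the half-wavelength path difference are assumptions for the example; it ignores head shadowing and real reflection angles, which is partly why its number comes out larger than the 3dB figure above):

```python
import numpy as np

def iid_from_reflection(freq_hz, path_diff_m, refl_gain_db, c=343.0):
    """Inter-aural intensity difference (dB) when a direct sound
    (identical at both ears) combines with one reflection whose
    path to the far ear is path_diff_m longer than to the near ear."""
    g = 10 ** (refl_gain_db / 20)      # reflection amplitude relative to direct
    w = 2 * np.pi * freq_hz
    near = abs(1 + g)                  # reflection arrives in phase at the near ear
    far = abs(1 + g * np.exp(-1j * w * path_diff_m / c))
    return 20 * np.log10(near / far)

# a -10dB reflection arriving 180 degrees apart at the two ears:
# a path difference of half a wavelength at 2kHz (~8.6cm, about 3.4")
print(round(iid_from_reflection(2000, 343.0 / 2000 / 2, -10), 1))
```

With zero path difference the iid collapses to zero, which matches the observation that a frontal source alone cannot produce these fluctuations.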

The net result of all this is that a stereo setup in an anechoic room with the traditional +/-30 degree placement can only sound enveloping above around 1.5kHz. Below that, envelopment has to come from reverberation in the room itself, or else from reverberation in the
recording reflecting off the walls to the sides of the listener. Unfortunately, a small room cannot produce the reverb necessary to get envelopment at low frequencies. (It would take a long time to explain why; if you're interested, read some of the authors listed above.) To get envelopment at low frequencies from a stereo, the reverb MUST come from the recording and bounce off the side walls to your ears. Even then, the reflections must be rather late--later than the reflections produced in most small listening rooms. The end result is that you really need a surround sound setup to get meaningful envelopment at low frequencies. And don't tell me envelopment is some Sony surround-sound gimmick unworthy of a serious audiophile: concert hall acousticians spend their whole careers searching for the holy grail of envelopment. If you want it at home, you need more than two speakers.

Now, back to the two speaker example. I mentioned before that
envelopment is produced by fluctuations in the iid and itd. If the front speaker and the side speaker are both playing the same frequency, you will not get fluctuations in the iid or itd, and you won't get envelopment. If instead of a 500Hz sine wave the side speaker played a 510Hz sine wave, it would combine with the 500Hz sine wave from the front speaker to produce fluctuations in the iid and itd at a rate of 10Hz. It turns out that the bandwidth of the sound is very important for producing envelopment. A source playing a simple sine wave will never sound enveloping, even in the best designed hall. It takes multiple frequencies coming from different directions and interfering with each other to get envelopment. This is one of the reasons why vibrato sounds better than no vibrato: it increases the bandwidth of the sound and therefore the envelopment. Check this out next time you listen to music in a big room.
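The 10Hz fluctuation rate John describes is just the beat between the two tones. A quick numerical check (my own sketch; the sample rate and equal tone levels are arbitrary choices):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                       # one second of time samples
f1, f2 = 500.0, 510.0                        # front and side speaker tones
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# trig identity: sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2),
# i.e. a 505Hz tone whose amplitude swells and fades with cos(2*pi*5*t);
# the magnitude of that envelope repeats every 0.1s -- a 10Hz fluctuation
carrier = np.sin(2 * np.pi * (f1 + f2) / 2 * t)
envelope = 2 * np.cos(2 * np.pi * (f2 - f1) / 2 * t)
print(np.allclose(mix, carrier * envelope))  # True
```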

There is so much more to say on this subject, but that will have to do for now. This audio thing is just so dang fascinating...

John

 

Re: can the relative drop in level, posted on November 4, 2002 at 10:34:33
hancock


 

Hi Sam,

Yes, you can infer that the woofer and tweeter are pretty close to 360 degrees out of phase at that frequency. I guess that highlights why you have to measure these things rather than guess. There are a lot of strange things that go on with 15" woofers at 1.6kHz. Which woofer are you using? A lot of 15" woofers have a resonance around 2kHz. Is this equalized with a notch?

Using the crossover simulations that Wayne did in his offset paper, you'd have to have 160mm of offset to get the woofer and the tweeter 360 degrees out of phase. The problem with such a big offset is the resulting comb filtering on axis and the poor off-axis response. I am enclosing below the off-axis plot using Wayne's own model. I am not saying it is an accurate model--it ignores a lot of effects that you would have to measure to be able to model--but it is illustrative. Again, using a third-order filter on the tweeter and a second-order on the woofer exacerbates the offset problem. A crossover properly designed to take the offset into account will have a MUCH better on- and off-axis response. With the automated design tools available, it's not difficult to design one. You do, however, have to be able to measure your system to do it. How about taking Mark Seaton up on his offer to get your speakers measured?
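As a rough sanity check on that offset figure (my own back-of-envelope arithmetic, not John's simulation; the 90 degree electrical term assumes textbook third- and second-order networks evaluated at their crossover point):

```python
C = 343.0  # speed of sound in m/s (assumed)

def offset_phase_deg(offset_m, freq_hz, c=C):
    """Acoustic phase lag, in degrees, from a pure path-length offset."""
    return 360.0 * offset_m * freq_hz / c

path = offset_phase_deg(0.160, 1600.0)   # ~269 degrees from 160mm at 1.6kHz
electrical = 270.0 - 180.0               # 3rd-order (270) minus 2nd-order (180)
print(round(path + electrical))          # ~359 degrees, about a full cycle
```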

John

 

audibility of phase distortion, posted on November 4, 2002 at 10:53:47
hancock


 
Hi Tom,

With the DSP filtering I do, I get flat magnitude and linear phase response from 40Hz up to 22kHz. I do the all-pass filtering separately from the minimum-phase filtering, so I can easily try it with and without phase distortion. On loud percussive sounds, it is most definitely audible--the only way to describe it is that it sounds real. Everything is significantly "punchier". Sometimes I get an involuntary startled reaction from sounds I know are coming; I don't get that when the phase is wrong. On non-percussive sounds, though, I really can't tell a difference. That's not so surprising, since non-percussive sounds tend to be narrowband.

Anyone who says phase is not audible does not understand human hearing. Our ears are highly non-linear devices. In a non-linear system, phase affects magnitude--so even if we don't hear phase directly, we will hear it indirectly through magnitude. The impact of the ear's non-linearity will be greatest for loud percussive sounds, so it is not so surprising to me that that is exactly the circumstances under which I notice the difference--and I came up with this theory after I noticed this effect, not before.

But then again, maybe I am emotionally tied to this work as well...John

 

Re: can the relative drop in level, posted on November 4, 2002 at 13:01:53
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

You might address something I've been wondering about for a while. You run the Lambda Unitys, don't you? What did you do to correct the crossovers on them? The Lambda Unity had a 15dB dip at 4kHz that was fixed by "tweaking" the crossover, but not until after the product had already been in production. To quote Mark Seaton when asked about the poor performance of the Lambda Unity: "This response curve above also is for an earlier crossover which was designed on a version of the lense with slightly different characteristics, and resulted in some significant response anomalies."

As for π Speakers, your chart is not accurate. You posted this erroneous chart before in an earlier thread, along with comments that appeared technical but were actually quite false. Also on that thread, you made comments as if you had intimate knowledge of the subject and acquaintance with the author, but when confronted about the matter, you admitted to not having read the article you commented on.

So I think it best not to start acting the same way again.

See the response chart below:

The measured response shows no correlation with your model, but it does look like what I had predicted. See the offset document I provided, which shows this configuration and several others.

Wayne Parham

 

Re: can the relative drop in level, posted on November 4, 2002 at 14:53:34
hancock


 
Wayne,

You are one twisted dude. I am going to trust that everyone here understands that your statements are just more blather and not waste any more of my time challenging your nonsense. I do, however, want to say that I am glad to see that you appear to have acquired the basic tools that are an absolute necessity for designing speakers. Here's hoping you put them to good use. At least this nastiness has come to some good.

John

 

the drivers are summing flat, posted on November 4, 2002 at 15:25:36
Sam P.


 
thru the xover region when wired as Wayne advises, and show a 16dB drop in level when the HF is TEMPORARILY reversed.
The only offsets involved are those dictated by following standard building practices...woofer bolted to baffle, horn flare that positions the HF well to the rear. Using actual physical dimensions and xover theory, IIRC the HF should be ending up about 90 degrees ahead of the LF at 1.6kHz.
The SPL summation/cancelation DATA implies the drivers are fairly CLOSE together in phase at 1.6kHz. Measured on axis.

My initial inquiry was how to convert the summation/cancelation SPL readings into an estimate of phase matching at the xover freq. I think the correct equations, worked backward, would provide the answer. Altec AN-9 states that a notch of a certain depth (8 dB?) indicates good phase matching between the drivers at xover. Since the notch depth is a function of the phase matching between the drivers, its size should also be useful as an indicator of the phase. Sam
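Sam's question can indeed be worked backward, at least under the simplifying assumption (mine, not Altec's) that both drivers are at equal level at the crossover frequency. Then the normal sum is proportional to 2cos(phi/2) and the reversed sum to 2sin(phi/2), so the dB drop on reversal pins down the phase difference phi:

```python
import math

def phase_from_reversal_drop(drop_db):
    """Phase difference (degrees) between two equal-level drivers,
    inferred from how far the summed SPL drops when one driver is
    reversed: drop = 20*log10(cos(phi/2) / sin(phi/2))."""
    ratio = 10 ** (drop_db / 20)       # cos(phi/2) / sin(phi/2)
    return 2 * math.degrees(math.atan(1 / ratio))

print(round(phase_from_reversal_drop(16)))  # ~18 degrees: well matched
print(round(phase_from_reversal_drop(8)))   # ~43 degrees
```

By this estimate Sam's 16dB notch would put the drivers within about 18 degrees of each other at 1.6kHz, consistent with his reading of the data; unequal driver levels would shallow the notch and bias the estimate.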

 

Re: can the relative drop in level, posted on November 4, 2002 at 16:14:30
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

I was curious what your comment might be about the crossover flaw in the Lambda Unity's. It isn't as if I was the only person to notice it - It was a known issue. I wondered how you might comment about that problem, and why it was produced this way only to be redesigned later. It's a beautiful horn, by the way. Very aesthetically pleasing. And probably sounds pretty good too, but it had some huge peaks and dips, and one tell-tale spiked dip that indicated the system was suffering from severe cancelation from adjacent drivers.

You've been pretty vocal with your opinions about crossover design, and it seems that the "basic design tools" might have been overlooked on the Lambda Unity. I wondered what you did to solve this problem, or if you just left 'em alone. It seems to me that a huge anomaly that was later fixed by a crossover change should have been easily caught by the most rudimentary modeling tools. So I just thought you might comment on that, but if you don't want to, that's OK too.

On another note, you boasted recently on another thread that you have flat phase and amplitude response from 40Hz to 22kHz. How exactly do you keep the system from becoming reactive in the lower octaves? If not a huge horn, the system cannot possibly be purely resistive that low, and even with good horn loading, today's technology proves to have some phase "jitter." You suggested that you are using DSP, but what equipment do you use that corrects the frequency domain and the time domain simultaneously? Particularly in the lower octaves where it is moving the most, that would seem an incredible feat.

Wayne

 

Re: try 4 or more high efficiency subs, posted on November 4, 2002 at 22:01:03
Mark Seaton
Manufacturer

Posts: 75
Location: Chicago
Joined: June 2, 2000
Wayne wrote:

"I was curious what your comment might be about the crossover flaw in the Lambda Unity's. It isn't as if I was the only person to notice it - It was a known issue. I wondered how you might comment about that problem, and why it was produced this way only to be redesigned later. It's a beautiful horn, by the way. Very aesthetically pleasing. And probably sounds pretty good too, but it had some huge peaks and dips, and one tell-tale spiked dip that indicated the system was suffering from severe cancelation from adjacent drivers."

I'm not sure why I bother asking anymore, as in the past you have continually failed to answer the questions posed and only responded with more questions of your own, but over and over we have explained the nature of the notch you were so quick to diagnose from nothing more than a single frequency response plot.

So I pose the question, where would the "severe cancelation from adjacent drivers" come from if it is observed when only a single compression driver is measured?

Tom Danley sent you real measurements of at least one of our products, for which we do finally have some basic info back on the website. I have measured them in a handful of environments, and have even watched test-and-measurement gurus of the pro-sound world get nearly identical measurements on their own. Also understand that Lambda Acoustics is a separate company from Sound Physics Labs and ServoDrive. They mistakenly started delivering the horns with the crossover before full confirmation was made that the measured results matched those Tom Danley obtained. The results in fact did not match, and it was found that the pre-production horn with which the crossover was designed was physically different enough to affect the response. I won't even bother to review what effect any sort of smoothing would have on the plots...

Mark Seaton

 

"No spin zone", posted on November 4, 2002 at 23:09:37
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Mark!

You wrote:

>> I pose the question, where would the "severe cancelation from
>> adjacent drivers" come from if it is observed when only a single
>> compression driver is measured?

Anomalies caused from the crossover can occur more than an octave away from the crossover point. And the fact is that you said the problem with the Unity was solved by changing the crossover, not me.

In the thread where you and I originally discussed this issue, I described a simple observation that the response curve posted by Lambda for the Unity product was very peaky, having a sharp 15dB downward spike. That was plain for anyone to see, at least until the response charts were removed. Further, my comments were made in response to your comparison of π Speakers with the Unity, which did not merit comparison because the Unity's response curve was poor.

It was you that indicated that the crossover was changed to address this issue. Lambda customers indicated that there was a known problem with the crossover as well. So it would seem that the problem was with the crossover in the Unity, and I don't think it is right for a person to boast about crossover performance when their flagship product performed so poorly.

This early dialog between you and me is at the heart of the matter. When you boasted about the performance of the Unity and then compared it with my products, it was only natural for me to respond to your challenges. The fact that the response curve posted by the maker of the Unity showed poor performance was not something that I could have possibly fabricated. The data was on the Lambda website, it was gathered by Lambda and it was made available by Lambda. The only participation I had in this matter was to respond to your challenge and to report what I saw, using a link to your own data to make the point.

Don't forget Mark, that the "first strike" was yours. It was your comparison of the Unity with π Speakers that began this. You boasted that the performance of the Unity was better, yet your own measurements showed what was actually pretty poor response. So it was not difficult to find fault in this reasoning, and to tell the truth, it made me angry that you would even suggest a comparison between my speakers and the poor performance indicated by the Unity response curve shown on the Lambda web site.

But what I think is the main thing here - what I find to be completely reprehensible - is the attempt by you and your associates to belittle my efforts in order to make the Unity appear better by contrast. One wonders why you, Tom and Hancock feel the need to compare it with my product line with such obsessive zeal.

If it's a successful product, then let it stand on its own.

Wayne Parham

 

2nd try..., posted on November 5, 2002 at 00:04:04
Mark Seaton
Manufacturer

Posts: 75
Location: Chicago
Joined: June 2, 2000
Wayne wrote:
-----------
Hi Mark!

You wrote:
>> I pose the question, where would the "severe cancelation from
>> adjacent drivers" come from if it is observed when only a single
>> compression driver is measured?

Anomalies caused from the crossover can occur more than an octave away from the crossover point. And the fact is that you said the problem with the Unity was solved by changing the crossover, not me.
-----------

Hmm... the comment I originally quoted, "severe cancelation from adjacent drivers", refers to MULTIPLE drivers interfering. The point you continue to skate around is that you believe the drive units are interfering, when in fact they are coupled very well. You continue to stand on this matter as evidence and will listen no further, having deemed the measured response very poor. Such an evaluation is rather disconcerting when you have not seen other devices measured at similar resolution. If you already have LSPcad, why not also get JustMLS and a mic and see what is really going on?

Once again, Lambda Acoustics licensed the design from ServoDrive well before I was part of the company, and the decision to post response curves was theirs. As Tom has clearly shown you, our own production products measure much differently, and the later correction to the crossover yielded a very acceptable response as well. Not that anyone is probably still reading this, but when those measurements were posted, Nick McKinney of Lambda Acoustics clearly stated that the measurement still had some reflections making the response more ragged, but he wanted to depict the uniformity of the design over the horn's coverage pattern, which was in fact very easy to see.

Mark Seaton

 

Re: 2nd try (at spin), posted on November 5, 2002 at 00:47:25
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hello again Mark!

You wrote:

>> Hmm... the comment I originally quoted, "severe cancelation from
>> adjacent drivers", refers to MULTIPLE drivers interfering.

That's right, yes. The Unity device has multiple drivers.

>> The point you continue to skate around is that you believe the
>> drive units are interfering, when in fact they are coupled very
>> well.

Perhaps, but according to the response charts indicated on the Lambda site, they weren't coupled together well at all. The fact that the crossover was redesigned to address the problem suggests this as well.

>> You continue to stand on this matter as evidence and will listen
>> no further after you deemed such measured response as very poor.

I believe what I see, and not necessarily what you say. If what you say matches the facts, then I'll grow to trust you. If not, then I won't. You should not have invited the comparison, if you weren't willing to hear an opposing view.

By the way, I wanted to remark on the subject title of your previous post called "try 4 or more high efficiency subs". I realize now that you may have been saying that in response to my question of Hancock how he could make the bottom octaves less reactive. But having a number of "high efficiency subs" isn't going to cut it.

Even efficient subs present a predominantly reactive load in the bottom octaves, and having four of them doesn't help. The only way to make a subwoofer act more like a resistive load is to build it as an extremely large and well-loaded horn. And the problem with basshorns is that they are necessarily undersized unless they are huge permanent installations built into basements or floors.

A smaller horn may increase efficiency, but it will still be reactive. That may actually help the bottom octave response by peaking a bit there, but it also removes the possibility of zero-phase behavior. So there really is little one can do to make the system zero-phase (purely resistive) at these frequencies from any horn smaller than a house.
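Wayne's claim that direct radiators are predominantly reactive down low can be sketched with the standard small-ka piston approximations from textbook acoustics (this is my own illustration, not from his documents; the driver size is an assumed example):

```python
import math

C = 343.0  # speed of sound, m/s

def reactance_to_resistance(freq_hz, piston_radius_m, c=C):
    """Ratio of radiation reactance to radiation resistance for a
    baffled piston in the small-ka limit: R ~ (ka)^2 / 2, X ~ 8ka / 3pi."""
    ka = 2 * math.pi * freq_hz * piston_radius_m / c
    return (8 * ka / (3 * math.pi)) / (ka ** 2 / 2)

# a direct-radiating 15" woofer (cone radius ~0.19m) at 40Hz:
print(round(reactance_to_resistance(40, 0.19), 1))  # reactance ~12x resistance
```

The ratio grows as frequency falls, which is the sense in which the load becomes ever more reactive in the bottom octaves.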

Wayne Parham

 

Re: Binaural delay and harmonic delay, posted on November 5, 2002 at 05:32:22
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi Tom

Sounds like the job is zipping along.
I would be interested to hear what you think about the difference between "normal" and preserving the waveshape.
Wayne says you can't hear such things but after hearing "with and without" I have never been more convinced you can.
Let me know when you would be able to go north and hear it.
Cheers,

Tom

 

Re: Binaural delay and harmonic delay, posted on November 5, 2002 at 08:58:20
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi Tom!

I wrote:

"If I were to build a machine that needed to detect spatial information using binaural clues, it would use phase between sensor "ears." But it would also not need to detect harmonic to fundamental relationships; in fact, that would be something I would want a robot to disregard."

And you replied:

>> Why? all the information would be useful for identifying the
>> source angle.

No. The phase difference between one ear and the other will give direction by triangulation. But the phase difference of high frequency components scattered from their fundamentals is an indication of clutter in the environment. So source angle detection is actually made more difficult by scatter, and it is something that should be disregarded for the determination of sound source. What the amount of phase scatter tells us is the amount of clutter in the environment.

So it stands to reason that for a creature to survive, the direction of a potential predator is much more important than the amount of clutter in the area. Maybe for a small creature that eats bugs, the "clutter" becomes important. But for us mammals, this is of secondary importance.


In the systems I have worked on to locate flaws in concrete, to image things in bodies, and in the way the TEF locates a loudspeaker, increasing the bandwidth of the signal increases resolution, not decreases it. There are other ways to do this than simple phase comparison, you know; why assume the ear uses the simplest?
In fact Gary at Northwestern concluded the ear is VERY sensitive to time, especially ear-to-ear differences. This is "time", not phase, sensitivity, so why don't you think some of that same "time" sensitivity is available in mono? Why do you think a left sawtooth would sound the same as a right sawtooth? At, say, 500 Hz, the time locations of the hf components are FAR further apart than the difference in time known to be detectable by ear.
Granted, if using a speaker which scrambles up all that time information (as nearly all do), they may well sound the same, but I suspect it is the fact that there are very very few speakers which can preserve waveshape even over a narrow band that leads to the assertion. Without the ability to compare to "without", saying there is no difference "with" is a weak arm-waving argument at best. Having heard it myself, nothing you could say would convince me that what I hear is not real.
What I need now are more "un-involved" people to hear it vs. normal and get their opinions; it is still possible, after all, that all this was not worth the effort.


I wrote:

"The reasons I would want these things are that tiny differences in arrival times between binaural sensors give a clue as to direction of the source. So that's an important ability, and the smaller the delay that is discernable, the better the resolution of direction that could be detected. That's why very small delays between ears are important and discerning them is an important ability to develop. But tiny differences in harmonics to a fundamental are created by reflections in an environment."

And you replied:

>> No, your assumption is not correct here, a reflection generally
>> delays all components, there is no shifting of acoustic phase here.

Nonsense!

A twig or branch reflects high frequencies, but low frequencies go right around it. A slightly larger surface reflects
midrange but not bass or midbass. Only a very large surface like a rock face reflects all components of a sound source.


Sorry, I should have been more clear. I was referring more to how reflections in a listening room tend to measure as relatively broadband lumps that generally look as described (broadband, low phase shift), compared to a speaker's problems, which generally shift phase at all frequencies. Yes, you are correct that reflectors can also have significant frequency/size dependence, especially if acoustically very small, like your examples.
The overall point was that speakers generally screw up "time" unlike anything found in nature, and to a larger degree. For one, they impart both a fixed delay and a variable, frequency-dependent delay, and do so over the entire range (generally).

One can hear time; the issue is to what degree. Obviously anyone can tell if a song was played yesterday, or an hour ago, or even played with a 1 second echo.
It is known that adding several small echo delays can make a voice sound like several voices. Since the time delays encountered there are in the neighborhood (at the short end) of the delays encountered in a speaker system (woofer delay + xo delay), why is it hard to imagine that re-combining the voice back into a single "time" would not also be audible as the opposite effect (one voice sounding more like ONE voice)?

It is known, at least so far as voice recognition goes (understanding spoken words), that a big part of intelligibility is related to the ratio of direct to ambient or reflected sound (level, spectrum, distortion etc. being equal).
It is clear that the greater the direct sound is compared to the "noise" (all non-direct sound), the greater the voice recognition.

In recording studios and many homes, where maximizing the stereo information in the recording is desired, the speaker end of the room is made very absorptive, and speakers with directivity are often used. Absorptive, to soak up all the sound that would otherwise arrive as short-delay reflections; these time errors destroy the stereo image, and they are in the same neighborhood as the time issues in multiway and acoustically small point-source speakers (crossover phase shift and driver time and phase).
If those time problems are worth treating in a room, why would fixing that same kind of problem at the source (the time issues of the speaker) not also make a difference?
Not having had a way to make drivers behave ideally in time before is not the same as saying doing so can't be heard.


>> On the contrary, one can develop excellent stereo hearing
>> perception in real life, vastly more acute than on any
>> recording / reproducer.

When I wrote of "developing an ability" to use certain sound clues, I was referring to an evolutionary process that selects for the ability to determine sound source by binaural clues, and also allows the creature to sense this in the presence of ground clutter, and not be confused by it.

>> This is partly based on the fact that in real life, source sounds
>> are not contaminated in time the way loudspeakers spread things
>> out. While a loudspeaker(s) can give an excellent stereo effect,
>> they fall far short of real life stereo hearing.

In real life, a sound source is scattered by the environment it is contained in, unless the sound is made in a wide open field. The biggest thing a person notices in an anechoic environment, such as a wide open field, is that the sound seems to have less high frequency content because there is no reflected energy. If the environment is very cluttered or closed in, the reflections and scattering may be enough to make the sound cluttered. But in either case, binaural clues work pretty well, allowing you to locate the source of a sound.

Sometimes, in very reflective and cluttered environments, the clues are too scattered and sound source location becomes confused. But the fact that we are able to determine direction in cluttered environments at all tends to demonstrate our ability to ignore phase differences between harmonics and their fundamentals. Since it is rare for an environment to be so cluttered as to confuse our sense of direction, this tends to show that we are pretty good at ignoring that kind of information and not being confused by it.

I suggest the sawtooth wave demonstration to illustrate this matter. Left peak or right peak - You can't tell the
difference.

As I mentioned in an earlier post, and as shown in the photo I sent you, my Moog does have both choices of waveshape, and the difference in perceived sound between the two shapes is far from inaudible; it is much more pronounced than I am comfortable with.

A better test, maybe, is comparing the sound from the same drivers with a normal crossover versus this one, where it is truly time/phase correct. Since the frequency response, directivity and distortion would be relatively "the same", remaining differences would largely be due to time. That audible difference sounds significant to those who have heard it so far.

Based on what I hear, especially now, I am more convinced than ever that some people may in fact be very sensitive to "time" issues. If I can figure out a way to make a normal speaker like yours preserve time well enough to reproduce a waveshape over a significant bandwidth, I will describe how to do it, and then you can see for yourself whether you can hear the difference.
Until you have a speaker which preserves time, you will not be able to say first-hand that it does or does not make a difference.

Perhaps the antique references that keep popping up (so far, on the audibility of time/phase) are due to be revised, now that sixty-some years have gone by and it is possible to correct the problem.
I suspect (just like the issue of how low you need to go) that "what is needed" so far as time/phase goes has a lot more to do with what loudspeakers can do than with what one can hear.
I suspect that, to be fair, a person would have to hear a speaker that preserves time/phase before they could conclude it has no effect. I hope you leave your mind open enough so that if you ever get the chance to hear one, you take it.
Cheers,

Tom

 

Re: 2nd try (at spin), posted on November 5, 2002 at 09:03:31
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Wayne

I have (to you, several times, in increasing detail) explained how the Unity got to Nick at Lambda and why his initial version needed additional xo work, as well as a host of "your issues" with it such as driver interference.
Your posture of having "forgotten" all that here, and posing the questions yet again, can only be from wanting to try to "one up" Mark somehow, or it's a personal medical issue.
If you really did forget, go back and look.


TD
BTW, fixing the phase on a sub is easier than you might think.

 

Re: Binaural delay and harmonic delay, posted on November 5, 2002 at 09:09:26
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi Tom

Let me know when you're available for a trip north; I'll shovel some of the kids' stuff out of the living room.
I am keen to hear what it sounds like to you. Wayne says you can't hear time like this, but after hearing it, I am more convinced than ever that you can.

Cheers,

Tom (also Irish)

 

Sawtooth wave experiment, posted on November 5, 2002 at 10:02:54
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Tom!

You wrote:

>> Why do you think a left saw tooth would sound the same as a right
>> sawtooth?

Because they do sound the same. This has been shown, time and time again.

>> at say 500 Hz, the time locations of the hf components are FAR
>> further apart than the difference in time known to be detectable
>> by ear?

So then make this part of your Unity demonstrations. Hook up a scope to demonstrate the waveshape, and play the sound of a 500Hz sawtooth with its peak on the left. Then reverse the waveshape so that the peak is on the right - keeping frequency and amplitude the same.
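The physics behind this demonstration is easy to verify numerically (my own sketch, not part of the thread): a sawtooth and its time-reversed twin have identical harmonic magnitudes and differ only in harmonic phase, which is exactly what the proposed test would put to the ear.

```python
import numpy as np

fs, f0 = 48000, 500
n = fs // f0                   # samples in one 500Hz cycle
t = np.arange(n) / n

saw = 2 * t - 1                # "left" sawtooth: slow rise, sharp fall
rev = saw[::-1]                # "right" sawtooth: the same wave reversed in time

A, B = np.fft.rfft(saw), np.fft.rfft(rev)
# the harmonic magnitudes (what a spectrum analyzer shows) are identical;
# only the relative phases of the harmonics differ between the two shapes
print(np.allclose(np.abs(A), np.abs(B)))                 # True
print(np.allclose(np.angle(A[1:4]), np.angle(B[1:4])))   # False
```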

>> Granted, if using a speaker which scrambles up all that time
>> information (as nearly all do), they may well sound the same
>> but I suspect it is the fact that there are very very few
>> speakers which can preserve waveshape even over a narrow band,
>> that leads to the assertion.

Use a single driver for the demonstration if you feel that's important. Or use your Unity if you would feel better.

You might consult the work of Dr. Arthur Benade too. It's not just Bob Moog and myself who find that a sawtooth and its reverse sound the same. In fact, you'll find everyone who attends a sawtooth phase demonstration will agree, provided you do not change the waveshape in some other way in order to make it sound different.

>> Without the ability to compare to "without" saying there is no
>> difference "with" is a weak arm waving argument at best. Having
>> heard it myself, nothing you could say would convince me that what
>> I hear is not real.

That last statement makes your comments very suspicious. I challenge you to include the sawtooth wave demonstration when you show the Unitys. If you're as convinced as you say you are, then this is something you'll want to do.

Wayne Parham

 

Re: 2nd try (at spin), posted on November 5, 2002 at 10:14:55
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Tom!

You wrote:

>> I have (to you, several times, in increasing detail) explained how
>> the Unity got to Nick at Lambda and why his initial version needed
>> additional xo work as well as a host of "your issues" with it such
>> as driver interference.

The fact is that the response curve posted for the Lambda Unity was substandard, showing a large 15dB dip and several smaller 5dB peaks and troughs. Its performance was shown to be poor, yet Seaton compared it to π Speakers.

>> Your posture of having "forgotten" all that here and posing the
>> questions yet a new can only be from wanting to try to "one up"
>> Mark somehow, or its a personal medical issue.

I forget nothing - quite the opposite: my posts that describe it are accompanied by links to the thread, to demonstrate it accurately. Anyone who cares to read the "things forgotten" can go directly to the threads in question.

That's the point here, Tom. You and some of your supporters are quick to discuss the crossover in my speakers, but you prefer to overlook the known crossover flaws in your own. That's what started the controversy in the first place.

Wayne Parham

 

Re: 2nd try (un-wind), posted on November 5, 2002 at 13:30:37
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi Wayne

Yes, it is a fact that his first one had some "issues," and his measurement showed it.
Personally, I thought it was sort of brave of him to show what it did (measurements of the real thing), but he was proud of how
good they sounded.
In any case, the event is quickly retreating into the distant past and at no time represented what we make and sell, nor the
performance of his units with a proper xo (suited to the shape of his final production corner radius).

I don't see how, without measurements of your π's, one could compare them to any Unity from a measurement point of view.
Without measurements one doesn't really know what they actually do acoustically, after all.
That leaves discussing generalities. You are stuck on thinking discussions of time and crossovers have been focused on "your"
speakers, when the problems I have been discussing have been common to (nearly) all speakers, not just yours.
Mostly, this has been in the context of explaining to you how Unitys work, not discussing your speakers.

Known flaws? That's a hoot. The point of the Unity design is that it gets around most all of the flaws incurred doing it the normal (not
time coherent and/or not a single acoustic source) way.
Having the drivers any way BUT this (aligned in time / phase etc.) throws away time coherence - oh, I forgot, you're also saying you
can't hear that, so it doesn't matter.
I even sent you a real high resolution TDS measurement of one Unity I was working on, right off the TEF; the crossover was
invisible in amplitude and phase. So just what the heck are you talking about when you say "known flaws"? Please, fill me in:
what can you tell me that measurements of the real speaker don't?

You had another post here where you refer to having a single speaker preserving time.
This is very, very weak at best; in the range where the source is acoustically small, it does not preserve time at all if it has flat
frequency response.
Flat response midband is accompanied by a ~ -90 degree acoustic phase shift; that is, the signal produced is about -90 degrees
behind the input signal.
The problem is that -90 degrees at 1kHz is like the signal was delayed 3.3 inches, but by the time one gets down to say 30 Hz, it
is delayed nearly 10 feet.
Re-arranging the time signals like this prevents even a single perfect driver from ever reproducing a complex signal faithfully.

On the other hand, if the driver behaves into the region where it has directivity (a gain in on-axis SPL and loss off axis), then the
VC inductance (or network) can be sized to compensate that rise (increasing SPL on axis with increasing F) and at the same
time add an additional 90 degrees (making the acoustic phase around zero).
So your statement would more accurately read: "between the points where directivity takes hold and cone breakup, if flat (what,
an octave or two or three??), a single driver can preserve time."
What I am talking about is a passive speaker having preserved time (zero degree acoustic phase) from 200 Hz to 4kHz, with only a
change corresponding to a shift of 5/8 inch above that (@18kHz).
It also has the simple constant directivity radiation pattern of a one-horn acoustic source, is very low in distortion, and can go
very loud if desired; it is very little like that full-range cone driver.


Tom

 

Re: 2nd try (at spin), posted on November 5, 2002 at 13:54:44

G'day Wayne

I guess it's time for a fresh view here.

I purchased my Unity horn kits during the pre order. When they were delivered, I quickly wired them up per Lambda's instructions, and crossed them over actively to a JBL 2226H in a sealed enclosure, which is used for bass duties. To say I thought I'd made a big mistake selling my ribbons was an understatement. I'd picked up a fair serve of dynamics, and the speakers sure had a nice sharp cutoff beyond 30 degrees from the main axis, but they just didn't sound right. Measurements with IMP/MLS confirmed there were response/phase issues. My plots pretty closely matched those on Nick's website.

After emailing Tom about my problems, he sent me a new crossover to try. I sourced all the parts then built them up. The Unities now sound superb! They have the transparency and detail of my old ribbon speakers, but the dynamics are breathtaking. The ability to portray the original acoustic space is superb. Small reverberant details in the music shine through what used to be the acoustic mud of the listening room. The speakers are now very neutral in character, with no audible acoustic anomalies.

I'm glad I stuck with Tom's design. It looked great in theory, but it was a huge leap of faith buying unheard. These will be my main speakers for a very long time. Your bias against this design is based on a few plots that were taken before the kit version of the Unity horn had been properly implemented. The flaws that you state are characteristic of this design are no longer an issue, and they most certainly DO live up to their expectations.

Cheers

William Cowan

 

I am also very interested, posted on November 5, 2002 at 15:08:22
KCHANG
Audiophile

Posts: 475
Location: Chicago
Joined: March 8, 2002
Hi Tom,

If you don't mind, perhaps I can try to come with Irish Tom to your place. I've been wondering for a while about the importance of preserving the waveform of the music. I cannot imagine how the reproduced sound can ever sound "real" when the recorded waveform is so different from the totally disfigured, unrecognizable waveform coming out of the speakers. I am pretty intrigued that we can even enjoy the music. Our brains must have a very powerful interpreter for retrieving information from the distorted waveform.

By the way, do you know whether anyone has done any in-depth study comparing the original recorded signals with what's coming out of the speakers and explaining how we are able to recognize what's in it?

Kurt (Chinese)

 

Re: 2nd try (at spin), posted on November 5, 2002 at 16:30:14
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi William!

Those are really stunning speakers. And I'm glad you posted this here because it illustrates my point. I'm glad you got the bugs worked out on your Unitys, and I think your implementation is truly excellent.

Take care!

Wayne Parham

 

Re: 2nd try (un-wind), posted on November 5, 2002 at 16:31:26
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Tom!

Again, the known flaws in the Unity crossover were discussed by Mark Seaton and by some of your customers, on this forum and others. As for comparison of the π crossover, a graph is provided in the position/offset document that shows measured performance of the speakers, and it is clearly better than what was shown for the Lambda Unity on their website. Finally, in discussions about the perception of phase, I think it would be best to include the sawtooth wave demonstration and let that stand on its own.

Take care!

Wayne Parham

 

Linkwitz views on the subject, posted on November 5, 2002 at 20:48:36
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Kurt!

Check out what Siegfried Linkwitz says on the subject.

I find his views to be an accurate assessment of the situation, without sensationalism or exaggeration.

Take care!

Wayne Parham

 

Re: Linkwitz views on the subject, posted on November 6, 2002 at 07:35:00
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hi Wayne

A good write up as far as it goes, his first words on the subject should not give you too much comfort.

"Few studies have been made about the audibility of linear distortion of the time waveform for an acoustic signal."

This is not the same as saying "you can't hear any of this".

The statement later on
"The phase of a speaker's frequency response has to change linearly with frequency in order to preserve
waveform fidelity. This is equivalent to saying, the group delay has to be constant (f)"

It is correct, however, from a different (less clear) standpoint than the one I have been using.
The wave shape is only preserved when the acoustic phase is zero degrees (acoustic pressure with respect to the input signal)
AND the amplitude response is flat.
A Hilbert transform has a flat frequency response BUT also has a -90 degree phase shift at all frequencies (like a small point
source has mid-band). Flat amplitude with a -90 degree shift takes a square wave and turns it into something unrecognizable.
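That effect is easy to demonstrate numerically. A small Python sketch of my own (an illustration, not anything from the thread): rotate every frequency component of a square wave by -90 degrees, leaving the amplitudes untouched, and the waveshape is no longer a square wave at all:

```python
import numpy as np

fs, f0 = 48000, 100
t = np.arange(4800) / fs                      # exactly ten periods at 100 Hz
square = np.sign(np.sin(2 * np.pi * f0 * t))  # +/-1 square wave

X = np.fft.rfft(square)
Y = X.copy()
Y[1:] *= -1j                                  # -90 degrees at every frequency (DC untouched)
shifted = np.fft.irfft(Y, len(square))

print(np.allclose(np.abs(X), np.abs(Y)))      # True: amplitude response stays flat
print(shifted.max() > 1.0)                    # peaks overshoot the original +/-1 rails
```

The shifted signal has the same harmonic amplitudes as the square wave but large peaks near the former transitions, which is the "unrecognizable" waveshape described above.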

The group delay statement is spot on.
Group delay is the measure of how frequencies are delayed with respect to each other.
If one remembers that sound travels at about 1132 feet per second, one can figure that there are 0.884 milliseconds of delay per foot.
Looking at the plots of delay for a woofer, one sees that the delay increases dramatically as the frequency goes down.
It is as if the woofer is retreating to the rear in time with decreasing frequency. Why?
Looking at the acoustic phase, one sees that the woofer's phase response is lagging ~90 degrees, just like it should be, given the
frequency and phase shift. That -90 degree lag is an increasing distance / time as the frequency falls.
An acoustically small point source with flat response (like a woofer) has this -90 degree phase shift, and so its group delay shows
an increasing delay with decreasing F.
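The numbers quoted in this thread follow directly from that: a constant -90 degree lag is a quarter-cycle delay, 1/(4f) seconds, or c/(4f) in distance. A short sketch (my own arithmetic check, using the 1132 ft/s figure from the post):

```python
C = 1132.0                          # speed of sound, ft/s, as used above

print(1000.0 / C)                   # ~0.884 ms of delay per foot of path

def quarter_cycle_delay_ft(f_hz):
    """Distance equivalent of a -90 degree (quarter-cycle) phase lag."""
    return C / (4.0 * f_hz)

print(quarter_cycle_delay_ft(1000) * 12)   # ~3.4 inches at 1 kHz
print(quarter_cycle_delay_ft(30))          # ~9.4 feet at 30 Hz
```

These match the "3.3 inches at 1kHz" and "nearly 10 feet at 30 Hz" figures given earlier in the thread.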

From the Heyser view, acoustic phase is the shift between the input signal and what the driver output when all the fixed delays
are removed from the measurement. This response is the one which matches what the equivalent circuit (of the driver) predicts.

A fixed delay (like the distance between the mic and speaker or the speakers internal delay) shows up as a linear phase shift (a
change in phase proportional to frequency) but all frequencies are delayed equally so wave shape is un-altered.

He had some filter measurements but without measuring the drivers output, it is hard to draw much from them.

He said
"Group delay increases dramatically at low frequencies, even for the full range speaker".

Again: "Looking at the acoustic phase, one sees that the woofer's phase response is lagging ~90 degrees, just like it should be, given the
frequency and phase shift. That -90 degree lag is an increasing distance / time as the frequency falls.
An acoustically small point source with flat response (like a woofer) has this -90 degree phase shift, and so its group delay shows
an increasing delay with decreasing F."

He says:
"Further experiments might be performed with a high quality 3-way speaker system, whose phase
response or group delay have been carefully measured or modeled. Knowing the phase distortion,
which the speaker will introduce, use digital signal processing with the appropriate software to
pre-distort various music files on a PC, such that the overall playback phase response will be linearized.
Burn the newly created files to CD-R. For comparison take the same music files and add a constant
delay to them, so that pre-distorted and un-distorted material have undergone the same process and
any artifacts are common to both. In this way one could switch directly between nearly identical tracks of
a CD-R and listen for differences that can only be the result of phase response differences."

This would do it, but I think my solution is better, at least for me as a speaker designer.
I have a speaker which is already corrected to a significant degree w/o DSP, and although multi-driver, none of the drivers or
ranges interfere with each other spatially, unlike a 3-way box speaker; it's also far lower in distortion and goes loud.
I can compare it to another speaker with most of the same parts, where the difference is in "time" preservation.

I will be curious to see what Tom B. and Kurt's impressions are when they hear it (maybe I'll have 2 made by then).
Cheers,

Tom


 

Re: Linkwitz views on the subject, posted on November 6, 2002 at 13:12:41
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hello Tom!

You wrote:

[about the views of Linkwitz on the subject of speaker phase performance]

>> A good write up as far as it goes, his first words on the subject
>> should not give you too much comfort.
>>
>> "Few studies have been made about the audibility of linear
>> distortion of the time waveform for an acoustic signal."

Actually, Tom, I am quite comfortable with Linkwitz's views; that's why I posted the link in the first place. His views are very much like my own, and in fact, much of the audio community is like-minded.

It is you that has made a career out of making something distinctly different, in an attempt to say that the difference is inherently "better."

>> The statement later on "The phase of a speaker's frequency
>> response has to change linearly with frequency in order to
>> preserve waveform fidelity. This is equivalent to saying,
>> the group delay has to be constant (f)"

That's right; it's exactly what I've been saying. It also explains why the Unity cannot possibly be time linear over the audio spectrum. You have even admitted that it shifts at frequency extremes, but you often get back up on your time-alignment soapbox and fail to mention that point.

>> Is correct however is from a different (less clear) standpoint
>> than I have been using.

Your arrogance is overwhelming at times. Your spin does not make the situation any more clear, and others describe the apparent movement of loudspeakers more succinctly, because they are not trying to promote a particular time-alignment scheme. But what I am most uncomfortable with is the obsessive extremes to which you and your associates will go to further an argument, and the rude and condescending tone you take with people who simply don't agree with you.

Take care!

Wayne Parham

 

Suggested reading, posted on November 6, 2002 at 13:18:03
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hi Norman!

I don't know if you were able to glean any useful information from this thread. So I suggest that you read what Siegfried Linkwitz says on the subject. I find his views to be an accurate assessment of the situation, without sensationalism or exaggeration.

You might also be interested in the crossover document and the position/offset document, available on the π Speakers website. These documents describe several different crossover networks, showing schematics and response graphs of each. Many loudspeaker configurations and diaphragm placements are also explored, so that you can see the effects of each.

Take care!

Wayne Parham

 

Re: the drivers are summing flat, posted on November 6, 2002 at 13:57:50
hancock


 
I'm working from memory, but I believe the formula you are looking for is:

sin(x)+sin(x+a)=2*sin(x+a/2)*cos(a/2)

According to your measurement, the speakers are approximately 360 degrees out of phase--if you want to call that in phase you are not incorrect, but the drivers will definitely not sum flat. You correctly observed that the location of the drivers results in physical offset. The fact that Wayne is using a 2nd/3rd order crossover also adds to the phase offset electrically. The net result is there is NO WAY the crossover sums flat. I realize you may not want to hear it, but you can't break the laws of physics.
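For anyone following along, the identity (which is correct) is easy to verify numerically. A quick Python check of my own, which also shows that the peak amplitude of the sum is |2 cos(a/2)|: full cancellation at 180 degrees of relative phase, full (+6 dB) reinforcement at 0 or 360 degrees:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)
for a_deg in (0, 90, 180, 360):
    a = np.radians(a_deg)
    lhs = np.sin(x) + np.sin(x + a)                 # two unit sines, offset by a
    rhs = 2.0 * np.sin(x + a / 2) * np.cos(a / 2)
    assert np.allclose(lhs, rhs)                    # the identity holds
    # Peak of the sum: 2.0 when coherent (0 or 360 deg), 0.0 at 180 deg
    print(a_deg, round(abs(2.0 * np.cos(a / 2)), 3))
```

Note that at exactly one full cycle (360 degrees) the two sines sum coherently at that single frequency; the disagreement in the thread is about what happens at the frequencies around the crossover point, where the relative phase drifts away from a full cycle.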

John

 

Re: can the relative drop in level, posted on November 6, 2002 at 14:37:05
hancock


 
Wayne,

I'll interpret this to be a serious question:

"On another note, you boasted recently on another thread that you have flat phase and amplitude response from 40Hz to 22kHz. How exactly do you keep the system from becoming reactive in the lower octaves? If not a huge horn, the system cannot possibly be purely resistive that low, and even with good horn loading, today's technology proves to have some phase "jitter." You suggested that you are using DSP, but what equipment do you use that corrects the frequency domain and the time domain simultaneously? Particularly in the lower octaves where it is moving the most, that would seem an incredible feat."

First off, time domain and frequency domain are two different representations of the same information.

Second, inverse filtering is not at all an incredible feat. It is just basic signal processing. The equipment is not really relevant; it just has to be able to do the math at a high enough precision in real time. But since you ask, I do DSP on the PC and on a Motorola 56362.

I like to do the minimum phase filtering separate from the all pass filtering. The minimum phase filtering equalizes the magnitude of the system and to the extent that the system is minimum phase, it will simultaneously equalize the phase of the system. Minimum phase filters do not introduce a delay. However, they will not correct any excess phase in the system. To correct the phase, you also need to use an all-pass filter. An all-pass filter has a flat magnitude and an arbitrary phase. All-pass filters, however, introduce a delay.

To construct your inverse filters, take an impulse response. Do an FFT on it. Take a Hilbert Transform of the magnitude of this FFT. Convert the magnitude and phase you just constructed with the Hilbert Transform back into complex numbers. Take the complex conjugate. This is the minimum phase inverse filter. Do an Inverse FFT (IFFT) on the filter to get the impulse response of the filter. Subtract the impulse response of the minimum phase filter from the original impulse response. Reverse the time index of this "subtracted" impulse response and this is your all-pass filter. Like I said, it is not at all "incredible". It is just basic math.
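The minimum-phase part of this recipe can be sketched with the textbook real-cepstrum construction (the discrete form of the Hilbert-transform relation between log magnitude and phase). This is my own illustrative code, assuming NumPy; the function name, FFT size, and regularization constant are all choices of mine, not anything from the post:

```python
import numpy as np

def minimum_phase_inverse(impulse, n_fft=64):
    """Minimum-phase inverse filter of an impulse response, built via the
    real-cepstrum (Hilbert transform of log magnitude) construction."""
    H = np.fft.fft(impulse, n_fft)
    log_mag = np.log(np.maximum(np.abs(H), 1e-12))   # regularize log(0)
    cep = np.fft.ifft(log_mag).real                  # real cepstrum of |H|
    # Fold the cepstrum onto its causal side; this yields the log spectrum
    # of the minimum-phase system with the same magnitude response.
    w = np.zeros(n_fft)
    w[0] = 1.0
    w[1:(n_fft + 1) // 2] = 2.0
    if n_fft % 2 == 0:
        w[n_fft // 2] = 1.0
    H_min = np.exp(np.fft.fft(w * cep))              # minimum-phase spectrum
    return np.fft.ifft(1.0 / H_min).real             # inverse filter taps

# Sanity check: [1, 0.5] is already minimum phase (zero inside the unit
# circle), so the inverse filter should undo it; circular convolution
# of the two should give a delta.
imp = np.array([1.0, 0.5])
inv = minimum_phase_inverse(imp)
out = np.fft.ifft(np.fft.fft(imp, 64) * np.fft.fft(inv, 64)).real
print(np.allclose(out[0], 1.0), np.max(np.abs(out[1:])) < 1e-8)
```

The excess-phase (all-pass) remainder would then be handled separately, as the post goes on to describe.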

John

 

Re: can the relative drop in level, posted on November 6, 2002 at 16:55:31
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

Actually, you interpreted incorrectly because my question was purely rhetorical.

I have made the case that loudspeaker phase cannot be corrected with an analog filter crossover or baffle offset. I have also made the case that it is a non-trivial matter using DSP techniques, particularly in the bottom octaves. To describe it and to perform it are two very different things.

This is like talking about 3Hz performance from a loudspeaker. It is nothing but ostentatious hyperbole.

In short, John, I am saying that your claim to have a sound system that is phase-linear from 40Hz to 22kHz is a gross exaggeration, and that's putting it politely.

Wayne

 

Re: the drivers are summing flat, posted on November 6, 2002 at 16:56:38
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

360 degrees phase offset at 1.6kHz is 8.5 inches. Get out the yardstick and slide rule and you won't find that figure comes out anywhere. I realize that you're used to depending on computers for this kind of thing, but it isn't really all that difficult.
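For reference, the 8.5-inch figure is one full wavelength (360 degrees) at 1.6 kHz, which anyone can check; a one-liner of my own, assuming the 1132 ft/s speed of sound used elsewhere in the thread:

```python
C_FT_S = 1132.0                              # speed of sound, ft/s

def wavelength_inches(f_hz):
    """Path length of one full cycle (360 degrees), in inches."""
    return C_FT_S / f_hz * 12.0

print(round(wavelength_inches(1600.0), 1))   # 8.5 inches at 1.6 kHz
```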

If you align centers, there is not 8.5 inches of offset. If you consider the woofer delayed, there is even less. And since your champion suggests that woofers be placed in front of the tweeters to compensate delays, it is a bit odd that you would make statements suggesting the opposite is the case. So what is it that you believe exactly?

I submit that you believe that fabrication is important to win an argument, and that there is nothing more to your comments than that.

Wayne

 

Re: can the relative drop in level, posted on November 7, 2002 at 00:20:10
hancock


 
Wayne,

Look at your own simulations in your offset paper. With your pi crossover the drivers are 360 degrees out of phase when their acoustic centers are 6" apart, not 8.5". Take a look at your own charts--notice how with 6" of offset the individual responses are 6dB below the summed response? That means they are 360 degrees out of phase. You are forgetting the relative phase shift introduced by your 2nd/3rd order crossover. As I have stated on numerous occasions, your dubious choice of crossovers exacerbates the offset problem. Tom suggests offsetting the drivers to adjust for the relative phase shift of an entirely different crossover with relative phase of the OPPOSITE sign. In Tom's case, the relative phase shift of the crossover and the relative phase shift of the physical offset compensate for each other. With your setup, the two sources of relative phase REINFORCE each other. That is why I say your choice of design is a horrible one.

Go ahead and take the last word on this Wayne. I know it is pointless to discuss these matters with you because you just don't get it.

John

 

Re: can the relative drop in level, posted on November 7, 2002 at 00:34:34
hancock


 
Wayne,

Why exactly do you think it is not possible? Do you personally have any idea what the limiting factors in DSP are? Can you explain to me why you are calling me a liar?

John

 

Re: can the relative drop in level, posted on November 7, 2002 at 00:38:26
hancock


 
I wrote that too quickly...

The word "subtracted" in the last two sentences of the above post should say "convolved"--in case anyone out there is actually trying to do this.

John

 

Re: the drivers are summing flat, posted on November 7, 2002 at 00:43:59
hancock


 
Wayne,

Look at your own simulations in your offset paper. With your pi crossover the drivers are 360 degrees out of phase when their acoustic centers are 6" apart, not 8.5". Take a look at your own charts--notice how with 6" of offset the individual responses are 6dB below the summed response? That means they are 360 degrees out of phase. You are forgetting the relative phase shift introduced by your 2nd/3rd order crossover. As I have stated on numerous occasions, your dubious choice of crossovers exacerbates the offset problem.

Tom Danley is offsetting the drivers in the Unity to adjust for the relative phase shift of an entirely different crossover with relative phase of the OPPOSITE sign. In Tom's case, the relative phase shift of the crossover and the relative phase shift of the physical offset compensate for each other. With your setup, the two sources of relative phase REINFORCE each other--they don't offset. That is why I say your choice of crossover in the 4pi is a horrible one.

Go ahead and take the last word on this Wayne. I know it is pointless to discuss these matters with you because you just don't get it.

John

 

Re: can the relative drop in level, posted on November 7, 2002 at 00:51:32
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

The points I made are that:

1. 360 degrees is 8.5 inches at 1.6kHz, and there is no offset anywhere of this scale, either physical or electronic, and
2. You are of a camp that promotes displacement of LF diaphragms behind those of HF units, yet this is something you balk about in my designs. So do you believe that the large inductance of the woofer voice coil delays the signal more than the small inductance of the tweeter, or don't you?

Beyond that, I'd be interested in a description of the crossover you've used in your Unity implementation rather than hiding behind some nebulous description of it. I'd also like to know what kind of crossover was in the Lambda Unity to give it such poor performance and whether that's the unit you've retained, or if you replaced it.

Wayne

 

Re: the drivers are summing flat, posted on November 7, 2002 at 00:52:53
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

The points I made are that:

1. 360 degrees is 8.5 inches at 1.6kHz, and there is no offset anywhere of this scale, either physical or electronic, and
2. You are of a camp that promotes displacement of LF diaphragms behind those of HF units, yet this is something you balk about in my designs. So do you believe that the large inductance of the woofer voice coil delays the signal more than the small inductance of the tweeter, or don't you?

Beyond that, I'd be interested in a description of the crossover you've used in your Unity implementation rather than hiding behind some nebulous description of it. I'd also like to know what kind of crossover was in the Lambda Unity to give it such poor performance and whether that's the unit you've retained, or if you replaced it.

Wayne

 

You asked for it, posted on November 7, 2002 at 02:07:08
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

Let me clue you in on something. I've designed massively parallel processor cards based on the Inmos T800 processor, and I've done fiberglass runs in the thousands. Each node has optional digital and analog I/O, so a massively parallel network has quite a bit of DSP capacity. Basically what I'm telling you is that all this boasting you do is child's play to me.

But the most important reason I think you're lying is your behaviour in the recent past. You've posted fabricated data in an earlier thread, along with comments that appeared technical but were actually quite false. An example on this thread is where you made comments as if you had intimate knowledge of the subject and acquaintance with the author, but then when confronted about the matter, you admitted having not even read the article you commented about. And finally, we've found you here posting under assumed names - sockpuppeting - in an attempt to deceive. Several of your other sockpuppeted messages were deleted already, but this thread where you posed as "Mike" was allowed to stand.

In short, John, you are a liar.

Wayne

 

Re: You asked for it, posted on November 7, 2002 at 07:16:30
hancock


 
Wayne,

I'm trying to be civil here, but you're getting way out of line calling me a liar. I have never lied about ANYTHING here. It is very obvious that you know squat about DSP. One more case of you pretending to be an expert in areas where you clearly are not. It is immediately obvious to anyone who does know DSP that you are talking through your arse. I GUARANTEE that there is no one here, who does know DSP, who is going to come to your defense and say I'm wrong---GUARANTEE!!!!! The only ones you're fooling here are the ones who don't know enough about the subject to see through your BS. That's pretty pathetic Wayne. Seems to me like you'd be better off spending your time studying these subjects rather than charading on the net.

...and I'm still waiting for you to answer the question: "what are the limitations that would make the filtering I say I am doing impossible?" I'm very curious to know how something I consider to be trivial could be impossible.

...and when did I ever lie about O'Toole's paper? I attended O'Toole's workshop at the Audio Engineering Society Convention where he presented that paper. How would you know if I was there or not to call me a liar?

I have to admit you caught me trying to get the plans for your speakers. Why won't you let me see them? What are you hiding?

If your next post contains more nonsense, it will be ignored regardless of what distortions it contains. I'm sure you will come up with some good ones and once again avoid answering any questions.

John

 

Re: Linkwitz views on the subject, posted on November 7, 2002 at 12:01:50
tomservo
Manufacturer

Posts: 8149
Joined: July 4, 2002
Hello Tom!

You wrote:

[about the the views of Linkwitz on the subject of speaker phase performance]

>> A good write up as far as it goes, his first words on the subject
>> should not give you too much comfort.
>>
>> "Few studies have been made about the audibility of linear
>> distortion of the time waveform for an acoustic signal."

>> Actually, Tom, I am quite comfortable with Linkwitz's views; That's why I posted the link in the first place. His views are
>> very much like my own, and in fact, much of the audio community is like minded.

I'm sorry - I took your posting of his write-up as if it were supposed to be something supporting your argument to the effect that "all
this concern over time / phase is irrelevant as it is not audible".
Traditionally, that has been part of the tack you have taken for your arguments that the Unity horn can't work "because it can't
work, or if it did you couldn't hear it".

>> It is you that has made a career out of making something distinctly different, in an attempt to say that the difference is
>> inherently "better."

You're nearly right: it is a career made of making things for people (or under industrial, NASA or gov't contract) that usually did
what they were looking for - things that often others had said "couldn't be done." Some of this was in acoustic sources, some not, but
you're right too: most of my solutions are not at all the "text book" approach, but so what? They work.
As for speakers, in the industrial and scientific areas we sell to, it is assumed that the products will be measured many times in their
life. Aside from the acoustically weird stuff, it is living up to the specifications that we are known best for. Don't believe it? Just
ask around - go to AES or ASA conventions and ask around. It was that track record that brought us contracts to come up with and build the
goofy things like the sonic boom simulators and sources to trigger avalanches etc.
You have brought up the 3 Hz thing as if it were irrelevant, yet it was in fact audio; one can easily hear 3 Hz at 130 dB, and such VLF sounds
are a part of everyday life. If one were limited by what could be done "off the shelf," then those frequencies are un-important (a
marketing word for too difficult); but if you are a physiologist studying the human arousal system, you MUST reproduce those
frequencies to see what happens in a person when an impact or explosion / sonic boom happens.
On the other hand, if one has a home theater system, it may also be desirable to have extended frequency response for the same
reasons. I don't get your argument - are you saying that because I did that VLF stuff, it precludes me from designing a full range
speaker??

>> The statement later on "The phase of a speaker's frequency
>> response has to change linearly with frequency in order to
>> preserve waveform fidelity. This is equivalent to saying,
>> the group delay has to be constant (f)"

That's right; it's exactly what I've been saying. It also explains why the Unity cannot possibly be time-linear over the audio
spectrum. You have even admitted that it shifts at the frequency extremes, but you often get back up on your time-alignment
soapbox and fail to mention that point.

I think it is you who have not been listening.
A constant group delay is the same as a constant physical distance or time delay (a constant delay which Heyser removes to
reveal acoustic phase).
For a loudspeaker to have a constant delay vs. frequency, it must have zero (or 180 if flipped) degrees of acoustic phase.
Anything but that, and the phase shift indicates a different position (or delay) at each frequency.
So when one says "constant group delay", once that constant delay is removed, the only condition which preserves waveshape is
when there is no acoustic phase shift relative to the input signal (i.e., zero degrees acoustic phase).
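
The chain of reasoning above (phase linear in frequency ⇔ constant group delay ⇔ pure time shift ⇔ preserved waveshape) can be sketched numerically. The following is my own illustration, not anything from the thread; the signal and phase curves are made up purely for the demonstration:

```python
# Illustration (not from the thread): a phase shift that is linear in
# frequency is a pure time delay and preserves waveshape; any other
# phase-vs-frequency curve changes the waveshape.
import numpy as np

fs = 48000                                  # sample rate, Hz
t = np.arange(960) / fs                     # 20 ms of signal
# Two harmonically related tones give the signal a recognizable waveshape.
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

def apply_phase(signal, phase_fn):
    """Apply a frequency-dependent phase shift phase_fn(f), in radians."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return np.fft.irfft(spectrum * np.exp(-1j * phase_fn(freqs)), len(signal))

delay = 48 / fs  # 1 ms: phase linear in f, i.e. constant group delay
linear = apply_phase(x, lambda f: 2 * np.pi * f * delay)
# A non-linear phase curve gives each frequency a different effective delay.
warped = apply_phase(x, lambda f: np.pi * np.sqrt(f))

shift = int(round(delay * fs))
# Once the constant delay is removed, the linear-phase copy matches the input.
print(np.allclose(linear[shift:], x[:-shift], atol=1e-6))   # True
print(np.allclose(warped, x, atol=1e-2))                    # False
```

Removing the constant `shift` before comparing is the same operation described above, where Heyser removes the fixed delay before looking at acoustic phase.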
The last revision of the Unity was around zero degrees acoustic phase from about 250 Hz to about 4 kHz, and both by
measurement and by Siegfried's description, it DOES preserve the wave shape of the input signal inside either extreme.
Yes, at either extreme it does deviate from what I am shooting for (zero degrees), and at 20 kHz it is all of 5/8 inch in acoustic position.
At the low end, by the time one reaches about 80 Hz, it is acting like any other direct radiator (at ~90 degrees).
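
For concreteness, the 5/8-inch figure can be converted into time and phase at 20 kHz. This is my own back-of-envelope arithmetic, assuming a speed of sound of roughly 13,500 inches per second (~343 m/s):

```python
# Back-of-envelope conversion of the quoted 5/8-inch acoustic offset
# into delay and phase at 20 kHz (speed of sound value is my assumption).
c = 13500.0                    # speed of sound, inches per second (~343 m/s)
offset = 5 / 8                 # acoustic position error at 20 kHz, inches
delay_s = offset / c           # ~46 microseconds of delay
period_20k = 1 / 20000         # a 20 kHz cycle is 50 microseconds
phase_deg = delay_s / period_20k * 360

print(round(delay_s * 1e6, 1))   # ~46.3 us
print(round(phase_deg))          # ~333 degrees at 20 kHz
```

So a 5/8-inch offset is almost a full cycle at 20 kHz, while at lower frequencies the same physical offset amounts to far fewer degrees.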

You are the one with the soapbox proclaiming the Unity doesn't work. I ask you again: what part of this isn't as described?

>> Is correct however is from a different (less clear) standpoint
>> than I have been using.

Your arrogance is overwhelming at times. Your spin does not make the situation any clearer, and others describe the
apparent movement of loudspeakers more succinctly, because they are not trying to promote a particular time-alignment
scheme. But what I am most uncomfortable with is the obsessive extremes to which you and your associates will go to
further an argument, and the rude and condescending tone you take with people who simply don't agree with you.

Wayne, I have really tried hard to be patient with you and explain things in great detail, because you wanted to know
how it worked and because you were going around saying it doesn't work, can't work, etc.
Yes, I don't agree with you; you say the Unity doesn't work, and I have tried to explain to you many times why it works and
measures like it does. Somehow my trying to explain where you don't get it becomes "overwhelming arrogance"; I ask you to
explain what is going on (in your opinion) that isn't indicated by the measurements, and you change the subject.
Again, what part of the Unity does not work like I have described?

I don't think you can find an example of me being rude or condescending anywhere on the internet, other than out of frustration dealing with your proclamations that the Unity doesn't work.
To the extent I have been, well, I should have been more patient, I guess.
It is frustrating; words are the ladder on which one can climb to see a picture in the mind's eye. Why I have not been able to
convey the operation of it to you is a mystery, like trying to find out what that "something" is that you see in a measurement,
that eludes you and that you just don't want to quit chasing.
I probably should quit too, because if you don't get it by now, it's because you don't want to.
Cheers,

Tom

 

Re: Linkwitz views on the subject, posted on November 7, 2002 at 17:31:21
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
Hello again Tom!

You wrote:

>> I'm sorry, I took your posting of his write-up as if it were
>> supposed to be something supporting your argument to the effect
>> that "all this concern over time / phase is irrelevant as it is
>> not audible".

My view about phase is that it is low on the list of priorities unless it causes a frequency anomaly or an audible echo. That's been my stated opinion since the first time I wrote about loudspeakers in the 1979 "Pi Alignment Theory" paper.

I have also said that you cannot make a loudspeaker system time accurate with baffle offset and analog filter crossovers.

>> The last revision of the Unity was around zero degrees acoustic
>> phase from about 250 Hz to about 4 KHz and both by measurement
>> and by Sigfrieds description, DOES preserve the wave shape of
>> the input signal inside either extreme.

That's four octaves of ten.

So to call this a time-aligned loudspeaker system is a bit of a stretch, isn't it? I mean, less than half the audible bandwidth is even "around" phase-linear. And really, the same could be said of other large format horns, so one could boast similar performance from most systems, having "around zero acoustic phase" through the passband of each subsystem's horn.
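
The octave counting behind "four octaves of ten" is just base-2 logarithms of frequency ratios. A quick check of my own, using the figures quoted in the post:

```python
# Octave spans are base-2 logarithms of frequency ratios.
import math

unity_band = math.log2(4000 / 250)    # the quoted ~250 Hz to ~4 kHz range
audio_band = math.log2(20000 / 20)    # the conventional 20 Hz - 20 kHz band

print(round(unity_band))     # 4 octaves
print(round(audio_band, 2))  # 9.97, i.e. about ten octaves
```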

>> You have brought up the 3 Hz thing as if irrelevant yet it was in
>> fact audio, one can easily hear 3 Hz at 130 dB and such vlf sounds
>> are a part of everyday life.

I have brought up the 3 Hz thing to demonstrate how willing you and your associate are to make wildly exaggerated claims. One can develop 3 Hz easily enough by putting actuators on the floor, as is done on a flight simulator. But that isn't the point.

The point is that you and your associates try to tie this kind of performance to your standard product offering, and that's an example of your style of spin. And the worst part is the extremes to which you guys will go in order to try to make your claims, which is what I really object to the most.

Wayne Parham

 

Re: You asked for it, posted on November 7, 2002 at 18:16:48
Wayne Parham
Manufacturer

Posts: 5564
Joined: March 11, 2001
John -

Your complaints fall upon deaf ears here. You have been consistently rude and have made it your own personal crusade to call me a fraud, yet my design philosophies about phase and speaker placement are shared by the majority. So for you to get excited that I would call you a liar is underwhelming.

I have listed several links that prove my case. You are a liar.

How many digital circuit designs have you done, John? What exactly do you know about digital engineering or circuit layout or anything of the sort? What makes you believe your understanding is superior, John? You go on and on, but I'm confident that your experience is limited to PC software and home theater DSP. I'm absolutely certain that you've never designed a single circuit, and don't even know what signals are required of a DAC or ADC, not a single device.

In contrast, I speak from experience, and quite a bit of it. I have dozens of complex digital designs under my belt, and literally hundreds of printed circuit boards of my designs in inventory. Thousands of them have been shipped, and my products have been very successful for some of the nation's top companies. These products include processors, communications systems and industrial sense and control equipment, which necessarily includes analog sensors and digital processing. So unless you've designed and built the stuff, John, you aren't as able to discuss their limitations as I am. Said another way, if you don't have the experience and yet you talk as though you do, then that makes you a fraud.

And as for plans for my speakers, they were sent to you, and there is quite a lot of material about my designs available for download, which doesn't require me to send it. My designs are open, which is more than I can say for the Unity, whose crossover design seems to be "shrouded in secrecy." But the fact remains that the Lambda Unity had a huge flaw, which indicates that it was made by someone who didn't realize that the crossover used would cause a large frequency anomaly. But you still bought it.

Perhaps that is why you are so defensive.

Wayne

 
