Audio Asylum Thread Printer
In Reply to: Compare with today's high technology... posted by tony montana on June 18, 2003 at 16:19:21:
This is an oft-cited reason that audio cables should not be difficult to deal with: there are much higher frequencies in other applications. Yet few ever stop to ponder the primary difference between these super-high frequencies and audio signals.
In most cases, these super-high frequencies are not being used directly; rather, they represent a carrier wave or a data transmission protocol rather than a direct signal. The transmission system is deliberately made as robust as possible, so that interference will not damage the data transfer. This means that non-linearities in the cables or conducting medium will have very little effect on simple data transfer, whether it is an FM audio carrier or a high-speed digital link.
Things such as television signals have a very limited dynamic range: the difference between 100 IRE units and 7.5 IRE units is only about 22.5 dB. You physically cannot see below approximately 30-40 dB down from max white on a CRT, even if the TV signal had more dynamic range.
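The 22.5 dB figure follows directly from the two IRE levels; a quick check of the arithmetic:

```python
import math

# Ratio between 100 IRE (reference white) and 7.5 IRE (setup/black), in dB
ratio_db = 20 * math.log10(100 / 7.5)
print(round(ratio_db, 1))  # 22.5
```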
Distort the TV signal while it is in RF form, and it is hard to see any effect on the visible picture; the visible information just isn't encoded THAT high up.
Similar issues are present for other very high-frequency signals, digital or otherwise; seldom is the signal information directly encoded at that frequency.
With audio, the signal information IS encoded in the direct frequencies involved; the signal is literally an analog of the sound wave that was captured. There is no carrier wave to isolate the cable distortions, and there is no digital encoding to wash out minor cable signal distortions and aberrations.
In point of fact, the dynamic range of audio signals is in excess of 90 dB. This means that signal aberrations that are only 0.00003 or less of full signal are potentially going to be noticed. Some have questioned whether this very low level of signal distortion is audible in the real world.
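The 0.00003 figure is just the 90 dB dynamic range converted to an amplitude ratio:

```python
import math

# An aberration 90 dB below full scale, expressed as an amplitude ratio
ratio = 10 ** (-90 / 20)
print(f"{ratio:.1e}")  # 3.2e-05, i.e. roughly 0.00003 of full signal
```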
I have found an excellent example of a known audio phenomenon that exists at levels below -90 dB, is clearly audible, and has been heard by many professionals, as well as by audiophiles in their homes.
I am talking about the various digital audio dither algorithms in use, and the fact that you can hear differences between the different types. Sony's SBM (Super Bit Mapping) sounds different than Apogee's UV-22 process, which sounds different than other record companies' dither algorithms. Studio recording professionals can tell you this is so, and folks who own CD players with user-selectable dither algorithms have heard the differences too.
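A minimal sketch of why dither operates at these levels. SBM and UV-22 are proprietary noise-shaping processes, so this uses plain TPDF (triangular) dither as a stand-in; the sample rate, tone frequency, and quarter-LSB amplitude are illustrative choices, not anything from the original post. The point is that a tone below one LSB vanishes entirely under plain 16-bit quantization, but survives (buried in noise) once dither is added:

```python
import math
import random

random.seed(0)  # reproducible illustration

FS = 48_000                 # sample rate (illustrative)
STEP = 2 / (1 << 16)        # one LSB for 16-bit audio spanning -1..1
AMP = STEP / 4              # test tone a quarter of an LSB: below -96 dBFS

def quantize(x, step):
    return step * round(x / step)

plain, dithered = [], []
for n in range(FS):
    s = AMP * math.sin(2 * math.pi * 1000 * n / FS)
    plain.append(quantize(s, STEP))
    # TPDF dither: difference of two uniform randoms, +/-1 LSB peak
    d = (random.random() - random.random()) * STEP
    dithered.append(quantize(s + d, STEP))

# `plain` is all zeros: the sub-LSB tone is simply erased.
# `dithered` still correlates with the original sine wave.
```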
So audio has the requirement that signal linearity must be maintained down to VERY low levels, much lower levels than for almost ANY other data transmission paradigm.
This is the fundamental difference between audio and these very high-frequency signals. Sheer frequency is not the only factor, and once again, your lack of experience with these matters is all too apparent.
Follow Ups:
"This is the fundamental difference between audio and these very high frequency signals. Sheer frequency is not the only factor" - JR
You are partially incorrect. (Here, I should say your lack of experience is all too apparent, but that of course would be an insult, which the moderator would have to deal with, as that violates forum rules.)
The main fundamental difference for non-modulated signals IS the sheer frequency.
Audio is significant not only for the dynamic range, allowed distortion levels, and noise-floor issues, as you correctly pointed out... It is one of the few applications which span three decades of frequency, and that span covers the threshold where skin effect takes hold. Higher-frequency applications do not have to worry about skin effect other than for resistance, speaking of the conductor only. The difference between gigahertz and terahertz will basically be the surface finish of the wire. (No dielectric talk here.)
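The skin-effect point can be made concrete with the standard skin-depth formula for copper. The resistivity value is the usual room-temperature figure; the three frequencies bracket the audio band and one RF point. At 20 Hz the depth is tens of millimeters (the whole conductor carries current); at 20 kHz it is comparable to a typical wire radius, which is exactly the audio-band transition described above; at 1 GHz it is microns, so only the surface finish matters:

```python
import math

RHO_CU = 1.68e-8            # copper resistivity, ohm*m (room temperature)
MU0 = 4 * math.pi * 1e-7    # permeability of free space

def skin_depth_mm(freq_hz):
    """Depth at which current density falls to 1/e, for copper."""
    return 1e3 * math.sqrt(RHO_CU / (math.pi * freq_hz * MU0))

for f in (20, 20_000, 1e9):
    # roughly 14.6 mm, 0.46 mm, and 0.002 mm respectively
    print(f"{f:>12} Hz: {skin_depth_mm(f):.4f} mm")
```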
"and once again, your lack of experience with these matters is all too apparent" - JR
Why don't you stop with the insults???
"In most cases, these super high frequencies are not being used directly, rather they represent a carrier wave, or a data transmission protocol, rather than a direct signal."
Ah, 'in most cases', so you're just going to ignore the others, I suppose. Most high-density RF data signals (like 16-PSK) depend on accurately decoding the phase shift of a signal; small distortions in phase cause data loss and force the system to resend the information, slowing throughput. Although you might not "see" it as lost data, it doesn't change the fact that there was destructive interference that caused the system not to accurately capture the data on the first try.
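A small sketch of how tight those phase margins are in 16-PSK. The symbol index and the 12-degree error are made-up illustrative values; the point is that symbols sit 22.5 degrees apart, so any phase distortion past the 11.25-degree decision boundary flips the decoded symbol:

```python
import cmath
import math

M = 16  # 16-PSK: one of 16 phases, spaced 360/16 = 22.5 degrees apart

def psk_symbol(k):
    """Ideal constellation point for symbol k."""
    return cmath.exp(2j * math.pi * k / M)

def psk_decode(z):
    """Nearest-phase decision; boundaries lie 11.25 degrees from each symbol."""
    return round(cmath.phase(z) * M / (2 * math.pi)) % M

sent = 3
clean = psk_symbol(sent)
skewed = clean * cmath.exp(1j * math.radians(12))  # 12-degree phase distortion

print(psk_decode(clean), psk_decode(skewed))  # 3 4  -- the symbol is misread
```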
"The transmission system is deliberately made as robust as possible, so that interference will not damage the data transfer."
Hmmm, so audio engineers haven't done that? Seems to me that a 50-ohm output impedance driving an input of 10 to 50 thousand ohms is pretty robust.
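The robustness of that bridging arrangement is easy to quantify: the source and load form a voltage divider, and with the impedance ratios above the level lost is a small fraction of a dB (the 50-ohm source figure is taken from the post; the divider math is standard):

```python
import math

def bridging_loss_db(z_out, z_in):
    """Level lost across the source/load voltage divider."""
    return 20 * math.log10(z_in / (z_in + z_out))

for z_in in (10_000, 50_000):
    # about -0.04 dB and -0.009 dB: essentially lossless
    print(z_in, round(bridging_loss_db(50, z_in), 3))
```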
However, sending data offers the luxury of being able to retry. To the design engineer, though, this is a back-up plan; the primary goal is to ensure the data gets there correctly on the first try if system throughput specifications are going to be met.
"Distort the TV signal while it is in RF form, and it is hard to see any effect on the visible signal, it just isn't THAT high up."
I guess you've never seen ignition noise, "snow", or "ghosting". What about selective fading, where parts of the carrier are distorted causing color shifts?
"I am talking about the various digital audio dither algorithms in use, and the fact that you can hear differences between the different types. Sony's SBM (Super Bit Mapping) sounds different than Apogee's UV-22 process, which sounds different than other record companies' dither algorithms. Studio recording professionals can tell you this is so, and folks who own CD players with user-selectable dither algorithms have heard the differences too."
This has never been verified under rigorous test conditions, has it? The other problem with what you cite is whether the application was even implemented properly in the first place.
"So audio has the requirement that signal linearity must be maintained down to VERY low levels"
While I agree with you here, you are simply overstating the apparent robustness of other mediums and overlooking the significant engineering challenges that must be overcome in order to have a successful system.
Let's take a couple hunks of hardline and put a huge kink in each. Through one we'll run audio; through the other, a complex modulated RF signal. Which do you think will have the greater opportunity to experience significant signal distortion?