I've seen a few threads here discussing the "sound" of various Toslink and coaxial digital audio cables and was wondering how cables that transmit 0's and 1's can affect the resulting sound. My assumption was that, with a digital source, the digital audio data read from the source material would be sent over the cable as an unaltered stream of 0's and 1's, and that the conversion to analog would happen at the receiving end, which would mean the cable has no effect. This would also mean that the quality of the source component has no effect in this case. Please correct me if I'm wrong.
How you cue up a track to play (which sequence of buttons you press) also will have an effect on sound reproduction.
There's one buffer memory chip too many somewhere in our players....
Due to a less than stellar method of interfacing digital audio components, jitter can become a problem; see the link below for the details of why.
Of course the digital cable does not send 0's or 1's from one component to another. It sends the equivalent of a waveform just like analogue cables. The receiving equipment interprets this waveform at predetermined time intervals as representing a 0 or a 1. The waveform is open to being distorted by various mechanisms which, in turn, affect the resultant sound.
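To make that concrete, here is a minimal sketch (in Python, with invented noise levels and bit-cell sizes, not modeled on any real receiver chip) of a receiver that samples a noisy waveform at the center of each bit cell and thresholds it:

```python
import numpy as np

# Illustrative only: an idealized receiver sampling a noisy "digital" waveform
# at the center of each bit cell and comparing it to a fixed threshold.
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=32)        # what the transport meant to send

samples_per_bit = 20
ideal = np.repeat(bits.astype(float), samples_per_bit)  # clean square wave
noisy = ideal + rng.normal(0.0, 0.15, ideal.size)       # cable noise/distortion

# Decide 0 or 1 at each bit center, thresholding at half amplitude.
centers = np.arange(len(bits)) * samples_per_bit + samples_per_bit // 2
recovered = (noisy[centers] > 0.5).astype(int)

print("bit errors:", int(np.count_nonzero(recovered != bits)))  # usually 0 at this SNR
```

As long as the noise stays comfortably below half the signal swing at the sampling instants, the recovered bits are exact; what the cable can still corrupt is *when* the edges are detected, which is the jitter issue discussed below.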
so there is a good chance those won't fall out.
On a more serious note, a digital signal is usually represented by some form of square wave, meaning its bandwidth is much larger than that of the analog signal it encodes. So the requirements on the 'digital cable' are more stringent (in terms of the various kinds of signal distortion) than on a cable carrying an analog signal.
Yes (on the serious note), but in practice I understand that the transmission is in effect more akin to a sinusoidal waveform, with the receiver discriminating only the central portion. This is done to increase the robustness of the system, so that ringing and overshoot do not affect things. Of course the bandwidth that the cable is required to handle is still an issue, as you point out.
Of course, if you take the Fourier transform of a square wave, you find that infinite bandwidth is required to reproduce it exactly, so we have to limit ourselves to the 'central portion', and *yet* you still need a very large bandwidth for that 'central portion' to keep the resulting signal close enough to a square wave to stay within some predefined tolerance. That, in general, is the major difference between transmitting an analog-encoded signal and a digitally encoded one. It all ends up as a bunch of sine waves, of course. The key difference is how much bandwidth you need.
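For reference, the textbook expansion makes this concrete: an ideal square wave of frequency $f$ contains only odd harmonics, falling off as $1/n$:

$$\mathrm{sq}(t) \;=\; \frac{4}{\pi}\sum_{n=1,3,5,\ldots}\frac{\sin(2\pi n f t)}{n}$$

Truncate the sum at some harmonic and the levels are still recoverable, but the edges slow down; slower edges are exactly what lets amplitude noise shift the apparent transition times at the receiver.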
Too bad understanding, explaining and correcting the situation via those same mechanisms isn't quite as simple : ) Sean
There are two issues. The first is digital distortion, which you correctly assess as being almost impossible. That is, transmitted ones and zeroes generally come through exactly right. You could use bread ties crimped together and the signal would probably come through unscathed. Pretty much not an issue. In fact, this is the clear advantage of digital: Instead of tiny nuances of signal strength carrying the detail of the signal, the bits that represent the signal slam all the way on or slam all the way off, making errors unusual.
The second issue is timing, as the other writers pointed out. Jitter sucks--it frequency modulates the audio signal. The answer would seem to me to be caching and reclocking at the DAC. This should make jitter irrelevant.
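A quick numerical sketch of that frequency-modulation point (all figures here are invented for illustration, not measurements of any real DAC): sample a 10 kHz tone with a perfect clock and with a jittered clock, and compare.

```python
import numpy as np

# Illustrative numbers only: clock jitter turns into amplitude error.
fs = 44_100.0         # sample rate, Hz
f = 10_000.0          # test tone, Hz
jitter_rms = 1e-9     # 1 ns RMS jitter (much worse than a decent local clock)

rng = np.random.default_rng(1)
t_ideal = np.arange(4096) / fs
t_jittered = t_ideal + rng.normal(0.0, jitter_rms, t_ideal.size)

clean = np.sin(2 * np.pi * f * t_ideal)
jittered = np.sin(2 * np.pi * f * t_jittered)

peak_err = np.max(np.abs(jittered - clean))
print(f"peak error: {peak_err:.2e} of full scale")
# One 16-bit LSB is 2 / 2**16 ~= 3.1e-5, so even 1 ns of jitter produces
# errors several LSBs deep at 10 kHz -- and the error tracks the signal's
# slope, which is why it behaves like modulation rather than benign noise.
```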
A slightly better answer is to grab existing handshaking technology used in hard disks and almost every other digital communication protocol. The transport would intentionally read faster than real time. The DAC and transport would carry on a constant conversation: One signal from the transport would mean "Ready to Receive Data" and another would mean "Not Ready to Receive Data." The transport's objective would be to keep the DAC's buffer full. Heck, CD players for cars do that now--it's how skip resistance works. The idea is to make the transport's clock independent of the DAC's clock.
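A toy sketch of that handshake, assuming nothing about any real transport's firmware (the buffer size is arbitrary):

```python
from collections import deque

BUFFER_LIMIT = 1024   # frames the DAC-side buffer can hold (arbitrary)
buffer = deque()

def dac_ready() -> bool:
    """The DAC asserts "Ready to Receive Data" while its buffer has room."""
    return len(buffer) < BUFFER_LIMIT

def transport_tick(disc_frames: deque) -> None:
    """The transport, reading faster than real time, pushes frames only
    while the DAC says it is ready -- this is the flow control."""
    while dac_ready() and disc_frames:
        buffer.append(disc_frames.popleft())

def dac_tick() -> None:
    """The DAC clocks out exactly one frame per tick of its own local
    clock; the cable's timing never touches the conversion instant."""
    if buffer:
        buffer.popleft()
```

With the buffer in between, the transport only has to be fast enough on average; the DAC's local crystal alone determines when each sample is converted.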
None of this is new. It's just how we should be doing it. And if we did it that way, premium digital cables would only be sold by the likes of Machina Dynamica.
I think a surprising number of audiophiles try to carry over analog principles into the digital domain, and a lot of it is just plain different.
And please note that I fully believe that the accuracy of the original sampling clock and of the clock used for reconstruction of the signal is critical to musicality. Also, the algorithm used for reconstruction of an analog signal from digital is still as much art as science.
Christine Tham posted on the Hi-Res asylum how she built her own player and double-buffered the read-out. You can read about it by doing a search of her name. Apparently it wasn't that hard to do (she claims, but maybe she's a genius), but it makes me wonder why something like it isn't more commonly implemented. What's a few hundred dollars compared to the price of many high-end CD players that strive for low jitter? They could have zero jitter.
Even if you triple-buffered the data, there will STILL be power supply (PS) transients, there will still be jitter at the DAC, and it will still have an effect on the sound.
I have no doubt that double-buffering could greatly reduce jitter, but only if done extremely well, and with PS limitations directly in mind.
Most professionals in the recording business strongly feel that 16 bits is not enough to keep us free from digital artifacts, and that at least 20 bits are necessary to avoid the obvious digital colorations.
Maintaining a full 16 bits of resolution at 20 kHz requires less than 20 ps of jitter. Nothing out there measures this low yet, nothing, not even close.
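For anyone who wants to check the order of magnitude, one common back-of-the-envelope criterion (not necessarily the one behind the 20 ps figure) bounds the error a timing offset $\Delta t$ causes on a full-scale 20 kHz sine by the sine's maximum slew rate, and requires it to stay under half an LSB:

$$2\pi f A \,\Delta t \;<\; \frac{A}{2^{16}} \quad\Longrightarrow\quad \Delta t \;<\; \frac{1}{2\pi \cdot 20\,\mathrm{kHz} \cdot 2^{16}} \;\approx\; 120\ \mathrm{ps}$$

Stricter assumptions (for example, targeting more than 16 bits, or treating the jitter noise as concentrated rather than spread across the audio band) push the allowance down toward the tens of picoseconds cited here.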
Hell, most facilities do not have the capability to measure this low!
We are a long way from unmeasurable (and IMO from inaudible).
Second paragraph should read "This should make TRANSPORT jitter irrelevant."
Third paragraph should read "One signal from the DAC (not transport)..."
And as you may surmise, the right data at the wrong time has a very adverse effect on the sound. The clock has to be extracted from the incoming serial stream using a highly jitter-prone phase-locked loop (PLL) circuit. And any bandwidth restriction or imperfect transmission-line termination on this interface will lead to data-correlated jitter, as the data pattern being transmitted will affect the rise and fall times of the bits, and hence add jitter to the recovered clock. Many clocking schemes have been developed to try to isolate the timing of this incoming clock from the actual clock running the D/A conversion, but most of them are not nearly effective enough.
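To make "the clock is carried by the data" concrete, here is a sketch of biphase-mark coding, the line code S/PDIF uses (simplified: real S/PDIF frames also carry preambles and status bits):

```python
def biphase_mark(bits, level=0):
    """Biphase-mark encode: every bit cell starts with a transition
    (that guaranteed edge is what the receiver's PLL locks onto),
    and a '1' adds a second transition in mid-cell."""
    out = []
    for bit in bits:
        level ^= 1        # cell-boundary transition: carries the clock
        out.append(level)
        if bit:
            level ^= 1    # mid-cell transition: encodes a '1'
        out.append(level)
    return out            # two half-cell levels per input bit

print(biphase_mark([1, 0, 1, 1, 0]))
```

Because 1's produce twice as many transitions as 0's, anything that slows or skews the edges (cable bandwidth limits, bad termination) shifts the detected transition times by amounts that depend on the data pattern, which is exactly the data-correlated jitter described above.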
Cables can alter the sound in the form of RFI transmission, ground noise, and jitter. I personally prefer Toslink simply because the RFI generated at the transport is isolated from the DAC.
For electrical cables, different shielding techniques affect RFI and ground noise. Low bandwidth or poor impedance matching can exacerbate jitter. If you go S/PDIF, I'd recommend using 75-ohm RF connectors, not RCA connectors, to minimize reflections.
For starters, impedance mismatch causes odd-harmonic ringing along the length of the cable. The materials used in the construction of the cable also contribute odd harmonics, depending on their damping properties. You are correct to say digital signal is nothing but 1's and 0's, but the odd harmonics cause signal smearing and ground noise.
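For what it's worth, the standard transmission-line figure for "how bad is the mismatch" is the reflection coefficient. Taking a 75-ohm S/PDIF cable into a connection that looks like, say, 50 ohms (an illustrative figure; ordinary RCA connectors are not controlled-impedance parts):

$$\Gamma \;=\; \frac{Z_L - Z_0}{Z_L + Z_0} \;=\; \frac{50 - 75}{50 + 75} \;=\; -0.2$$

so roughly 20% of each edge reflects back up the cable and returns a round trip later, landing on top of subsequent edges and shifting where the receiver detects its transitions.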
> You are correct to say digital signal is nothing but 1's and 0's, but
> the odd harmonics cause signal smearing and ground noise.
Are you saying the "odd harmonics" create new 1's and 0's that weren't there, or destroy ones that should be there?
I follow the jitter argument since it is important at what precise point the information in the 1's and 0's is returned to the analog state. But to create "ground noise" in a digital state means totally new info needs to be created. If digital were susceptible in this fashion, no computer program - which can't tolerate any errors - could be installed.
"Are you saying the "odd harmonics" create new 1's and 0's that weren't there or destroying ones that should be?"
No....harmonics create noise on the ground. This is why the pro industry prefers balanced connections (AES/EBU on the digital side) for both digital and analog signals.
You're still using an analog concept in a digital realm. Noise in the background of a digital cable doesn't mean anything as long as the receiving end of the system can differentiate whether a sent bit is a one or a zero. The analog signal-to-noise-ratio concept does not function the same way in the digital world. You can run over 300' of cheap twisted-pair CAT5 cable and have bit-perfect transmission at 100 Mbps to the other end in a computer system. (That's about 70 times faster than a CD bit stream needs. Plenty of headroom for error correction as needed.)
If noise does become a problem, that means you are either "creating" new ones and zeros that are being read at the receiving end or you are completely dropping existing ones.
Interesting that we can send 40 million lines of code in an XP network install without a problem but if the data contains music it all of a sudden becomes susceptible to all manner of old analog problems.
So the wonderful aspects of digital communication are unnecessary overhead put in place by nervous-Nellie engineers? You're pretty funny!
Sure, it works fine with computer networks: if there is a problem with the data, it is just requested again and again until it is correct. Digital audio doesn't work that way.
> if there is a problem with the data, it is just requested again and
> again until it is correct. Digital audio doesn't work that way.
Actually, audio does work that way. If there is an appropriate buffer at the receiver end (jeez, even car CD players include a buffer) then there is plenty of time to get a good signal to the buffer. That buffer can then parcel out the data to the DAC at just the pace required. For example, I can unplug the CAT5 cable on my inexpensive Squeezebox 3 and it'll continue playing perfectly fine for 8 to 10 seconds. In that time a 100 Mbps network can move about 125 megabytes, while 10 seconds of audio at the 1.4 Mbit/sec CD rate is under 2 megabytes.
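The arithmetic behind that, for anyone checking (the 8-to-10-second playout is the poster's observation; the rest follows from the published rates):

```python
# Back-of-the-envelope check of the buffering claim above.
cd_rate = 2 * 16 * 44_100      # CD audio bit rate: 1,411,200 bits/s
net_rate = 100e6               # Fast Ethernet: 100 Mbits/s
buffer_seconds = 10            # observed playout after pulling the cable

buffer_bits = cd_rate * buffer_seconds
print(f"buffer holds about {buffer_bits / 8 / 1e6:.2f} MB of audio")  # ~1.76 MB
print(f"refilling it takes {buffer_bits / net_rate * 1e3:.0f} ms")    # ~141 ms
print(f"network headroom: {net_rate / cd_rate:.0f}x real time")       # ~71x
```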
As I noted previously, jitter is a very legit issue, but there are very good ways for the equipment to work around that and the issue has been discussed at length here and a lot of other places. I have to question how well designed a piece of equipment is if it is sensitive to cable-induced jitter.
Toslink and S/PDIF are not network connections and do not buffer data. I agree that it would be easy to do. Many new home CD players are starting to spin the disc faster than 1x and buffer the data. I have not heard of any external DACs that buffer incoming data.
John Watkinson (author of the book "The Art of Digital Audio") said in the July/August 2002 edition of Resolution (UK audio magazine) "...On the other hand, if the DAC has not been properly engineered, changing the cable could affect the amount of jitter reaching the converter. Thus we finally have a practical use for exotic cables as DAC testers. If the use of an exotic cable makes a DAC sound better, then the DAC is not performing adequately and should be repaired or redesigned."
Have you ever experimented with different digital cabling between a transport and a DAC? There is definitely something "odd" going on here, and in many cases, that "oddness" is quite audible. My guess is that VSWR-based signal reflections cause timing-based problems, resulting in shifts in clocking. A lack of focus and an increase in smearing is the end result.
With the above in mind, my best results have been with digital cables that are resistively terminated at the load end. This technique of impedance matching is old school, but still quite cheap, easy and effective. While Stereophile ran a small article about this a few years back, and promised a follow-up article that never appeared, i'm not aware of any manufacturers currently using this technique in a production cable. That's too bad, as i've definitely seen the benefits of such an approach. I was using a similar method to what the Stereophile article mentioned several years before the article was ever published.
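For the curious, the arithmetic of a shunt load-end termination is simple. Assuming, purely for illustration, a receiver whose input looks like 110 ohms when the line wants to see 75 ohms, you pick a parallel resistor $R$ so the combination matches:

$$\frac{R \cdot 110}{R + 110} = 75 \;\Longrightarrow\; R = \frac{75 \cdot 110}{110 - 75} \approx 236\ \Omega$$

mounted right at the DAC's input jack. The specific impedances are hypothetical; the point is that one inexpensive resistor at the load end can absorb the reflections an unmatched input would otherwise send back up the cable.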
For that matter, i'm not aware of any transport or DAC manufacturer that offers some type of user-adjustable input or output impedance compensation network. While such an approach would be inexpensive to implement and could solve a lot of problems, it seems as if they are more interested in spending big money on cosmetic upgrades than in actually improving performance. I guess that they are simply catering to the market, as there is certainly a large portion of audiophilia that assumes that better looks equate to better audible (not necessarily better electrical) performance.
Outside of transmission-line and/or input/output impedance compensation, a DAC that re-clocks the incoming signal can also help quite a bit. While more expensive and complex than the very moderately priced impedance-based compensation networks previously mentioned, such an approach can take further steps towards preserving signal integrity and improving the aural performance of the system. Implementing all of these aspects of component design and integration into one's system simultaneously can make for a VERY enjoyable aural digital presentation while satisfying even the pickiest of electronic tinkerers. Sean