In Reply to: regarding the dAck! posted by tommytube on August 28, 2003 at 07:42:12:
Admittedly I don't know that much about it, but apparently it just uses the CS8414 input receiver without any jitter-reduction circuitry, so it should be just as sensitive to the transport and its associated cable as other DACs, if not more so. Is there something else being done, or is it claimed to be less sensitive because it's a non-oversampling design? Just curious. Or has that just been your personal experience when comparing it to other DACs?
Hi Wavy Davy,
The reason the dAck! is less sensitive to jitter is that it operates at such a low bit rate.
To avoid bit errors, the largest acceptable timing error is something around 240 ps for 16-bit/44.1 kHz stereo, whereas with today's crazy up/oversamplers it's always smaller than around a tenth of that, down to a few picoseconds in some cases. This is not so problematic in fast digital systems like computers, because clock and signal are separate and there is a lot of store-and-forward going on. But in digital audio you lose a factor of two to biphase-mark encoding, and you lose it where it counts the most: the external digital cabling. The crux of the issue is that you run into the regime where the slew rate of the driver stage limits straightforward interpretation of the signal transitions, and that is exactly where the signal is most prone to external noise.
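For a rough sense of where a number like 240 ps could come from, here is a back-of-the-envelope sketch (my own illustration, not necessarily the formula the figure actually came from): bound the jitter so that the sampling error on a full-scale sine at the top of the audio band stays under 1 LSB.

```python
import math

def jitter_budget_ps(bits, f_max_hz):
    """Largest clock jitter (in ps) that keeps the sampling error of a
    full-scale N-bit sine below 1 LSB.  Peak slew of such a sine is
    2*pi*f * 2^(N-1) LSB/s, so the bound is 1 / (2*pi*f * 2^(N-1))."""
    peak_slew_lsb_per_s = 2 * math.pi * f_max_hz * 2 ** (bits - 1)
    return 1e12 / peak_slew_lsb_per_s

print(jitter_budget_ps(16, 20_000))  # ~243 ps for 16-bit audio at 20 kHz
print(jitter_budget_ps(24, 20_000))  # under 1 ps -- the "few picoseconds" regime
```

With these assumptions the 16-bit budget lands right around 240 ps, and pushing the effective resolution or bandwidth up shrinks it roughly in proportion.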
This is where things like reclocking come in, but it turns out to be much easier from a system-wide perspective to just reduce clock rate everywhere.
In general, you want to minimize this ratio: (jitter amplitude) / (shortest sample duration in the system). It's not that the dAck! has some sort of jitter-reducing mechanism. It's just able to tolerate larger amounts of jitter without compromising the musical performance.
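That ratio is easy to put numbers on. A minimal sketch, assuming an illustrative 200 ps of RMS jitter on the recovered clock (the figure is made up for the comparison, not measured):

```python
jitter_s = 200e-12  # assumed RMS jitter on the recovered clock (illustrative)

for name, rate_hz in [("non-OS, 44.1 kHz", 44_100), ("8x oversampled", 8 * 44_100)]:
    sample_s = 1.0 / rate_hz          # shortest sample duration in the system
    ratio = jitter_s / sample_s       # the figure of merit to minimize
    print(f"{name}: sample = {sample_s * 1e6:.2f} us, jitter/sample = {ratio:.1e}")
```

Same jitter, same transport, but the 8x system's ratio is eight times worse simply because its shortest sample duration is eight times shorter.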
To begin, I think you would agree that you're generating the same amount of jitter when recovering the clock as other DACs that use the CS8414, although you or any of the other DAC designers could argue that their particular circuit layout, power supply and/or other elements of the design lead to slightly less jitter on the recovered clock. But most of those items are, in my experience, completely swamped by the huge amounts of jitter on the clock recovered by the CS8414, due mainly to bandwidth restrictions in the interface and improper termination causing irregular but often data-correlated rise and fall times on the incoming data stream from which the clock is recovered. I've no idea where your 240 ps susceptibility figure comes from, since even the 256 x fs master clock on the CS8414, running at approx. 11.2896 MHz (and which I assume you don't use for anything in your DAC), has a period of roughly 88,000 ps, doesn't it? A non-oversampling design would have a bit clock period 8 times as long! You can definitely hear the levels of jitter you're speaking of, but not because they are causing bit errors.
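The clock arithmetic above is easy to check; a quick sketch (assuming the factor of 8 comes from a 32 x fs bit clock, i.e. 16 bits x 2 channels per sample period with no padding):

```python
fs = 44_100
mclk = 256 * fs   # 11.2896 MHz master clock on the CS8414
bclk = 32 * fs    # 16 bits x 2 channels per sample period, no padding (assumed)

print(f"master clock period: {1e12 / mclk:,.0f} ps")  # ~88,577 ps, i.e. approx. 88,000 ps
print(f"bit clock period:    {1e12 / bclk:,.0f} ps")  # 8x the master clock period
```

Either period is hundreds to thousands of times larger than any plausible jitter figure, which is the point being made: this jitter is nowhere near causing bit errors.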
I would agree that digital filters can potentially introduce a lot of extra jitter into a system, but that jitter wouldn't be transport-dependent, although it may still be data-correlated, as much of the worst interface jitter is.
I just don't see how one can claim that a non-oversampling design will have less transport-dependent jitter on the bit clock at the D/A converter. But I guess from your post you aren't claiming that; instead you're saying that because the clock rate is lower, the effect will be less. I'm not sure I buy that, since either clock would have the same jitter spectrum from the transport.
> you're saying that because the clock rate is lower, the effect will be less. I'm not sure I buy that, since either clock would have the same jitter spectrum from the transport.
Correct, the jitter spectrum given the same transport will be identical. However, if you took a jittery signal and fed it into an oversampler, the oversampled reconstruction would be distorted in a very different way from the non-oversampled one. With the latter, the error is distributed over a larger period and does not manifest as a distorted waveform, just an early or slightly delayed bit change (the waveform is distorted to hell already because of the 1x OS, but systematically so). My claim is that this is less problematic than what goes on inside an interpolation filter, where the error is distributed into each interpolated level between samples.
And remember, jitter is described by a statistical spectrum. If the jitter amplitude is correlated to, say, the effect of an external time-varying AC field (a non-Gaussian probability density function), you get problems during reconstruction that you could never predict just by looking at the jitter spectrum. The waveform distortion due to jitter in that case is coupled to the AC field, a rather complex interaction.
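A toy simulation of that coupling (every number here is illustrative and wildly exaggerated for visibility, not representative of any real DAC): Gaussian jitter and mains-coupled 60 Hz sinusoidal jitter with the *same* RMS corrupt a 1 kHz tone in qualitatively different ways, broadband noise in one case versus discrete sidebands at 1000 +/- 60 Hz in the other.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_sig, n = 44_100, 1_000.0, 1 << 14
t = np.arange(n) / fs
jit_rms = 5e-9  # 5 ns RMS jitter -- far worse than reality, for visibility

gauss = rng.normal(0.0, jit_rms, n)                        # Gaussian PDF
mains = jit_rms * np.sqrt(2) * np.sin(2 * np.pi * 60 * t)  # same RMS, AC-coupled

for name, j in [("gaussian", gauss), ("60 Hz sinusoidal", mains)]:
    # Error between the jittered-clock samples and the ideal samples
    err = np.sin(2 * np.pi * f_sig * (t + j)) - np.sin(2 * np.pi * f_sig * t)
    spec = np.abs(np.fft.rfft(err * np.hanning(n)))
    peak_hz = spec.argmax() * fs / n
    print(f"{name}: error RMS {err.std():.2e}, spectral peak near {peak_hz:.0f} Hz")
```

Both records have identical RMS jitter, yet the sinusoidal case concentrates its error into sidebands around the tone while the Gaussian case spreads it out, which is exactly why the RMS number alone doesn't predict what you hear.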
Of course there are arguments that since non-OS image components are inaudible, the waveform distortion should be inaudible too. I am not saying I understand how jitter-rejection behavior correlates with why a non-OS design sounds so much more natural; jitter is not well understood to begin with. (The better sound is certainly a combination of effects from many factors, including the low-bit-rate conversion, the ZOH reconstruction filter, implementation, post-filtering, etc. How they interact to yield that magic I do not know.) But in practice the dAck! has definitely proven less responsive to the things that seem to make oversampling converters perform better: those jitterbug things, Bybee filters, etc. Similarly, using a crappy transport with the dAck! vs. a pretty good transport does not yield the huge improvements it would with a more conventional converter.
Regarding the 240 ps figure, it is related to the probability density function of the jitter. The maximum error is not as straightforward as calculating the error required for some fractional bit error on a single randomly chosen sample; the RMS value can be the same for very different distributions. I calculated the ~240 ps value long ago from a formula in my transmission-line textbook. I'm sorry, but I honestly can't remember the form for maximum jitter I used off the top of my head. I'll see if I can find the reference for you.
In any event, it's not straightforward to determine what this value is, because it depends on the PDF, which changes with external conditions. The RMS value doesn't tell you the whole story.
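A quick numeric illustration of that last point (synthetic data, unit RMS chosen for convenience): three jitter records with identical RMS but different PDFs have very different peak excursions, and it's the peaks, not the RMS, that produce the occasional gross timing error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

records = {
    "gaussian":   rng.normal(0.0, 1.0, n),                     # unbounded tails
    "uniform":    rng.uniform(-np.sqrt(3), np.sqrt(3), n),     # hard-bounded
    "sinusoidal": np.sqrt(2) * np.sin(np.linspace(0, 200 * np.pi, n)),
}

for name, x in records.items():  # all three are constructed to have unit RMS
    print(f"{name:10s}: RMS = {x.std():.3f}, peak = {np.abs(x).max():.2f}")
```

Same RMS in every row, yet the Gaussian record's worst-case excursion is roughly three times the sinusoidal one's; a spec sheet quoting only RMS jitter hides exactly this difference.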
Here are some figures I turned up in a quick internet search showing a jitter probability function and how it gives rise to bit error. The figure isn't quite right for the non-OS case, since there the ideal sampling instant is left-edge-justified.
"Non-oversampling units will be quite insensitive to timing errors in the signal. Digital noise rejection with the dAck! is about the best you will find anywhere, which allows users to spend less on transports and expensive cables (and filters) while maintaining the same level of performance, and use the excess to buy more music!"
I've only tried two transports personally. The first was my Creek CD43 (which listed for $1K a few years back) and a cheap Sony NV315 DVD player ($100).
While using the Creek as a transport does sound better than the lowly Sony, the Sony pairs with the dAck! well enough that the sound is still very nice. You still get the benefit of very low-level background noise, etc. The Creek just lets the dAck! do its thing with a heavier dose of magic.
I'd personally like to see more data on a real bang-for-the-buck transport that would smoke the Creek and elevate the dAck! even further, but wouldn't cost an arm and a leg.