Audio Asylum Thread Printer: Get a view of an entire thread on one page
In Reply to: RE: Other than jitter and bitperfect? posted by Gevo on November 30, 2009 at 09:01:06
The USB receiver still works the same way to sync up to the USB data stream, and still has its own fixed and PLL-derived clocks running in the DAC, so it most likely propagates the same jitter signature as with a normal adaptive isochronous connection. The difference is that with async iso mode you can use a local (high-quality) clock to control the USB receiver codec's outputs to the DAC chip (clock, data, and enable). Jitter will still be on the signals, but it can be much reduced with good layout and isolation techniques. In a perfect setup the cable shouldn't matter, but that is hard to achieve in the real world, with all of the many clocks and data processing in a USB receiver, noisy power lines from the computer, and the resulting jitter noise added to all of the signals. It takes more than a clock to convert digital data to analog, so timing irregularities on any of the signals will be translated into jitter on the clock: they all share the same current return, and there will always be a voltage-inducing impedance in that return.
So the effects of the cable are the same, but there is the opportunity to reduce how much gets to the analog output with async iso mode.
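As a back-of-the-envelope sketch of that shared-return mechanism (all component values below are illustrative assumptions, not measurements of any particular DAC):

```python
# Back-of-the-envelope model: a switching-current step through the
# shared ground-return inductance induces a voltage (V = L * di/dt),
# which displaces the clock edge's threshold crossing in time
# (dt = V / slew_rate). All values below are illustrative assumptions.

L_RETURN = 5e-9      # shared return inductance, 5 nH (assumed)
DI = 20e-3           # 20 mA current step from data-line switching (assumed)
T_STEP = 1e-9        # 1 ns transition time (assumed)
CLOCK_SLEW = 1e9     # clock edge slew rate, 1 V/ns (assumed)

v_noise = L_RETURN * DI / T_STEP     # induced voltage on the shared return
timing_error = v_noise / CLOCK_SLEW  # resulting clock-edge displacement

print(f"induced noise: {v_noise * 1e3:.0f} mV")
print(f"timing error:  {timing_error * 1e12:.0f} ps")
```

With these assumed values the shared return turns a modest data-line current spike into a timing error of roughly 100 ps, which is large compared to the intrinsic jitter of a good oscillator.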
Follow Ups:
One consistent thing I have found is that powering a DAC, processor, or even an external HDD from USB adversely affects the sound.
Charles Hansen, in a post elsewhere, points to the charge-pump devices used in these circuits as the issue.
Even unplugging the USB mouse and keyboard on a tuned audio PC affects the sound, to a smaller degree.
I'd like to amplify some of these points and give some examples. Then I'll cover some of the differences between how these work with adaptive and asynchronous modes.
The assumption I see many people holding is that asynchronous mode somehow eliminates jitter (which can never happen), or at least completely rules out anything on the interface (cable) affecting the jitter in the DAC. I hope to give some reasons why this is not the case.
As Slider mentioned, the USB receiver is the same in async mode. Anything that can affect the jitter on the output signals in adaptive mode can affect them in async mode. The USB receiver puts the data in a FIFO and a clock clocks it out. The big difference is where that clock comes from: in adaptive mode it's a PLL frequency synthesizer; in async mode it's a crystal oscillator (or at least it should be). In both modes there are currents flowing through the ground and power traces of the chip which can cause jitter on the signals coming out of it. These internal currents CAN be affected by what's happening on the USB bus, which in turn can be affected by the cable and by timing inside the computer driving the bus.
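A toy simulation can illustrate the clocking difference. The jitter magnitudes here are assumptions chosen only for illustration (a PLL synthesizer in the hundreds of picoseconds, a good crystal in the low picoseconds):

```python
import random

# Toy Monte-Carlo comparison of the two clocking schemes. The RMS
# jitter figures are assumptions chosen for illustration only.

random.seed(0)
F_S = 44_100            # audio sample rate
T = 1 / F_S             # nominal sample period

def edge_times(n, rms_jitter):
    """Ideal clock edges plus Gaussian phase noise of the given RMS."""
    return [i * T + random.gauss(0, rms_jitter) for i in range(n)]

def rms_error(edges):
    """RMS deviation of the edges from the ideal timing grid."""
    n = len(edges)
    return (sum((t - i * T) ** 2 for i, t in enumerate(edges)) / n) ** 0.5

adaptive = edge_times(10_000, rms_jitter=500e-12)   # PLL-synthesized clock
fixed    = edge_times(10_000, rms_jitter=5e-12)     # crystal oscillator

print(f"adaptive (PLL) clock: {rms_error(adaptive) * 1e12:6.1f} ps RMS")
print(f"async (crystal):      {rms_error(fixed) * 1e12:6.1f} ps RMS")
```

This only models the clock itself; the point of the surrounding posts is that the coupling paths through ground and power exist in both modes on top of this.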
What happens next depends on how the DAC is put together: some designs feed the I2S from the receiver chip directly to the DAC chip(s); others reclock the I2S, or convert it to a different format and perhaps reclock those signals.
Let's look at the case of going directly to the DAC chips. Here the local clock is fed directly to the DAC chip, and the assumption is that because the clock comes straight from the low-jitter oscillator, the audio coming out cannot possibly be affected by any jitter from the input. This is NOT the case. The (non-clock) input signals cause gates inside the DAC chip to turn on and off, and every time this happens currents flow on the internal ground and power traces in direct response to the jitter on those input signals. These currents can add jitter to the other internal signals, including the clock and the control signals generated from it. The result is that jitter on non-clock input signals can and does add jitter to the internals of the DAC chip, affecting the analog output.
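One simple way to model this coupling is to treat the data-line jitter as an uncorrelated noise source that reaches the clock node through a small coupling factor, so the contributions combine root-sum-square. The coupling factor and jitter figures below are assumptions for illustration:

```python
# Toy coupling model: jitter on the (non-clock) data and control
# inputs reaches the DAC's internal clock node through shared ground
# and power, scaled by a coupling factor k. Treating the sources as
# uncorrelated, they combine root-sum-square. All values are assumed.

def effective_clock_jitter(clock_rms, input_rms, k):
    """RMS jitter at the internal clock node (seconds)."""
    return (clock_rms ** 2 + (k * input_rms) ** 2) ** 0.5

clock_rms = 3e-12    # intrinsic oscillator jitter, 3 ps (assumed)
input_rms = 300e-12  # jitter on the I2S data/enable lines (assumed)
k = 0.02             # 2% ground-coupling factor (assumed)

total = effective_clock_jitter(clock_rms, input_rms, k)
print(f"effective clock jitter: {total * 1e12:.1f} ps")
```

Even with only 2% coupling assumed, 300 ps on the data lines more than doubles the effective clock jitter seen by the converter.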
Exactly how much of this happens varies from DAC chip to DAC chip. It is theoretically possible to design a DAC chip so these effects are minimized, but to my knowledge this has not been a priority for any DAC chip manufacturer.
Now let's look at the other option, where the I2S lines are reclocked before entering the DAC chip. If you greatly decrease the jitter on these lines, then the above shouldn't matter. Unfortunately, nobody knows how to implement a reclocker that is completely immune to jitter on its input. What happens in the DAC chip also happens in the flip-flop chip: currents flow in the chip due to the jittery input signal, causing jitter on the output. So why do it? Because it does help attenuate the jitter, even though it doesn't completely eliminate it. Some jitter still gets through.
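This attenuate-but-never-eliminate behavior can be sketched as a cascade where each reclocking stage scales down the incoming jitter but adds its own floor. The attenuation and floor figures are assumed, not measured:

```python
# Toy cascade model of reclocking: each flip-flop stage attenuates
# the incoming jitter but adds its own floor (from the same on-chip
# coupling mechanisms), combined root-sum-square. Figures are assumed.

def reclock(input_rms, attenuation=0.1, stage_floor=2e-12):
    """RMS jitter after one reclocking stage (seconds)."""
    return ((attenuation * input_rms) ** 2 + stage_floor ** 2) ** 0.5

j = 500e-12          # jitter into the first stage (assumed)
for stage in range(1, 4):
    j = reclock(j)
    print(f"after stage {stage}: {j * 1e12:6.2f} ps")
```

The cascade converges toward the per-stage floor rather than toward zero, which is exactly the "helps but doesn't eliminate" point.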
And on top of all this, the currents flowing through the chips due to jitter at various places also show up on the traces and planes of the PC board. The same thing happens: jitter is added to the output signal due to jitter on signals in other parts of the board. Again, this can be minimized by very careful board layout, but you can't completely get rid of it. Unfortunately very few people have real expertise in this, and there is not a lot of information floating around on how to do it. It's only fairly recently that the engineering community has even started to deal with this, so the craft of working with these issues is still in its infancy.
So if all this is still there, why bother with async mode at all? Because it DOES significantly lower the overall jitter of the system. Remember that in adaptive mode the clock comes from a frequency synthesizer of some sort, and these will ALWAYS have significantly higher jitter than a good fixed-frequency clock, so going with async DOES eliminate the inherent jitter of the frequency synthesizer. It may not eliminate all sensitivities to cables and such, but that doesn't mean it's useless. It still serves the very important task of significantly lowering the overall jitter of the system.
There is NOTHING in the designer's arsenal of tricks that will eliminate jitter completely; it's the designer's task to pick and choose a series of techniques which each attenuate the jitter by varying amounts. Every DAC on the market will have a different set of these techniques, which will affect the final sound in different ways. Different designers will have different levels of expertise in these approaches, which will determine how well they are implemented.
I guess I'm trying to say that there are no cut-and-dried formulas for reaching perfection at this time. It's still very much an art form. As we go forward the body of knowledge will increase and the overall level of these devices is going to improve, but at least for the foreseeable future the process of designing and choosing digital audio hardware is going to resemble that of guitars and violins more than that of computers. There is going to be a wide range of products to choose from, and you really have to experience them rather than just looking at a few specs in a catalog.
PS: This post focused on aspects of jitter because that was what the thread was about, but that does not mean jitter is the only thing of importance in DAC design; it's just one of the many parts that have to work together to make the final product. It just happens to be the new kid on the block, so there is a lot to learn about managing it properly.
John S.
Do I understand correctly:
In principle there is no difference between adaptive and asynchronous mode other than in the first case the sender does the timing, and in the second case it is the receiver doing the timing.
If the quality of the clock is the same in both cases, the result will be the same.
The advantage of asynchronous is that one can improve on the sound quality by using a better clock.
So asynchronous mode is not better by design but by implementation because you can implement a top quality (low jitter) clock in the DAC.
The Well Tempered Computer
Essentially yes. Both cases have two clocks in the DAC: the "USB clock", which runs at the USB bus frequency and gets data off the bus, and the "audio clock", which pulls audio data out of the buffer. In adaptive mode the audio clock is adjustable so it can be set to the average of the audio data rate. In async mode the audio clock is fixed and the computer adjusts the rate at which it sends the data so the buffer does not over/underflow.

As far as jitter is concerned, the difference is in how well you can do with a fixed versus an adjustable clock. Most adaptive implementations are not particularly good. It is possible to implement an adjustable clock that is better than what is in most adaptive USB chips, and this has been undertaken by a couple of companies. But you can always get lower jitter with a good fixed-frequency clock.
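A minimal sketch of the adaptive-mode servo described above, assuming a host that runs 100 ppm fast and a hypothetical proportional controller with a damping term (the gains, loop rate, and FIFO target are all made-up illustrative values):

```python
# Toy sketch of the adaptive-mode servo: the host delivers samples at
# its own (slightly off) rate, and a controller trims the local audio
# clock so the FIFO stays near its target fill. The host error, gains,
# and target fill are illustrative assumptions, not a real design.

HOST_RATE = 44_100 * 1.0001   # host clock runs 100 ppm fast (assumed)
TARGET = 176.0                # target FIFO fill, ~4 ms of audio (assumed)
KP, KD = 0.5, 100.0           # proportional and damping gains (assumed)
H = 0.001                     # control-loop period, 1 ms (assumed)

fill = prev_fill = TARGET
local_rate = 44_100.0
for _ in range(5000):         # simulate 5 seconds
    prev_fill, fill = fill, fill + (HOST_RATE - local_rate) * H
    local_rate += KP * (fill - TARGET) + KD * (fill - prev_fill)

print(f"local clock settles at {local_rate:.2f} Hz "
      f"(host rate {HOST_RATE:.2f} Hz)")
```

In adaptive mode the steered clock is a PLL/frequency synthesizer, which is where the extra jitter comes from; in async mode the same buffer-fill feedback effectively runs on the host side, and the local clock stays fixed.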
There is actually a good example showing that it's the implementation of the clock that matters, not the async-ness itself. The recent inexpensive Musiland devices use an asynchronous protocol but then use a frequency synthesizer to generate the local clock rather than a fixed-frequency oscillator. The result is jitter that is actually worse than some of the better adaptive implementations!
John S.
Edits: 12/02/09
Nice example
Thanks for the clarification
The Well Tempered Computer
But isn't it true (all else being equal) that logically the best async implementation will perform better than the best adaptive implementation, given that the async doesn't have to deal with the large amount of jitter from the USB frame that the adaptive one does?
Additionally, in terms of absolute clock frequency, adaptive can only be as good as the PC, whilst in async it can be as good as the DAC manufacturer wishes it to be?
your friendly neighbourhood idiot
Yes, the best async implementation will be better than the best adaptive.
The adaptive doesn't have to be as bad as you mention, the local clock does not have to be slaved directly to the frame. It can have a local adjustable clock and a controller that just looks at how full the buffer is and adjusts the local clock so the buffer stays partially filled. If the buffer is many frames long there should not be a direct relationship with jitter on the frame.
BUT as was the topic of my original post in this thread, such jitter CAN wind its way into affecting the jitter on the local clock in both adaptive and async modes.
And yes, the absolute accuracy of the sample clock is determined by the computer in adaptive mode since the local clock has to match the average data rate otherwise the buffer over/underflows.
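As a quick worked number for this point, assuming an ordinary 100 ppm host-crystal tolerance:

```python
# Worked number: in adaptive mode the local clock must track the
# host's average data rate, so the host crystal's tolerance sets the
# absolute sample-rate accuracy. The 100 ppm figure is an assumed,
# ordinary crystal tolerance, not a measured value.

NOMINAL = 44_100
HOST_ERROR_PPM = 100

effective = NOMINAL * (1 + HOST_ERROR_PPM / 1e6)
print(f"effective sample rate: {effective:.2f} Hz")   # 44104.41 Hz
```

A 0.01% shift is inaudible as a pitch change; the point is only that in adaptive mode the host, not the DAC designer, determines it.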
John S.
Hi John,
thanks for your reply, it pretty much confirms my opinions - you can mess up no matter how clever the original scheme is if you don't pay attention to all the details!
your friendly neighbourhood idiot
In the ESS SABRE chip white paper, the authors talk about how the operation of the on-chip SPDIF decoder creates noise that potentially results in jitter effects on the output, even though the output clock is completely independent of the input (e.g. when the ASRC is enabled), confirming your comments about on-chip leakage.
It does seem, however, that multiple levels of reclocking of all signals, if done on separate circuit boards run by separate power supplies ought to get rid of any jitter effects on the input, i.e. reduce the coupled jitter spectrum to below the residual jitter of the master clock.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
The problem is ground-noise coupling between boards (or "independent" sections on the same board). It is possible to set up the grounding schemes to minimize this, albeit not easily. The easiest way around it is to use non-electrical connections between sections, such as optical, but these generate their own jitter.
I tried this a couple years ago with three boards running completely separate power supplies with grounding schemes designed to minimize ground coupling between boards BUT things like the length of the USB cable STILL affected the sound! It was true that the resulting sound was extremely good, but maybe because of that the USB input differences seemed even greater.
BTW, this was a very expensive DAC to make; it was close to $3K just in parts, not counting the case with multiple separate shielded compartments, etc. Something is still leaking through even when we do our best to stomp it out. I've come to the conclusion that we still don't really know the root cause. The mechanisms I talked about are the ones we know about, but there has got to be something else going on.
John S.
In general, short cables sound better than long cables. That's not an absolute; it's possible for a very good 4ft cable to sound better than a cheap 3ft cable, but that cheap 3ft cable is probably going to sound better than any 15ft cable.
I once tried a 20ft cable (which is longer than the spec allows) and it sounded terrible.
"Short is best" does not hold true when you go to optical cables. In my setup, a 30ft optical cable, with the device driven by a good linear supply and a short cable from the optical end to the DAC, sounded better than any regular cable.
I even had good results with active extension cables with a short cable from the end of the active cable. Others have had very different results with this setup though.
John S.
I'm going to try 0 cable length on the Musiland 01US by soldering a Male A USB connector to a Male B USB connector - this should allow minimal length. If it proves better than a cable I'll put a male A connector on the pcb for an even shorter & more robust connection - just like the M2Tech HiFace!
Edits: 12/02/09
I have gone through a similar series of upgrades on my reclocker. My initial expectation was that the input cable would have no effect. Well, it did. Then I discovered that I had to isolate the power, the ground, and the input timing from the output circuits.
Shame you didn't check this out before putting the PaceCar on the market and letting customers (like me) find out that it didn't stand up to the claims being made of it.
I have said this many times here. One often needs to find out the hard way, and then not trust reviews or vendor claims.
Yep, no argument there.
Thank you as well, Steve.
Thanks for your insightful, intuitively sharp posts, John. Much appreciated.