Audio Asylum Thread Printer
Get a view of an entire thread on one page
In Reply to: Re: Your player already has fairly low jitter measurements posted by Charles Hansen on March 17, 2007 at 13:29:50:
I hope you don't mind me asking another question.
Is the Pioneer MPEG decoder also clocked from a derivative of your master clock (based on your earlier statement that everything is referenced to the audio clock)? So when you change the master clock from say 44.1x to 48x won't it potentially affect the decoder?
< < Is the Pioneer MPEG decoder also clocked from a derivative of your master clock? > >
In this particular case, there are about 5 different frequencies fed to the MPEG decoder. Some are based on 27 MHz (video stuff), some on multiples of 44.1 kHz (for CD stuff), some on multiples of 48 kHz (for DVD stuff), and some that change depending on whether you are playing a CD or a DVD.
Normally these clocks are all generated by a 3-PLL chip that is driven by a 27 MHz crystal oscillator. We remove the crystal and inject our own 27 MHz signal that is derived from our audio master clock by a second custom-programmed PLL that we add. The "N" and "M" divide ratios in the second PLL are controlled by the same flag that tells whether to use the 44.1 kHz based audio clock or the 48 kHz based audio clock. So when we change audio clocks, the second PLL just keeps outputting the 27 MHz to the PLL for the Pioneer MPEG decoder. Everything always stays synched up, but is governed by the audio master clock.
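The exact multiply/divide ratios such a second PLL would need can be worked out with exact arithmetic. The sketch below is only an illustration: the 512 × Fs master-clock frequencies are my assumption (a common choice), since the post doesn't state the actual master-clock frequencies or the real "N"/"M" values used in the player.

```python
from fractions import Fraction

# Assumed master clocks: 512 x Fs (a common choice; the post does not
# give the actual frequencies or the real N/M pairs used in the player).
MASTER_44K1 = 512 * 44_100   # 22,579,200 Hz (44.1 kHz family)
MASTER_48K  = 512 * 48_000   # 24,576,000 Hz (48 kHz family)
VIDEO_CLOCK = 27_000_000     # 27 MHz reference injected into the MPEG decoder's PLL chip

def pll_ratio(f_in: int, f_out: int) -> Fraction:
    """Exact N/M ratio a PLL must apply to turn f_in into f_out."""
    return Fraction(f_out, f_in)

# Switching the audio master clock just means loading a different N/M pair;
# the 27 MHz output stays locked to whichever audio clock is active.
print(pll_ratio(MASTER_44K1, VIDEO_CLOCK))  # 1875/1568
print(pll_ratio(MASTER_48K,  VIDEO_CLOCK))  # 1125/1024
```

Note that both ratios are exact rationals, which is what makes it possible to keep everything synchronous to the audio master clock with no accumulating phase error.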
I'm amazed that switching clocks does not cause a glitch on the 27 MHz line. Good on you - I bet that took some effort to get right!
I seem to recall you are also using an upsampling filter. Is that true? If not, then presumably you need to buffer the audio signal from the MPEG decoder to filter out the jitter?
If you are (using upsampling), did you consider using ASRC on everything to a single common rate (e.g. 200 kHz)?
There's been a thread recently suggesting that the Lavry DA10 does this (even though the user manual suggests otherwise). Everything is resampled to 115kHz - I was wondering why Dan chose such a low frequency (since that means 192kHz gets downsampled), then I realised he's trying to force the DAC into dual-rate mode.
< < presumably you need to buffer the audio signal from the MPEG decoder to filter out the jitter? > >
Yes, but since everything is slaved synchronously, only a one-bit buffer (aka a flip-flop, aka a reclocker) is required.
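A toy illustration of why a single flip-flop suffices when everything is synchronous: as long as the incoming data edges jitter by less than the margin around the clean clock's sampling point, resampling on the clean clock reproduces the data exactly. The unit interval and jitter magnitude below are made up purely for illustration.

```python
import random

random.seed(0)
UI = 1.0       # unit interval (one bit period), arbitrary units
JITTER = 0.2   # peak edge jitter, well inside the half-UI margin

bits = [random.randint(0, 1) for _ in range(1000)]

# Jittered arrival time of the start of each bit cell.
edges = [i * UI + random.uniform(-JITTER, JITTER) for i in range(len(bits))]

def sample(t):
    """Value of the jittered data line at time t."""
    idx = max(i for i, e in enumerate(edges) if e <= t)
    return bits[idx]

# The flip-flop samples on the clean clock, in the middle of each cell.
reclocked = [sample(i * UI + 0.5 * UI) for i in range(len(bits))]

assert reclocked == bits  # jitter gone: output edges follow the clean clock
```

The key point is that the output transitions now occur only on the clean clock's edges, so the jitter on the incoming data never propagates - which is exactly why a synchronous design needs nothing more than a one-bit buffer.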
Also remember that each DAC chip's architecture results in different sensitivities to jitter on different pins. In the Burr-Brown DSD1792A that we use in the C-5xe, the master (system) clock is the critical one.
< < did you consider using ASRC? > >
*** No, the very idea of changing the data like that gives me the creeps. It just seems like an inherently bad idea. YMMV. ***
Well, the data gets changed in the digital filter anyway. I don't really like it myself, but only because typical ASRC implementations generate a fair amount of artefacts and have imperfect passbands. If there were a "perfect" ASRC implementation, then I would consider it. Practically, though, I don't think we'll see one soon (and it would require so much computational power that it would actually *generate* jitter, defeating its own purpose).
PS - the reason I was asking whether you were using an upsampling filter was because it *could* account for your slightly high Miller results, so the unit is actually measuring artefacts generated by the upsampler rather than the underlying jitter.
PPS - Did you see the Stereophile review of the Transporter? The jitter numbers aren't that great - I think 293ps peak to peak in 16-bit mode. So much for Slim Devices claiming the peak to peak would be single digit ps. I have a feeling most of it is probably caused by logic induced modulation.
< < Well, the data gets changed in the digital filter anyway. > >
Kind of. Many (but not all) digital filters for audio use algorithms that leave the original data points untouched and just interpolate new samples between the original data points. I don't want to get bogged down in semantics, but to me this is clearly different than an ASRC where none of the original data points survive the process.
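The distinction can be made concrete with a minimal sketch. Here simple midpoint (linear) interpolation stands in for a real polyphase or half-band oversampling filter - the only point being demonstrated is that an integer-ratio interpolator can leave every original data point in the output untouched, which an ASRC in general cannot.

```python
def upsample_2x(x):
    """2x interpolation that leaves the original samples untouched.

    Linear interpolation stands in here for a real polyphase/half-band
    filter; the point is only that even-index outputs ARE the inputs.
    """
    out = []
    for i in range(len(x) - 1):
        out.append(x[i])                   # original sample, unchanged
        out.append((x[i] + x[i + 1]) / 2)  # new interpolated sample
    out.append(x[-1])
    return out

x = [0.0, 1.0, 0.0, -1.0]
y = upsample_2x(x)
assert y[::2] == x  # every original data point survives verbatim

# An ASRC with a non-integer ratio (e.g. 44.1k -> 48k) computes every
# output sample at a new time instant, so in general NO original data
# point appears verbatim in the output.
```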
< < 293ps peak to peak in 16-bit mode. So much for Slim Devices claiming the peak to peak would be single digit ps. I have a feeling most of it is probably caused by logic induced modulation. > >
And like I've said in previous posts, I disagree. I think the bulk of what is measured with the Miller analyzer is simply the noise floor of the spectrum analyzer used.
Do you have access to a spectrum analyzer? It's very simple to make the test disc. The original paper by Julian Dunn is linked below. Look at section 2.6 to see what the waveform is:
C00000 C00000 400000 400000 (24 times) BFFFFF BFFFFF 3FFFFF 3FFFFF (24 times)
So there is a square wave at Fs/4 that for part of the time is mostly "zeroes" and for part of the time is mostly "ones". Being mostly "zeroes" or mostly "ones" makes a big difference in the jitter added to a bi-phase mark encoded signal such as S/PDIF, but it shouldn't really do much to a one-box player that avoids bi-phase mark encoding.
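The mechanism is easy to see in a sketch: build the 192-sample block from section 2.6 and run a few of its words through a textbook biphase-mark coder (not Dunn's exact S/PDIF framing, just the bit-cell coding) and count transitions. The "mostly zeroes" and "mostly ones" halves produce very different transition densities on the encoded line, which is the data-dependent effect the test signal is designed to provoke.

```python
# The 192-sample block from Dunn's section 2.6: an Fs/4 square wave whose
# low bits flip between "mostly zeroes" and "mostly ones" halves.
block = ([0xC00000, 0xC00000, 0x400000, 0x400000] * 24 +
         [0xBFFFFF, 0xBFFFFF, 0x3FFFFF, 0x3FFFFF] * 24)
assert len(block) == 192

def biphase_mark(bits):
    """Textbook biphase-mark coding: every bit cell starts with a
    transition; a '1' adds a second transition mid-cell."""
    level, out = 0, []
    for b in bits:
        level ^= 1          # transition at every cell boundary
        out.append(level)
        if b:
            level ^= 1      # extra mid-cell transition for a '1'
        out.append(level)
    return out

def transitions(word):
    bits = [(word >> i) & 1 for i in range(24)]
    enc = biphase_mark(bits)
    return sum(a != b for a, b in zip(enc, enc[1:]))

# Mostly-zeroes vs mostly-ones words: very different transition density
# on the encoded line, hence data-correlated jitter on an S/PDIF link.
print(transitions(0x400000), transitions(0x3FFFFF))  # prints: 24 45
```

A one-box player that never biphase-mark encodes the audio path sidesteps this entirely, which is the point being made above.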
And that's exactly what I see when I use the (latest, greatest) Audio Precision 2722 as the spectrum analyzer. The noise floor of this machine is below -140 dB. With a 16-bit test signal, the only spuria I see are the expected ones at the frequency that the LSB modulation is applied. But with a 24-bit test signal even this is below the noise floor of the analyzer. I will be glad to e-mail some sample spectra to you (in PDF format) if you will be willing to host them somewhere so that other Asylum members can link to them and also look at them.