Audio Asylum Thread Printer: view an entire thread on one page
In Reply to: RE: Reminder... posted by Scrith on March 22, 2012 at 17:56:37
Hey Scrith,
If you believe you are hearing a sound improvement from your DAC due to programs such as Fidelizer, or cics memory player, or a special version of some audio driver, or a certain brand of operating system, or a certain USB cable, etc., it means that your DAC is highly sensitive to the timing of incoming data, which is indicative of a poor hardware design.
How can you possibly stand listening to that poor dac of yours? I mean even the designer says cables and os tweaks, etc. affect the sound of your dac so this really looks like a glass houses argument on your part.
Afterwards we discovered faith; it's all you need
Follow Ups:
I never said my DAC (or any DAC, for that matter) is perfect, but we'll probably see one eventually. It seems like this shouldn't be a difficult task; after all, other hardware manufacturers have managed to create products that deal with jitter in incoming data despite being MUCH more sensitive to timing than any audio device. Hard drives, for example, have to reliably record data in extremely small, specific locations on platters spinning at up to 15,000 RPM, even though the data is sent to them at speeds much higher than will ever be required for audio playback.
It's not clear that what people are hearing is related to jitter. Indeed, it's not clear what these people are hearing.
Some of these people are using very high end DACs that are supposedly designed to be highly immune to incoming jitter through the design of their clock circuitry.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
It's not just jitter but also impedance matching to reduce reflections, soundness of socketry, and waveform integrity through the transmitter and receiver circuits.
Any timing variations that don't move the data through the 0.5 level around the time the data is buffered into a flip-flop won't affect the results further downstream. There may be issues with amplitude variations, i.e. a 0 isn't a fully black zero and a 1 isn't a fully white one. However, logic gates and flip-flops include level restoration circuits which have the effect of severely attenuating variations in a signal provided it is relatively close to a 0 or a 1. This will definitely be the case after the received signal has been buffered once by a local clock. Each stage of digital restoration will necessarily attenuate any remaining amplitude variations. This is the basic reason why digital computers work. (See figure 5 of the linked page, which includes a reversed "S" curve showing how degraded digital signals are improved by a stage of digital restoration as included, say, in a gate.)
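The restoring effect of each stage can be sketched numerically. Here is a toy Python model, where a steep sigmoid (its gain is an illustrative assumption, not a real gate's transfer curve) stands in for one stage of level restoration, showing how levels that arrive well off the rails are pushed back toward them:

```python
import math

def restore(v, gain=20.0):
    """One stage of level restoration: a steep sigmoid transfer
    function centred on the 0.5 switching threshold. The gain
    value is an illustrative assumption, not a measured figure."""
    return 1.0 / (1.0 + math.exp(-gain * (v - 0.5)))

# A 'one' that arrives 15% low and a 'zero' that arrives 15% high.
one, zero = 0.85, 0.15
for stage in range(4):
    one, zero = restore(one), restore(zero)

# After a few stages the levels sit hard against the rails;
# the residual amplitude error shrinks at every stage.
print(one, zero)
```

Run it and the initial 15% error is reduced to well under 0.1% after four stages, which is the "reversed S curve" effect in miniature.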
Given that each stage necessarily attenuates these variations, the questions that remain are: How many stages are necessary? Is there some way that an unwanted signal can bypass these stages and escape some of the attenuation? To answer the first question one must decide how much attenuation of unwanted signals is required (e.g. 20 dB more than the signal to noise ratio available in the analog circuitry) and how much attenuation each stage can provide. (This will require detailed characterization or accurate simulation of the actual circuits used.)

To answer the second question one will need to look carefully into the complete layout and circuitry of the product. For example, power supply wiring and power supply circuitry provide a coupling path for unwanted signals, even those from devices that have nothing to do with the actual audio processing. If unwanted signals are leaking through power wiring and power supplies this is not really a matter of the signal path, and similar things can happen with circuits that aren't even carrying audio related signals. Saving money by putting most of the digital logic into an FPGA is probably not going to hack it if one is trying to get truly excellent results. In the very good ESS white paper on the SABRE chip the authors describe how on-chip ground bounce related to the timing of incoming SPDIF signals managed to pollute the analog output waveform of the chip, and what they had to do to minimize these effects.
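The first question reduces to simple arithmetic once the per-stage attenuation has been characterized. A sketch, where the 160 dB target and the 25 dB per-stage figure are purely assumed numbers for illustration:

```python
import math

def stages_needed(target_db, per_stage_db):
    """Stages of restoration required to attenuate an unwanted
    signal by target_db, given a (hypothetical) measured
    per-stage attenuation figure."""
    return math.ceil(target_db / per_stage_db)

# Example: analog circuitry good for -140 dB, plus the 20 dB
# margin suggested above, with an assumed 25 dB per stage.
print(stages_needed(160, 25))  # -> 7
```

The hard part, of course, is the second question: measuring the real per-stage figure and finding the sneak paths that bypass the chain entirely.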
As expensive and difficult as this may be, there is some hope of success if a DAC designer sets out to do this. A software hacker trying to achieve similar effects by tweaking a general purpose computer system to minimize noise at its source has zero chance of success and will be on a constant upgrade path to nowhere.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
"Any timing variations that don't move the data through the 0.5 level around the time the data is buffered into a flip-flop won't affect the results further down stream. There may be issues with amplitude variations, i.e. a 0 isn't a fully black zero and a 1 isn't a fully white one. However, logic gates and flip flops include level restoration circuits..."
The problems that Fred mentioned [impedance matching to reduce reflections, soundness of socketry and waveform integrity through transmitter and receiver circuits] are in the amplitude domain at the interface level, and when the signal is sliced (into local 1's and 0's) at the receiver they get mapped into the time domain. Downstream they are now inseparable from the data; that is to say, the symbol will never be pure again, but if you can manage to decode it accurately then potentially you can create a new, improved symbol if you want to.
Where it hits the fan using the scheme you are describing, looking at the data in regions hopefully unaffected by the edge uncertainty, is that it requires a clock synchronized with the data but you don't have one available so all you can do is fake it. And as everyone here knows the fake clock will have some data related jitter and some sensitivity to local noise, both of which you can mostly filter out. Mostly.
Abandoning the one-way, real-time serial streams in favor of newer technology opens up many doors to buffer management, but it would be nice to have a solution for the extant SPDIF stuff that doesn't have too high of a usability toll. I haven't heard any recent DACs so I have no idea how they sound compared to my "vintage" one; maybe we've already arrived, but the reports don't seem encouraging...
Rick
Don't agree with your comments about clock domains. There is no hope for SPDIF or other schemes that don't force the transport to synchronize to the receiver's clock. If one is forced to work with unsynchronizable input one will have to go the reclocker approach and require suitably tight clock tolerances on the transport, suitably large FIFO buffers, suitably large stop/play user interface latencies and restricted play list lengths. But SPDIF is a bad system design. Why make a hard problem even more difficult? One has to throw away broken standards if one is ever going to make progress.

Therefore, I am assuming that the input has already been synchronized into a single local DAC clock domain. This can be accomplished by various means, e.g. a traditional reclocker with a big buffer and high latency, slaving the transport so that it is frequency locked to a word clock or master clock emitted by the DAC, or an async USB interface with some kind of circular buffer to accommodate the variable filling by packets and the steady emptying by samples or bits.

I assume that all the circuitry in the input clock domain is powered separately from all the other equipment in the DAC and provides a simple interface; e.g. for a monaural DSD DAC there would be two signals, a clock signal and a data signal, with suitable setup and hold times to ensure that the data signal never changes while the clock signal is asserted. The clock signal goes from the DAC proper to the input domain while the data signal goes from the input domain to the DAC proper. At this point all synchronization has been completed and all of the bits that are passed on are assumed to be correct; otherwise there has been a data error on the input receiver, something that need not happen with a half-decent implementation. The clock signal will have variations due to noise, but this noise will be locally generated.
The data signal will have noise, in the form of timing variations and amplitude variations, to be dealt with as below.
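The circular buffer mentioned for the async USB case can be sketched as follows; all names and sizes here are invented for illustration, and a real driver is far more involved (flow control back to the host keeps the buffer from running dry in practice):

```python
from collections import deque

class SampleFIFO:
    """Toy circular buffer: filled irregularly by USB-style packets,
    drained steadily one sample per local DAC clock tick. Names and
    sizes are illustrative assumptions, not from any real driver."""
    def __init__(self, capacity=4096):
        self.buf = deque(maxlen=capacity)

    def fill_packet(self, samples):
        # bursty producer side (host sends packets when it likes)
        self.buf.extend(samples)

    def next_sample(self):
        # steady consumer side (local clock); underrun yields silence
        return self.buf.popleft() if self.buf else 0

fifo = SampleFIFO()
fifo.fill_packet([1, 2, 3])
out = [fifo.next_sample() for _ in range(5)]
print(out)  # -> [1, 2, 3, 0, 0]
```

The point of the structure is exactly what the paragraph says: the variable filling and the steady emptying live in different clock domains, and the buffer is the only thing that couples them.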
Next there would be some number of stages of a shift register, individually clocked from the local master clock, with some kind of multi-phase clock scheme such that there are defined periods between loading each stage where the input at the previous stage has had time to be thoroughly restored, i.e. amplitude variations removed from a previous transition, or leakage from gates. (Note that the output voltage of an OR gate may be slightly different depending on whether one or two input signals are asserted, even though logically the output should be equally true.) This also means that the transition time of a flip-flop that is changing state as a result of a clock pulse will vary slightly according to how high the incoming signal level is that is being strobed. One has to look at the circuit design and layout carefully to make sure that each stage of the shift register reduces these timing variations as well as providing the obvious reduction of amplitude variations. If one clocks slowly enough one can use multiple stages of inverters between flip-flops to clean up any leakage from the gates of previous stages. One has to get down to the transistor level and transmission line (layout) level and understand all the parasitics involved, etc. if one wants to do this right.
In your terminology, each stage of this synchronously running shift register copies a slightly dirty symbol and produces a new symbol that is slightly cleaner with respect to any disturbances in the original. (There will be a residual dirtiness at each stage caused by timing variations in the clock and clock distribution and power and ground noise.) After some number of stages of this buffering these local effects will completely dominate any source effects so that what one hears will be transport independent. Whether it's any good or not is a different issue of course and that will depend on the circuits that take these bits and turn them into an analog waveform. In the case of DSD this could be nothing but a two stage buffer followed by a passive low pass filter. Such a DSD DAC could be very light weight, perhaps having no more than a few dozen gates that run in the local clock domain.
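A toy numerical model of such a synchronously clocked shift register makes the point concrete. A sigmoid stands in for each stage's level restoration; the gain and the noise amplitude are assumptions chosen for illustration, and real stage-to-stage leakage and clock noise are not modeled:

```python
import math, random

def restore(v, gain=20.0):
    # steep sigmoid standing in for a stage's level-restoring gates
    return 1.0 / (1.0 + math.exp(-gain * (v - 0.5)))

random.seed(1)
bits = [random.randint(0, 1) for _ in range(32)]
# input levels arrive with +/-0.2 of amplitude noise riding on them
noisy = [b + random.uniform(-0.2, 0.2) for b in bits]

STAGES = 4
regs = [0.0] * STAGES          # the shift register: one analog level per stage
out = []
for v in noisy:
    # one master-clock tick: every stage latches a restored copy of
    # its predecessor's output; stage 0 latches the (noisy) input
    regs = [restore(v)] + [restore(r) for r in regs[:-1]]
    out.append(regs[-1])

# After the pipeline delay, the output bits match the input exactly
# and the analog levels sit very close to the rails.
decoded = [round(v) for v in out[STAGES - 1:]]
print(decoded == bits[:len(decoded)])  # -> True
```

In the model the source-side amplitude noise is squeezed out stage by stage while the data survives intact, which is the "slightly dirty symbol in, slightly cleaner symbol out" behavior described above.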
I think the number of transistors required is small enough that one could do a suitable SPICE simulation to validate and quantify these concepts. If one built this device it would also be possible to create test signals to measure how much variations on the input are attenuated by the shift register. (The locally generated variations can be taken out by statistical averaging.) BTW, I believe that each stage of this register is going to have to be implemented in a separate chip. It may be possible to work with off the shelf logic chips. There aren't many gates required in this design so this would be practical. Putting a bunch of gates in an FPGA is unlikely to provide isolation measured in amounts such as -160 dB.
This design does not require any computation, so there is no need for any processors, software, etc. that can be tweaked. It consists of a bunch of transistors that are configured in a mixed signal configuration but which are analyzed as analog components. All of the complex software goes elsewhere, many feet away, possibly isolated by multiple boxes. Some of this software may be fairly complex, e.g. a PCM to DSD128 modulator, but at least at this point bits are just bits if the DAC works as intended.
In this design, one would want to put the DAC on the "Tranquility Base". :-)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Edits: 03/26/12
"SPDIF is a bad system design"
Yes, it is indeed. I'm not sure what the SOTA currently is in reclockers but that functionality really belongs inside the DAC housing so it can use the D/A clock.
But since you are "assuming that the input has already been synchronized into a single local DAC clock domain", the SPDIF issues are moot.
"The data signal will have noise, in the form of timing variations and amplitude variations, to be dealt with as below."
Here is where I get boggled: you can run it through all the inverters and shift registers in the world at this point but you won't improve the S/N. Once you've got a steady local clock that occurs well out from the settling time of the signal transitions, you're done. You just dump it into your D/A converter chip.
The reason that additional stages don't provide further isolation is that every gate decodes every bit and regenerates it using the rails as the ideal. The main source of amplitude variation in the signal at the clock point of the next device is noise on the power and ground, and that's largely constant for all gates powered by the same planes.
I suppose the current issue is why Async DACs aren't immune to cable variations. Well, I bet they are more immune than the average bear because they eliminate the need for a variable clock in the DAC which can be a problem, especially if implemented with a VCO. Depends on the implementation of course but "fixed" is the limit of "variable" so they are hard to beat, all else being equal. But outside of improving the local clock issues all the other sneaky paths and problems remain and without delving into specific implementations further speculation seems futile.
Please don't think I am trivializing these issues but we are discussing stuff at the level where implementation is inseparable from performance. Without simplifying assumptions life as we know it would grind to a standstill so they are crucial, but we dare never regard them as givens. You are assuming the 'source contamination' is somehow propagated by the signal and probably some of it is, but I assume more of it is via the power planes and probably some of it is. Some of it is bound to come from EM coupling and that has many paths. Ground loops between gear are really likely to be a term on paths without galvanic isolation. It never ends, all we can ever do is try to have sufficient specifications so that implementations that meet them will have inaudible levels of artifacts. Well, that's the engineering take, marketing knows it ends when the CTM is low enough for good margins and the performance adequate to keep most of the rubes from returning it after they have been spurred into buying by advertising.
Regards, Rick
I suggest you look at some sample circuit diagrams for a typical gate. One such involves two pull-down transistors with their outputs in parallel, connected to a source via a pull-up resistor. In this case, the output level when both transistors are "on" is going to be lower than when only one transistor is "on". What this means is that if one input is held "on", the voltage level on the output will still be a function of the fluctuating voltage on the other input. However, the output will definitely be fluctuating less than the input; e.g. the input could be fluctuating across the full range of logic levels, while the output necessarily fluctuates about the region corresponding to a legal logic level (e.g. 25% of the full range).

Now imagine that the fluctuating signal is being gated into the input of a latch by a clock. The output of this gate will have a small amount of fluctuation, and this output will be input to one of the gates in the latch. So the output of the latch will also be fluctuating slightly with the input signal, even if there is no clock pulse to enable the gate sufficiently to "change" the state of the latch. This is just one reason why a single stage of a flip-flop does not produce an output that is uncorrelated with the input signal. In the absence of clock pulses, this is the simple analysis; it becomes much more complicated when analyzing situations where the input voltage is fluctuating (but not changing logic levels) at the same time that the register is being strobed.

Example: stage N of a shift register is being loaded. At this clock phase, stage N-1 is not being loaded, so it produces constant output (viewed as bits) or a certain amount of fluctuation (viewed as an analog signal). In a 4-phase clock scheme for the shift register, stage N-2 is being loaded, hence there will be a complete change of logic level at roughly the same time stage N is being loaded.
This means that at roughly the same time stage N is being loaded the voltage on one of its gates may be fluctuating due to slight leakage through the unclocked stage N-1. It is reasonable to assume that this will affect the rate at which stage N latches into the new state, but here the circuit design issues will be less obvious. It would be my guess that a SPICE simulation would show these effects if the circuitry can be adequately modeled at the transistor level. I also suspect that it would be practical to set up a test bed and measure these effects as well, although it would be difficult to eliminate coupling caused by "gate leakage" from other forms of leakage, e.g. "ground bounce". This is something for a circuit designer with mixed signal expertise to carefully consider. It is not something that most people, myself included, have the skills or tools to investigate.
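The parallel pull-down structure described above can be modeled, to first order, as a simple resistor divider. All component values here are invented for illustration; the point is only that with one input held on, the state of the second transistor still shifts the output level slightly, but by a tiny fraction of the full logic swing:

```python
VCC = 3.3        # assumed supply voltage
R_PULL = 10e3    # assumed pull-up resistor
R_ON = 100.0     # assumed on-resistance of each pull-down transistor

def node_voltage(r_down):
    # resistor divider: the pull-up fighting whatever pulls the node down
    return VCC * r_down / (r_down + R_PULL)

one_on = node_voltage(R_ON)        # one transistor conducting
both_on = node_voltage(R_ON / 2)   # two in parallel conduct harder
# The output level still depends on the second input, but the
# variation is tiny compared with the 3.3 V logic swing.
print(round(one_on, 4), round(both_on, 4))  # -> 0.0327 0.0164
```

About 16 mV of output change for a full-swing change on the "held" input: heavily attenuated, but measurably nonzero, which is the residual correlation being argued about.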
It is possible to conduct experiments to distinguish whether propagation comes through the signal path vs. through power and ground. Suppose one has two signal sources and two isolation/outputs. These can be connected together (via jumpers) in one of two ways, direct vs. crossed. One provides two identical input signals, except that one is deliberately distorted by the addition of noise, including noise that is carefully synchronized to the clocks. One then compares the various possible combinations of inputs, cross wiring and outputs and measures the output noise. The layouts have to be symmetrical for this experiment to be valid. It is convenient to work with a single ended DSD DAC, i.e. treat the output of the final flip-flop directly as a DSD audio signal. Any jitter or AM modulation will appear at this point, and using synchronous averaging it will be possible to correlate any coupled noise down to a very low level by measuring over a long time period (very narrow frequency band).
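The synchronous averaging step can be sketched in a few lines: correlate the output against a reference phase-locked to the injected disturbance, and uncorrelated local noise averages away roughly as 1/sqrt(N). All amplitudes and lengths here are invented for illustration:

```python
import math, random

random.seed(0)
PERIOD, REPS = 64, 20000
N = PERIOD * REPS

# reference tone, phase-locked to the injected disturbance
ref = [math.sin(2 * math.pi * k / PERIOD) for k in range(PERIOD)]

COUPLED = 1e-5   # assumed residue of the injected noise at the output
LOCAL = 1e-3     # much larger uncorrelated local noise

# coherent (synchronous) detection: correlate the output with the
# reference; the uncorrelated noise integrates toward zero
acc = 0.0
for n in range(N):
    sample = COUPLED * ref[n % PERIOD] + random.uniform(-LOCAL, LOCAL)
    acc += sample * ref[n % PERIOD]

estimate = 2.0 * acc / N   # recovered amplitude of the coupled component
print(estimate)
```

Even though the coupled residue is 100 times smaller than the local noise, the correlator recovers its amplitude to within a few tens of percent here; longer measurement (a narrower effective bandwidth) tightens it further, which is exactly the "long time period, very narrow frequency band" trade the paragraph describes.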
Audiophile hobbyists can conduct similar experiments where the sources are separate boxes. So, for example, if an untweaked PC is coupling noise via a digital cable it will affect the output only when it is actually connected to a DAC, but if the coupling were through power wiring it would still pollute the sound merely by being powered up and playing, even if the DAC were connected to and playing music from a separate transport. The key to these experiments is some reliable means of detecting when sonic pollution is occurring. Believers in subjective listening can do these experiments by ear if they have sufficient time, patience and self-discipline.
Note that even if complete isolation of all forms of input noise is provided by a DAC, it is likely that some audiophiles would still complain that there was an effect. This likelihood provides strong demotivation for me to investigate this in detail or to get involved in the business designing and producing high end audio gear, for that matter. :-(
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
"I suggest you look at some sample circuit diagrams for a typical gate. One such involves two pull down transistors with their output in parallel connected to a source via a pull up resistor."
In general OC gates (oh I know, open drain nowadays) are only used if you want to do a dot-OR or a level conversion, they aren't typical. However this structure may be more common internally since the parasitics are low and controlled.
"So the output of the latch will also be fluctuating slightly with the input signal, even if there is no clock pulse to enable the gate sufficiently to "change" the state of the latch."
Nope, that's not the way it works. The input is "decoded" at the trailing clock edge, and if the levels are within spec the latch captures the state properly. After that the input can do whatever it feels like and the output doesn't care. The output will fluctuate due to rail variation and a tiny amount from common impedances, but will be way within limits. The symbol has been decoded and reconstructed and is now independent of the input.
"This means that at roughly the same time stage N is being loaded the voltage on one of its gates may be fluctuating due to slight leakage through the unclocked stage N-1. It is reasonable to assume that this will affect the rate at which stage N latches into the new state, but here the circuit design issues will be less obvious."
Actually they are pretty obvious but they don't matter. The whole beauty of using a clocked data scheme is to make sure that you don't try to decode the symbol until the inputs have settled.
"This is something for a circuit designer with mixed signal expertise to carefully consider."
In process.
"It is possible to conduct experiments to distinguish whether propagation comes through the signal path vs. through power and ground."
Yes, you can conduct the experiments but the answer is already known: both. "Logic levels" only have local validity and must be decoded with respect to the local plane. At the slicing point, whether just level driven or also gated, the signal and reference are compared and a decision made. If something is screwed up and the differential level is between the valid bands then the system falls into the category of broken. Bear in mind that this is all local stuff and we aren't trying to decode long-hauled signals with all the distortions and degradations that they are prone to.
"Any jitter or AM modulation will appear at this point and using synchronous averaging it will be possible to correlate any coupled noise to a very low level by measuring over a long time period (very narrow frequency band)."
Now you're talking! Synchronous demodulation is just magic, but I don't think you need the second board since I believe all that matters is the voltage differential. If you sync to your injected noise you can measure how much it's attenuated with a storage scope, or you can just use a spectrum analyzer since, being the god of the experiment, you will of course have chosen noise with a unique signature.
"The key to these experiments is some reliable means of detecting when sonic pollution is occurring."
AMEN! If you can't measure the output or force the input it takes a ton of patience to sort stuff out if it's possible at all.
"Note that even if complete isolation of all forms of input noise is provided by a DAC, it is likely that some audiophiles would still complain that there was an effect."
True, but there are other factors that can confuse the issue and it's impossible to ever achieve complete isolation of anything. If an ant farts in Texas it affects the orbit of Mars just a little... The best you can hope for is to get the stuff that you are trying to control to have a small enough effect that it's masked by something else. Thank God for Brownian motion!
"This likelihood [unhappy audiophile] provides strong demotivation for me to investigate this in detail or to get involved in the business designing and producing high end audio gear, for that matter. :-("
Oh... So that's why DEC is no longer around, they finally had an unhappy customer? Most manufacturers have fairly broad shoulders and I would suppose those in high-end home audio must be especially well equipped, since their fancy gear can be brought to its knees by so many, many factors out of their control...
Good listening! Hey, speaking of which, I'm quite interested in your (new?) speakers; how long have you had them and how do they do without the sub? Prolly should be another thread I suppose.
Regards, Rick
DEC is no longer around because the founder's vision of minicomputers became inoperative when technological advances followed Moore's law to make personal computers economical. A completely new business model was not possible without major changes to the organization. The founder was pushed out by money men, who provided a short-term financial focus rather than an emphasis on quality products and customer service. The breakup and disappearance of the company also appears to have been accelerated by the system of financial accounting between international subsidiaries; e.g. foreign revenues were immediately repatriated to the main corporation while foreign manufacturing costs were delayed for several years. AFAIK this was (quasi) legal, but it provided a huge accelerator to the collapse, as would happen with an (illegal) Ponzi scheme. The company did not disappear completely; pieces got sold off to other companies, e.g. semiconductor technology and CPU technology went to Intel, disk technology migrated to (I believe) Seagate, and systems software migrated to Compaq, which was then absorbed by HP. I receive a pension from HP.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
ooo, this looks interesting Tony but it will be a couple of days afore I can give it proper attention. Despite being "retired" somehow a consulting project has attached itself to me. It's for a friend and I couldn't say no. And, it should have been simple since it's an update and I suppose if the new S/W had been willing to read the old files and if my memory was better and if chatting about audio on AA wasn't far more fun...
Anyway gotta get it out of my remaining hair!
73, Rick