Audio Asylum Thread Printer
Get a view of an entire thread on one page
In Reply to: RE: Baloney posted by Tony Lauck on January 09, 2011 at 11:38:49
Thanks for the good wishes.
Here's, perhaps, a more complete explanation of my position.
I didn't say that my DAC was transport independent: I said that my DAC wasn't sensitive to the transports I had used it on up to that point in time.
I always knew that it would be sensitive to transports, power cords, RFI, etc. because it exists in the real world. My goal was to see how much I could ameliorate these effects with the obvious (to me) implementations.
I didn't build my DAC for anyone else's system or goals. It was for my own edification and my own use. I found that it works better than I expected in some instances and a little worse in others. More importantly, I learned that some of my shortcuts/engineering decisions failed miserably (hence the three separate incarnations of my boards).
More incidentally, I now have physical proof to back up some of my statements which were scoffed at in the past. Some examples:
.) Some said that (in addition to the analog filter) you need digital filtering and/or other processing to convert a raw DSD bit stream to analog, in spite of my posting the spectra of various digital simulations of simple analog filtering of a DSD bit stream. It was obvious to me that they were missing something. A simple analog filter works quite well.
.) Some said (and still say) that a proper FIFO is sufficient to get rid of jitter. Once again that's obviously false to me but obviously true to the bits-is-bits people. I defy anyone to find a fault in my FIFO implementations or to build a FIFO that gets rid of all audible effects of jitter.
.) Some claimed that FPGA based filtering of audio was too crippled to work acceptably. It seems obvious that, conversely, an FPGA is much better than a general CPU or DSP chip IF programming time isn't counted and the engineers involved know what they are doing. I've programmed DSP chips, general purpose CPUs and now FPGAs to do correct DSP processing.
.) Many people say that a flat frequency response doesn't sound the best or, conversely, that room correction or speaker correction is a great idea. I've repeatedly had the experience that using room correction systems or speaker correction systems to flatten the frequency response of a system takes the life out of the music, but I always believed that a system which is inherently flat and doesn't need such corrections should have good PRaT. Unfortunately one of my mistakes subverted a direct and compelling test of this, but still my DAC with a flat response is very involving and has plenty of life.
.) Some claim that any properly implemented power supply would be immune to changes of power cords. To me this should be obviously false to anyone who understands power supplies for audio components well. I did hope that I would render power cord changes on my DAC inaudible, but since designers I respect have failed at that same goal I wasn't too surprised that I did too.
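On the DSD point above, here's a toy end-to-end check — entirely my own sketch, with assumed rates and an assumed filter corner, not the original poster's simulations — that a raw 1-bit stream plus nothing more than a simple analog-style low-pass recovers the tone. A first-order modulator is used for brevity; real DSD uses higher-order noise shaping and does far better:

```python
import numpy as np

# Toy model (assumptions: DSD64 rate, 1 kHz tone, three cascaded one-pole
# sections at 30 kHz standing in for a "simple analog filter").
FS = 2_822_400          # DSD64 sample rate, Hz (assumed)
F_SIG = 1_000.0         # test tone, Hz
N = 1 << 16

t = np.arange(N) / FS
x = 0.5 * np.sin(2 * np.pi * F_SIG * t)

# First-order delta-sigma modulator: integrate the error, quantize to +/-1.
bits = np.empty(N)
integ, y = 0.0, 0.0
for i in range(N):
    integ += x[i] - y
    y = 1.0 if integ >= 0.0 else -1.0
    bits[i] = y

# "Analog" reconstruction: three cascaded one-pole low-passes, ~30 kHz corner.
FC = 30_000.0
alpha = (1 / FS) / (1 / (2 * np.pi * FC) + 1 / FS)
sig = bits
for _ in range(3):
    acc = 0.0
    filtered = np.empty(N)
    for i in range(N):
        acc += alpha * (sig[i] - acc)
        filtered[i] = acc
    sig = filtered
out = sig

# How much of the tone survives: project the settled half of the output
# onto a sine/cosine pair at F_SIG and treat the residual as noise.
s = out[N // 2:] - out[N // 2:].mean()
ts = t[N // 2:]
basis = np.stack([np.sin(2 * np.pi * F_SIG * ts),
                  np.cos(2 * np.pi * F_SIG * ts)], axis=1)
coef, *_ = np.linalg.lstsq(basis, s, rcond=None)
resid = s - basis @ coef
snr_db = 10 * np.log10(np.sum((basis @ coef) ** 2) / np.sum(resid ** 2))
```

Even this crude first-order stream comes out of the filter as a clean, clearly dominant sine, which is the gist of the claim.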
I read your "It's the fault of the DAC designers. They haven't done their job." post as one more espousing a simplistic view of real world engineering, and I stand by my intended response that all real engineering involves compromises.
P.S. Oh, also, I have plenty of familiarity with DACs which send the clock back to the transport. I have a Meitner stack and I've worked for multiple companies who also provided for the use of a single clock domain in their products. I was surprised that my DAC has fewer audible effects from jitter than I hear in any of those systems or other "jitter proof" systems.
It's your project and it's for you to say what its goals are and whether or not they have been (adequately) met. All in all, I think you've done a great job and provided an inspiration to many as to what can be done in a DIY project. Thanks for taking the time and effort to post about your project and putting up with the various flak that has been returned. :-)
There is one point where I personally would have had slightly different goals had it been my project, so if you don't mind I'd like to clarify it.
".) Some said (and still say) that a proper FIFO is sufficient to get rid of jitter. Once again that's obviously false to me but obviously true to the bits-is-bits people. I defy anyone to find a fault in my FIFO implementations or to build a FIFO that gets rid of all audible effects of jitter."
A proper FIFO is necessary to get rid of jitter. It won't be sufficient, as there can be other jitter coupling modes, e.g. power used by input decoding stages can couple through power and ground into output clock circuitry. In addition, a FIFO may work perfectly at moving bits, but fail to achieve jitter isolation. Such a FIFO will be suitable for some applications, e.g. a buffer in a computer interface, but not suitable to achieve jitter isolation in a DAC.

Each component in the DAC FIFO needs to be modeled in the analog domain, and the jitter attenuation of the FIFO as a whole needs to be modeled and verified empirically. Until this has been done it's not possible to conclude that a given FIFO is "proper" for the DAC application. I suspect there are many FIFO architectures that work satisfactorily for pumping bits but fail to achieve additional jitter attenuation when cascaded. This will depend on the circuit design, layout and, especially, the clock architecture.

There aren't a lot of components (e.g. transistors) in some FIFO designs, so it would seem possible to design in such a way that each stage of the FIFO provides a constant (dB) attenuation of jitter. It looks to me like you've got most of the tools at hand to investigate this aspect of the design, should you decide to do so at some point in the future. However, if your FIFO is in the FPGA there may be no way to achieve sufficient isolation, due to the design of the cells and/or the available wiring.
It won't be possible to get perfect isolation, nor is it necessary. The effect of jitter is to introduce noise modulation onto the output analog signal, and if you can get this noise modulation well below the output noise of the DAC itself that will be sufficient. Modeling and measuring the jitter related output noise caused by the available degree of isolation won't be easy, but it can be done and must be done if one wants to solve this problem.
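To put a rough number on "well below the output noise of the DAC itself": for a full-scale sine at frequency f with rms timing jitter sigma_t, the jitter-induced SNR is -20*log10(2*pi*f*sigma_t). The sketch below (my own back-of-envelope figures, not anything from the thread) checks that formula by Monte Carlo and shows that ~100 ps rms is roughly where jitter noise meets a 16-bit (~98 dB) noise floor at 20 kHz:

```python
import numpy as np

rng = np.random.default_rng(0)
F = 20_000.0        # worst-case audio frequency, Hz
SIGMA_T = 100e-12   # 100 ps rms jitter (assumed)
FS = 192_000
N = 1 << 16

# Sample a sine at the nominal instants and at Gaussian-jittered instants;
# the difference is the noise modulation the jitter injects.
t = np.arange(N) / FS
clean = np.sin(2 * np.pi * F * t)
jittered = np.sin(2 * np.pi * F * (t + rng.normal(0.0, SIGMA_T, N)))
err = jittered - clean

snr_meas = 10 * np.log10(np.mean(clean ** 2) / np.mean(err ** 2))
snr_pred = -20 * np.log10(2 * np.pi * F * SIGMA_T)
```

Both come out near 98 dB, so halving the tolerable noise floor (one more bit) halves the allowable jitter.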
It is also possible that the audible effects of changing the transport have nothing to do with jitter, or even with the DAC. There could be other modes of coupling (e.g. RFI/EMI) to the downstream analog components. There are probably experiments that can be devised to evaluate these coupling modes.
The only other part of your discussion that I could possibly disagree with concerns power cords. But I'm not really interested in power cords, since all the evidence seems to indicate that the effect of power cords depends on all the components in the system and the general electrical environment. If I were a fanatic about power cords, I would just get rid of them completely e.g. run my components on internal batteries. :-)
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
A few minor points: a FIFO can't provide "constant jitter attenuation". It is at best a low pass jitter filter: you can talk about the slope of the attenuation and its corner frequency, etc. In my experience, as the corner frequency of the jitter filter goes down the bass gets firmer (in addition to other effects).
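To put numbers on the "low pass jitter filter, not constant attenuation" point: for a first-order clock-recovery loop, jitter below the corner passes essentially untouched while jitter above it is attenuated at ~20 dB/decade. The 1 Hz corner here is an arbitrary assumption for illustration, not anyone's measured design:

```python
import numpy as np

FC = 1.0  # assumed jitter corner of the clock-recovery loop, Hz
freqs = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # jitter frequency, Hz

# First-order loop: jitter transfer |H(f)| = 1/sqrt(1+(f/FC)^2), so the
# attenuation of transmitted jitter in dB is:
atten_db = 10 * np.log10(1 + (freqs / FC) ** 2)

for f, a in zip(freqs, atten_db):
    print(f"{f:8.2f} Hz jitter -> {a:6.2f} dB attenuation")
```

Lowering the corner sweeps more of the jitter band into the attenuated region, which matches the observation that a lower corner firms up the bass.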
I've also wanted to try the opposite experiment: instead of trying to get rid of all jitter (which paradoxically can cause its spectrum to become more colored), perhaps whitening the jitter spectrum might provide a more practical way of achieving a cleaner sound. (Obviously I don't believe it will, given the direction I've taken, but still it's a different possible approach.)
As I've mentioned in other posts I assume that jitter is sometimes the only reasonable explanation for the effects of changing, say, a power cord. In the specific case I've talked about before the transport and the DAC were connected by glass fiber so along that path the only possible effect is jitter (or bit errors). The transport was 10' away from the DAC and the rest of the system so I don't expect that RFI was significant. The DAC, the transport and the other system components were on separate dedicated circuits so the AC coupling was fairly minimal, tho possibly audible. But now with further experience with jitter's audible effects I recognize that the firming of the bass when the transport had a more substantive power cord is one common effect of lower jitter. I don't claim this would convince someone else, but it's personal experiences like this that help to clarify one's journey in understanding (or at least rationalizing) "audiophile tweaks", in this case power cord effects and jitter effects.
It's a little off topic, but I often wish I could have more "bits-is-bits" (or objectivist) people over so they could hear for themselves the effects of some simple experiments: we could then do other experiments to help them clarify possible mechanisms in their own minds. I've found over and over that when even some of the die hard double blind proponents hear a significant difference sighted, it often opens their eyes to possible explanations for the effects they hear in ways that no amount of discussion will.
I'm not sure we are communicating regarding what a FIFO can and can't do as regards removing "bits ain't bits". Or perhaps I'm off base and have been consistently missing something.
I'm not really concerned about the rate adaptation function of a DAC FIFO, as this can be dealt with by the clock architecture (e.g. slaving the transport to the DAC master clock). It looks like you've solved the problem for most interesting cases, as you can make the corner bandwidth of your digital phase lock filter effectively zero once you've found a frequency setting that drains the buffer sufficiently slowly that no changes in rate are needed through the course of an entire track. The key to this, as it was in obtaining stable jitter with the FDDI reclocking system, is to use buffer load state as well as rate differences as input to the feedback loop. My understanding is that you are doing this.
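The buffer-load-plus-rate idea can be sketched as a tiny discrete-time control loop — my own toy, with made-up gains and rates, not the poster's implementation: a PI controller trims the consumer clock from the FIFO fill level, so both the instantaneous rate error and the accumulated level error steer the loop.

```python
PRODUCER_RATE = 1.000137   # unknown source rate, samples per tick (assumed)
TARGET_FILL = 1000.0       # FIFO set point, samples
KP, KI = 1e-3, 1e-7        # loop gains chosen for heavy overdamping (assumed)

fill = TARGET_FILL
consumer_rate = 1.0
integ = 0.0
for _ in range(200_000):
    fill += PRODUCER_RATE - consumer_rate   # FIFO level moves by the rate gap
    err = fill - TARGET_FILL
    integ += err                            # accumulated level error
    # Proportional term reacts to level, integral term learns the rate offset.
    consumer_rate = 1.0 + KP * err + KI * integ
```

The consumer locks to the producer's rate and the buffer settles at its set point, so the effective loop bandwidth seen by the DAC clock can be made as small as the buffer depth allows.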
The other problem arises if you just take a jittery signal and reclock it with a clocked flip flop. (Here I'm talking about what is going on within a single clock domain.) The output transitions are supposed to follow the local clock, not the transition times of the input signal, so long as the setup and hold times have been met. Of course they do not do so exactly, but the question is whether the net effect of the flip flop is to attenuate the variations. If so, then it should be possible to string a bunch of flip flops in series with an appropriate clock scheme and achieve any desired degree of attenuation. I suspect the problem is that the output level of a gate depends slightly on all the input signals (e.g. the output of a NOR gate will be at a slightly lower level if all the input signals are true compared to just one) and the propagation delay through a gate depends on the level of the input signals. Perhaps your SPICE simulations are such as to demonstrate this phenomenon (or lack thereof).
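The coupling suspicion can be made concrete with a toy model — the numbers are pure assumptions, not measurements: suppose each reclocking flip flop passes only a fraction ALPHA of the incoming edge-timing error, but a fraction BETA of the original input jitter leaks around every stage through shared power/ground and clock wiring. Cascading then stops helping once the leakage floor is reached.

```python
ALPHA = 0.1    # per-stage jitter feed-through (assumed)
BETA = 1e-4    # supply/clock coupling leak per stage (assumed)

jitter = 1.0                 # normalized input jitter
ideal, leaky = [], []
j_ideal, j_leaky = jitter, jitter
for _ in range(8):
    j_ideal *= ALPHA                          # textbook geometric attenuation
    j_leaky = ALPHA * j_leaky + BETA * jitter  # same, plus the leakage path
    ideal.append(j_ideal)
    leaky.append(j_leaky)

floor = BETA * jitter / (1 - ALPHA)   # asymptote of the leaky cascade
```

The ideal cascade keeps improving with every stage; the leaky one flattens out at BETA/(1-ALPHA), orders of magnitude above it, which is the "no free lunch" shape of the problem.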
The issue is: how do you clock those flip flops without having the signals going into the first flip flop affecting its clock or power supply ... and hence the clock or power supply of the final flip flop? If you get into it further it's the same problem as using a PLL to attenuate jitter: you are at best filtering the jitter, and that filter isn't as simple as uniform attenuation. TANSTAAFL. Tho I don't have the proof at hand, I believe that once again the best you can do is some form of low pass filtering of the jitter. In the limit, tho, low passed jitter becomes wander, and wander slow enough to not affect the bass firmness isn't a problem.
I'm sure that I've posted my implementation of the idea you are talking about elsewhere but here's a synopsis:
I use an FPGA to do the bulk FIFO storage. The FPGA outputs a clock and the data associated with it, and these signals go into a high quality flip flop with a separate clean power supply to align the signals. Then the signals go from that flip flop to another with its own PS which is clocked directly by the master clock (with its own PS and thru only a few mm of traces). And when I say "own PS" I'm not just talking about a local ferrite bead and some caps.
The initial mistake I made was to clock the two external flip flops exactly out of phase with each other, which is almost the worst timing possible: the only worse timing would be to clock them in phase. The beauty of FPGAs is that I only had to change one parameter to advance the FPGA and its flip flop 90 degrees and make a significant difference in the audible jitter. Taking this lesson to the limit you can see how there isn't room for an arbitrary number of flip flop stages: you eventually are switching some of the flip flops too close to the switching of other flip flops and increasing the possibility of jitter leakage thru the power supplies or clocks.
I'm not claiming that this proves my case, but just like other "jitter solutions" things aren't as simple as a first level analysis might lead one to believe.
"all real engineering involves compromises."
Why limit the scope to engineering? I reckon it's an attribute of life...
The beleaguered Design Engineer has done his job if the thang he designed meets specs, is under budget and on time. Usually he's doing damn well to hit one of them. My take is that home audio is far more plagued by systemic definition difficulties than implementation problems, hence the 'system synergy' effects. No one knows how to specify what 'properly designed' is. On AA it's usually synonymous with perfect.
On the bright side it all makes for a great hobby. If everything sounded the same it wouldn't be interesting!
It would be nice if some of the "standard" audio interfaces were replaced with sane/well-engineered interfaces. But downward (or backward) compatibility also has many benefits - it's another of those compromises.
I originally didn't have any PCM interfaces on my DAC, but one day I woke up and realized that I had plenty of extra pins on the FPGA, so I "grafted" on DSD ST glass, TOSLink, S/PDIF and AES/EBU inputs. Then dealing with these signals was a software problem and I can deal with those :)
Even if I were designing a DAC from scratch and intending to market it I'd probably still offer essentially the same interfaces: for a while it'll still be more useful to most people to have TOSLink, S/PDIF, AES/EBU, USB Audio 1.0 and other "legacy" interfaces than a random custom "perfect" interface. If I can get my jitter susceptibility near what a "perfect" interface could provide, so much the better.
I also suspect that even if I were to decide to allow myself to use some custom software on the PC and a custom "perfect" hardware interface I'd still have essentially the same results by using some software to carefully downsample any 24/192 to 24/96 (or 176.4 to 88.2), use a standard hardware interface and apply my creative energy elsewhere. Beyond a certain point, if time or money matter, there are more important things than absolute sample rates. (I know that marketing concerns would tend in a different direction.)
One thing I do know for certain is that every time I listen to music it's so relaxing and involving that I don't get much work done :)
"...every time I listen to music it's so relaxing and involving that I don't get much work done :)"
Seems like a character risk alright, there's just nothing like success to sap a man's motivation! The cool thing is that you, far more than most of us, have really 'earned' it.
I was actually thinking more of the analog and power interfaces, EMI issues, that sort of thing. The spectre of folks spending thousands of dollars on power cords fascinates me. It's sort of like having the engine fail in your car so you solve the problem by renting a team of elephants from the circus to pull it around. Yes, you do get there, but you know, the solution just isn't very elegant, albeit expensive and impressive.
I agree, even tho I have aftermarket power cords on almost all of my equipment and silver interconnects and speaker wire :)
For interconnects the two obvious (to me) technical things are differential connections and connections which are (accurately) terminated at the receiving end. Both make technically sound differences and are available on some equipment.
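A quick reminder of why accurate receive-end termination matters: an impedance mismatch reflects part of every edge back down the line. The impedance values below are illustrative, not taken from any particular gear:

```python
def reflection_coefficient(z_load: float, z0: float) -> float:
    """Voltage reflection coefficient at the far end of a line of
    characteristic impedance z0 terminated in z_load."""
    return (z_load - z0) / (z_load + z0)

matched = reflection_coefficient(75.0, 75.0)        # accurate 75 ohm termination
bridging = reflection_coefficient(10_000.0, 75.0)   # high-impedance "bridging" input
```

The matched case reflects nothing, while the high-impedance input sends nearly the whole edge back toward the driver, where it can re-reflect and smear into later transitions.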
But truthfully I get more of a difference with swapping power cords and/or power conditioning. But I'm not about to hack my amps to get rid of their sensitivity to power cords :) Someday I might build my own amps or power conditioners, but I doubt it. I do find a sweet spot with medium cost cords and conversely have experienced some real stinkers: both cheap and expensive.
The most surprising thing like this that I ever heard was a quality DAC and cheaper transport connected with ST glass hooked up with the DAC being the clock master and separate clock and data connections coming back from the transport (A Meitner DAC but not a Meitner transport). Even with the transport and DAC on separate dedicated AC feeds and the galvanic isolation of the interconnects, a change of power cords on the transport caused a very noticeable difference in the perceived loudness of the bass :) What the...?
In my current DAC I have no components which were selected by ear: all decisions were based purely on my preconceived notions of the desired specs and the parts meeting them :) I'm not averse to selecting components by ear (assuming they meet the technical specs), I just don't have the time or inclination: I'd rather spend my time making bigger differences with technically sound modifications.
I do get a lot of satisfaction from having done something real, and even tho it didn't change my view on most audio matters, I feel I can speak with a little more experience/authority on the subject.
"I'm not about to hack my amps to get rid of their sensitivity to power cords"
Me neither! Leastwise without a pretty severe problem which fortunately I either don't have or am not aware of. Yet I'm not bothered by just changing interconnect cables and don't feel obligated to redesign the interface. No accounting for audiophiles...
"For interconnects the two obvious (to me) technical things are differential connections and connections which are (accurately) terminated at the receiving end."
Yeah, prolly more important for digital, although even for analog I ended up mostly using 300 ohm open lines with terminations and build-out resistors. The driver load ends up ~600 ohms, which is a bit low but seems to work OK with most gear, and the terminations eliminated the mild 'tiz' that I ended up with after switching to open lines but, well, the 'openness' remained. My premise was that the most likely problems with interconnects were dielectric absorption and stray currents on the ground lead, and for speaker cables skin effect and current loop area. Since the stuff I did to try and address those concerns perked up the sound I'm happy. However that's not conclusive and there is clearly more going on, some of it related to the wire itself. Even though I'm just using magnet wire (had a roll on hand) I don't scoff at folks using silver or specially pulled, treated or forged wires. However I suspect that the 'problem', whatever it is, could be ameliorated in other far less expensive ways if well understood.
Guess I have a small but finite cynical side that believes that there is more interest in selling (and buying) essentially linear jewelry for audio systems than in identifying problems and implementing efficient solutions. Heck, nowadays they actually have jewels strung on those necklaces! If there's anything to that it might be a better way to clean up reflections than terminations. One thing about audiophiling, there is never a dearth of things to explore!