Audio Asylum Thread Printer
I've been playing around with different systems lately. I have an iMac and a Core i3 GA-H55M-UD2H system for Win7 and Linux. I installed Ubuntu 10.10 as a VM recently for work, and I liked what I was seeing, so I rounded up a 2.5" drive and put Ubuntu 10.10 on it to see what it would sound like with my current rig and its latest tweaks (audiophile USB cable and riser to use the USB socket on the mobo and its dedicated IRQ). I'm using a new player (to me) - Exaile. It sounds good. There sure isn't a lack of music players in Linuxland these days - puts OSX to shame.
I'm bummed that the real time kernel isn't supported on Ubuntu 10.10 anymore. I had stability issues before when I tried it with Mint, so I might just forgo that for now...or try Mint again.
Compared to a reasonably stripped-down and optimized Win7 system, the Linux box has better PRaT.
The hardware is using all the usual power tweaks - separate and isolated power to P4 and mobo, separate power to the hard drive.
I took a listen to the iMac with my big rig, and it did seem to have better PRaT than the Win7 PC, but the PC was quite a bit cleaner, since I could underclock and undervolt it a lot compared to the Mac. I got the Mac for other purposes - I know the Mac Mini is all the rage here now.
I'll check out MPD soon and finally get to play with the MPD app for the iPhone I downloaded a while back...
My pet theory is that there are two types of jitter, which I call micro and macro. Micro affects inner detail and macro affects PRaT. And they may have very different causes - micro jitter mostly related to EMI / power issues, and macro jitter mostly related to the OS and player software used.
Follow Ups:
Astro,
sit sit.. :)
Look again, all who are reading... computers do not cause audio-related jitter. They can't!!!!!! Now, they can have an effect on adaptive and Firewire interfaces, because the clock is derived from the stream, and if the computer lags a bit it will change the clock speed, and this will cause jitter.
BUT!!!!!!!!! Remember, this still happens at the device and not in the computer, not in the file, not even in the interface - only when the parallel data is converted into serial data for the DAC chip or filter.
Please remember this!
We focus too much on Jitter here being the end all of problems. There is much more to consider, please do so.
Thanks
Gordon
J. Gordon Rankin
"Now they can have an effect of adaptive and Firewire interfaces because the clock is derived from the stream"
Gordon, perhaps you meant to say "adaptive and S/PDIF interfaces" rather than Firewire?
Most, but not all, of the Firewire interfaces that I'm aware of (being used for audio purposes) use async transmission.
cheers,
clay
Clay,
Other than Metric Halo, all the Firewire protocols work the same way adaptive does. That means any company using TC Electronics Firewire (from the Oxford set).
Only Metric Halo has an async Firewire setup; no one else does.
Thanks
Gordon
J. Gordon Rankin
I read a post from Weiss saying that his Firewire is async.
Steve N.
Steve,
No, they use the TC Electronics parts and then use an asynchronous upsampler. They got prodded quite a bit in the press for that one.
So no, Weiss does not do async Firewire; only Metric Halo does, that I know of.
Thanks
Gordon
J. Gordon Rankin
Gordon,
Thanks very much for the clarification.
That seems a shame.
My Firewire experience is only with the Metric Halo ULN-2 and now the LIO-8.
Clay
Yes, my V-DAC uses adaptive USB. I hear clear differences in transparency and reduced grain when I undervolt the system, which you can't do on Macs as far as I know.
Gordon, why do you say Macs sound better if asynchronous USB is "the" solution to computer induced jitter?
Gang,
Look, sure jitter is a problem for any digital product. But there are tons more variables out there that people are not looking at.
These seem to have a huge effect on the sound.
OK, if application A and application B both output a bit-true signal, then why do they sound different?
Why, when I Boot Camp into Windows and run Foobar or J River, does it sound different from OSX, if both of these output bit-true - and why does OSX sound better?
These are the questions people should be asking.
Thanks
Gordon
J. Gordon Rankin
"Ok if application A and application B both output bit true signal then why do they sound different?"
Because the DAC still has to convert the analog SPDIF signal to a digital form for the DAC proper. It basically has to parse it, and if the signal is not super clean the very fine timing of it will be off. The cleaner the signal, the more precise the timing.
Every digital source will output a slightly different SPDIF signal because it's a physical electrical square waveform. Those differences must account for differences between sources, even with "jitter-immune" DACs. I don't know why that's so hard to understand around here!
I'm a programmer, so I understand "digital thinking", but we live in an analog world, and in this world, everything is slightly different.
The slightly different waveforms all have one thing in common. There is a timing region in the waveform approximately centered mid-way between two transitions. During this region the signal voltage level will be excluded from the middle values of possible amplitudes. It will be either below this range (e.g. corresponding to a 0 bit) or above this range (e.g. corresponding to a 1 bit). When this situation applies consistently, bit after bit, one can see it clearly on a scope, and this is called an "open window" or "open eye pattern". When this situation no longer obtains because the signal has been horribly degraded, the receiver will no longer be able to accurately decode the 0's and 1's, i.e. there will be bit errors, but these are not the problems being discussed by audiophiles when they talk about jitter.
If the receiver is operating using a clock that is approximately synchronized to the correct clock, i.e. within a small phase error at each transition which implies that the frequency is also correct, then the receiver will sample the incoming waveform somewhere in the middle of the stable region and will pick up a certain amount of energy. The timing of the receiver does not have to be perfect, just good enough that the entire time period that it uses to pick up the signal is within the open region. At this point a certain amount of energy has been captured, and this energy depends on the signal amplitude at the particular moments that were used for the sampling. If the timing was slightly off on the incoming waveform then the amplitude of this captured signal will be slightly off. This signal is loaded into logic gates that operate to reduce the magnitude of any errors from the "ideal" values that correspond to a good 0 or a good 1. The effect of this is to erase the remaining variations in signal amplitude, or at least to reduce them. There won't be timing variations at this point, because the timing was done locally.
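To make the sampling argument concrete, here is a minimal numerical sketch (a toy model, not any particular receiver's design): a trapezoidal bit edge with a small random timing error is sampled over an aperture inside the open region, and a hard decision then regenerates the bit. The captured amplitude varies slightly with the edge timing, but the decoded bits stay correct.

```python
import random

# Toy model of the paragraphs above; all numbers are illustrative assumptions.
RISE = 0.4     # edge transition time, as a fraction of the bit period
JITTER = 0.05  # peak timing error on each transition

def waveform(t, edge_t, prev_bit, bit):
    """Voltage at time t: ramps from the previous bit's level to this bit's."""
    lo = 1.0 if prev_bit else -1.0
    hi = 1.0 if bit else -1.0
    if t <= edge_t:
        return lo
    if t >= edge_t + RISE:
        return hi
    return lo + (hi - lo) * (t - edge_t) / RISE

def captured(edge_t, prev_bit, bit, t0=0.35, t1=0.65, n=100):
    """Energy picked up over the receiver's sampling aperture [t0, t1]."""
    return sum(waveform(t0 + (t1 - t0) * i / n, edge_t, prev_bit, bit)
               for i in range(n + 1)) / (n + 1)

random.seed(1)
prev = 0
for bit in [1, 0, 1, 1, 0, 1, 0, 0]:
    edge = random.uniform(-JITTER, JITTER)  # jittered transition time
    v = captured(edge, prev, bit)           # slightly off if edge timing is off
    decoded = 1 if v > 0 else 0             # logic regeneration erases the error
    print(f"bit={bit} captured={v:+.3f} decoded={decoded} ok={decoded == bit}")
    prev = bit
```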
Each stage of this processing will be imperfect. However, if multiple stages are used then the imperfections can be made arbitrarily small. The logic circuitry will tend to do this, but there will be stray coupling between separate stages of this circuitry due to power and ground wiring, power supplies, physical proximity of components and wiring, etc. It is not necessary to attenuate the variations completely, just enough so that they are well below the noise of the analog circuitry. Of course the care and effort to provide isolation won't be free and there will be costs of using multiple stages, multiple separate power supplies, etc. But there seems little reason to believe that this will pose much of a cost problem in high end products costing many thousands of dollars.
An SPDIF system is a poor design to start with, because the receiver recovers the timing from the incoming signal. However, it is possible to avoid this situation by running a timing signal from the DAC back to the transport and slaving the transport to the DAC. Then all of the timing can be locally controlled by the DAC. Getting this right won't be easy, because the phase of the timing will depend on delays in the transport between the incoming clock and the outgoing signal, as well as delays caused by the round trip down the cables, which will vary according to length. So in general, compromise designs will exist that won't eliminate all of the problems. In addition, if the timing of the input waveforms varies randomly due to mediocre signal quality, there will be transistors switching at unpredictable times, i.e. the times at which these transistors switch will vary according to changes in the transport, the cable, etc. If these variations are small, the bits decoded will still be correct, but electrical crosstalk in the DAC circuitry (as described above) will cause noise on the clock signals used in the DAC to convert the final signals to analog. However, this interference exists because of imperfections in the design of the DAC, i.e. it could have been reduced by better design, for example more stages of isolation, more power supply isolation and better signal integrity inside the DAC box.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
I agree; I have asked these questions but there appear to be no reasoned answers. Waveform integrity is one factor (power supplies, hardware signal handling, cable effects, etc.) but all I hear is jitter. What is your take?
"These are the questions people should be asking."
I rather think that you should be answering these questions, since you have more knowledge and test equipment than nearly all the audiophiles.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
It has occurred to me that it is ironic that:
Whereas computers are used in science and engineering to model most processes, it is not possible to model the processes in the audio chain of a PC or Mac. This is because of the multitude of providers of hardware and software, which makes it seemingly impossible to derive hard specifications of what processes and signal transforms are involved.
Well, it comes down to what one means by "process" and "signal", what tools one uses, and what information one has access to.
If one works in the mathematical domain of bits, then one can definitely model the transforms used by some audio software. It is rather trivial when it comes to software of the "bit perfect" variety, i.e. one compares a string of integers for identity. If one works on the analog waveforms output by the PC on the cable to the DAC, one can also model things in detail, if one has complete models of the components used and "unlimited" computer time to run SPICE simulations. The problem only arises when one uses proprietary commercial off the shelf hardware and software. That's the fool's errand, I should think.
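In the bit domain that comparison really is trivial. A sketch in Python (the WAV file names are hypothetical captures, not a real test setup):

```python
import wave

def pcm_frames(path):
    """Return the raw PCM frames of a WAV file as a byte string."""
    with wave.open(path, "rb") as w:
        return w.readframes(w.getnframes())

# "Bit perfect" in the mathematical domain means exactly this and no more:
# the two integer streams are identical. Everything else is analog.
a = pcm_frames("player_a_output.wav")
b = pcm_frames("player_b_output.wav")
print("bit identical:", a == b)
```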
Of course, models are only models and to the extent that they are simplified sufficiently to be practical they won't be accurate. The modeling problems for computer audio are not nearly as intractable as modeling global climate change, but then the budgets are hardly comparable either. :-)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Climate change is a different matter. The turbulence and heat and mass transfer models are rather inexact.
In the case of audio in PCs, it is the amount of information which is held back or not available that makes it difficult to do.
The amount of money generated through MS and Intel sales should make it quite possible for these companies to model the processes if they want to, or see it as commercially advantageous to.
Instead, we have a situation in IT whereby nobody seems to shoulder blame or responsibility if something doesn't work properly. The amount of business and individual resources wasted in getting programs and hardware to work properly must run into billions! Just read some of the posts here.
you're playing a bit with words and definitions, no? 'computers don't cause audio related jitter', that's because in this context they're not audio devices, they're digital devices. jitter-related distortion happens only at conversion, yeah yeah. but anything in the path from data source to the converter potentially can affect timing and that includes the computer, it all depends how the converter *system* handles the data. in the real world computers can influence how a DAC system converts D to A.
drrd,
If there is an appreciable buffer in the endpoint and the async stream is not interrupted by the computer, then there will be no difference in jitter.
I have verified with my Prism dScope III and my WaveCrest jitter analyzer that there are no differences in jitter in my products between applications and operating systems.
Now with adaptive, SPDIF and most Firewire devices, the computer does have an effect on that stream, which can affect the jitter. This is why most companies have some kind of jitter reduction circuitry (reclocker, reclocker+FIFO, upsampler, JET PLL, whatever).
Again, people, can you use some names? I hate these aliases. It would be nice to address people by a true name!
Thanks
Gordon
J. Gordon Rankin
That makes perfect sense in the case of async transmission. I haven't done any tests myself, but my listening confirms what you say, in my case with dCS USB. I did have an issue with Foobar under Windows 7 not sounding quite right to my ear, but the latest Foobar beta seems to have sorted this. I presume it relates to syncing Foobar with Windows audio. Certainly now, with any app's DS output, they sound identical to me. I also don't hear differences with USB cables; I don't see a reason why they should make a difference in a properly functioning async scheme, though some people's experience differs.
As I said though I think whether computers influence jitter and distort the final audio output is going to depend on the DAC system and how it handles the data.
Just out of interest, have you measured jitter with Windows ASIO4ALL? It has just never sounded right to me, even though I can confirm it's bit perfect. It sounds a bit like Foobar DS used to sound before the recent beta update. I think in both cases the data timing was off somehow; I can't think of any other reason. Maybe out of step somehow, I don't know - just what I hear.
I've swapped my moniker as requested :)
"I have verified this with my Prism dScope III and my WaveCrest jitter analyzer that there is no differences in jitter in my products between applications and operating systems."
So Gordon, to your mind, are there other possible sources for the sonic differences between software applications than the oft-mentioned "noise" contamination of the DAC by the computer, whether airborne, via digital cable or via the AC circuit?
I'm finding it more and more difficult to imagine that software could account fully for the magnitude of differences being reported between the players.
thoughts?
clay
Gang,
I have been working with a mastering engineers' mailing list for the last 2 months.
It all started when Chesky told the others that FLAC did not sound as good as WAV. Then he bought a Mac and started using Amarra, Pure Music and AyreWave. Again, differences in sound.
So we are going to look into some things. This is not going to be an easy task, as the above have been verified as bit-true applications and yet they sound different.
We have all experienced this in one way or another. I am a bit swamped right now, but I have some basic tests I want to conduct to see what the differences are.
Anyone know of a good RFI/EMI wand set of any kind?
Thanks
Gordon
J. Gordon Rankin
"Anyone know of a good RFI/EMI wand set of any kind?"
This is a bit dated (I am too!) but over the years I've found two things that work well. One is just using a chassis-mount BNC's center pin to probe E fields, maybe with a 1" wire on it for more sensitivity. The other is to take a slug from an adjustable coil, I think a #6 size, wind magnet wire in the threads, then twist the leads between that and a connector maybe 1" away. I can send you a picture if you want to build one.
HP makes some RF current probes, but I've never found them effective. Again, this is very dated, so there may be some wonderful products out there now.
Working at the source end, where I want to eyeball circulating currents, typically to ground, I often use a scope probe with a short ground lead hooked to the sleeve at the tip, or that little springy thing that comes with the probes. Often I can then see (hear) whatever it is on a scope, spectrum analyzer or receiver, and figure out where the currents are going by moving the probe around.
Good luck, I think you're barking up the right tree...
Rick
Is there any hope of getting rid of RFI when CPU makers are putting GPUs on the same die as the CPU?
They'll need to do it properly at chip level!
"is there of getting rid of RFI when CPU makers are putting GPUs on the same die as CPU?"
Guess it depends on what it is... If you take the limiting case where the data conversion to graphics or audio occurs on the same die as the CPU than any temporal or amplitude intermod effects will be locked in. Naturally stuff that's just riding on the signal rather than mixed in can be filtered off externally.
I have no idea what the future portends. Tablets I think, I even want one and I'm about as stodgy as they come. Obviously for that market the more highly integrated the better so you've probably nailed it, they have to do it right on the chip as there simply won't be anyplace else. At least for the targeted product. But that doesn't mean that they may not include the ability to stream stuff in and out serially for Mfg's that want to do special stuff, it doesn't add very many pins or much die area if the data node already exists so who knows?
If I may slip into pure speculation, on the bright side even though they are physically small, the ability to precisely control the fields and currents on the chip is high and the currents are small. I'd guess that the most difficult aspect is signal induced thermal gradients. If you have a ton of stuff happening at once it's hard see how you can solve them all by symmetry. As always I think we can count on 'audiophile' sound quality to NOT be one of the key parameters so it will probably continue to a pig in the poke.
Like cars, electronic devices become less tinkerable as they mature and that's a mixed bag.
Regards, Rick
@100A?
Amps huh?
The local currents in slower internal sections like audio are small since there is little capacitance or speed. They will also probably be on their own planes.
I think it's tough to predict how well something, anything, will work based on broad generalizations as implementation is (almost) everything. It's the priorities reflected in the design specifications that matter and since personal computerish devices seem to be shifting towards entertainment devices that can also be used for a little work, things aural and visual will likely get more attention.
Rick
Does all this mean that computers have less chance of being great music providers - that dedicated boxes like the Bryston, Auraliti, Weiss server etc. will outperform them sonically?
PC/Mac audio was touted one or two years ago as being 'far superior' to boxes. A large number of inmates just listened to Red Book and made various assertions. Some of them did not have high-quality boxes. Others preferred 'quirky' DACs and replay systems. Many use Toslink.
Then came the PC 'optimisation' game, claiming certain $1500 setups beat $80000 high-end boxes. At the last count there were 1500 users (?) in Europe. The Mac boys maintained that they didn't need to 'optimise' but needed 8 GB RAM, SSDs, $1000 USB cables and so on for best sonics.
Now we have the Mac setup revision suites, and inmates are paying for them and opting for constant updates/upgrades.
What is best? Who can say, other than being satisfied with a system that suits? True, high-end boxes are getting far too expensive, but a slimmed-down computer system is just trying to emulate dedicated boxes from 'the other end'.
My approach is to have both computer and high-end boxes in a competitive but mutually non-exclusive situation.
"Does all this mean that computers have less chance of being great music providers, that dedicated boxes like the Bryston,Auraliti, Weiss server etc will outperform sonically? "
It means nothing of the sort. Good is good whether computer or appliance.
This is just speculation concerning the audio performance (and tweakability) of future highly integrated system-on-a-chip based computers. Guess we'll find out when we get them...
Regards, Rick
We have to define jitter as both a timing issue and a signal quality issue. Based on my experience with AC quality affecting the sound quality, digital signal quality is very important. I haven't tried other high end DACs, but how clean the signal is has to affect the performance of most DACs, since it is an analog electrical signal. This is where digital becomes as tweakable as a vinyl rig...
I just got Ubuntu Studio 9 running last night (10 would hang on install). Since the rt kernel would hang playing video, I did a brief test comparing the real-time kernel with the stock "low latency" kernel, and the stock kernel actually sounded a bit better, though it was very close. It sounds better than my Win7 and Mac systems ever did.
I think Macs already have the timing/prat (macro jitter) issue nailed, but in my experience, the ability to undervolt is key to getting max SQ, and that's where Linux gets you great timing and the ability to undervolt and underclock. Anyone here have a Hackintosh?
Hi AstroD,
In my cMP setup I can boot into WIN-XP, WIN7, OS-X SL (a Sojugarden Hackintosh) and Linux (for easy installation I used Mint).
Last spring I created this setup to experience/hear for myself the sound quality differences between these 3 operating systems. The nice thing in this comparison is that the hardware used is exactly the same for each OS.
I compared the sound quality of WIN-XP, WIN7 and OS-X SL especially intensively, because I needed a new notebook. With sound quality in mind, I wanted to know whether I should buy a Mac or a Windows notebook.
I wrote about that comparison in this post:
http://db.audioasylum.com/mhtml/m.html?forum=pcaudio&n=81010
From the cMP project I had already heard for myself that software (drivers, programs, etc.) has an impact on sound quality. But this is not so new. Computers have been widely used in recording studios for 10-15 years now, and nobody in that profession denies that the software being used has its own impact on sound quality. For instance, think of Steinberg: already in 1999 they developed the ASIO protocol to get better sound quality out of Windows.
I find it hard to speculate on why various software sounds so different. Sometimes it's easy to understand: the number-crunching is not done right, or is done differently. But mostly it's a black box to me. Although it can be fun to discuss it.
Mark
fully optimized cMP2 PC -> ESI Juli@ -> Van den Hul Optocoupler MkII-> Lavry Black DA10 -> XLR Mogami Gold -> Klein & Hummel O300
Mark,
You didn't mention how the Linux setup compared. In terms of transparency, detail and ambiance, nothing beats underclocked/undervolted Linux with MPD in my experience - Ubuntu Studio 9 in my case. Especially when you lose the X server.
I just got my Audio Advisor catalog today. Bryston just came out with a music server that simply uses an external USB drive for the music source - no CD drive at all. It's using Linux. I wonder if they did their own kernel tweaks.
Dave
Hi Dave,
I can make use of 3 digital audio interfaces / soundcards in my cMP setup: ESI Juli@, RME 9632, Lynx AES16.
Although I can choose from 3 cards, I still wasn't able to do a direct comparison, because:
ESI Juli@ runs on Windows and Linux Mint, but not on the OS-X SL Sojugarden Hackintosh.
RME runs on Windows and the OS-X SL Sojugarden Hackintosh, but not (out of the box) on Linux Mint.
Lynx AES16 runs on Windows and the OS-X SL Sojugarden Hackintosh, but not on Linux Mint.
So it was a little difficult to compare Linux with Windows / OS-X SL, because I had to swap a card to do that; but sound-wise there is very little difference between these cards.
(Almost sure there will now be a follow-up post from fmak with a lot of bla-di-bla, but please ignore.)
I did not do any fine tuning to the Linux Mint install, as I don't know what or how.
I just used Linux Mint "as is" (as it is after installation) with the ESI Juli@ (digital part only). To my ears Linux Mint has the same transparency, detail and ambiance as Windows or OS-X, but still it sounds much (!) different.
With good software players, and both properly fine-tuned, Windows and OS-X SL sound the same to me.
But, in my setup, I find the sound coming from Linux Mint rather 'dark' and 'brown'.
Best described as: with Windows and OS-X SL one is looking through a clear glass window, and with Linux Mint one is looking through a sun-coated glass window. So: same transparency, detail and ambiance, but different. To my ears, with some sort of 'color' added. Not to my taste.
But keep in mind: I didn't compare Linux very thoroughly with Windows and OS-X SL on my setup. Just one evening.
Mark
fully optimized cMP2 PC -> ESI Juli@ -> Van den Hul Optocoupler MkII-> Lavry Black DA10 -> XLR Mogami Gold -> Klein & Hummel O300
While tweaking a computer transport, undervolting and other hacks may be a nice DIY pastime for hobbyists, this pastime exists only due to the cheapness, ignorance, incompetence or carelessness of DAC designers, who, if they set their minds to it, could build DACs where incoming digital signal quality would not matter so long as there were no actual bit errors.
If some of these designers are on this forum and are annoyed by this post, so much the better. Some of these designers are more than competent enough to solve this problem; they just haven't set their minds to it yet, so it is good to remind them periodically that these problems are all their fault. Indeed, these people are the only ones who have a hope of solving the problem. Computer tweaking can, at best, only minimize the problem.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
You can equally say the above about the programmers.
No, I don't agree.
If the programmers aren't hired to write software as part of a complete system, e.g. computer, operating system, cable, DAC, other analog playback gear, then they can't be expected to optimize the complete system. The problem is that the software interacts with the computer hardware, the cabling, and the DAC in complex, non-linear fashion, and various ways of writing software that work well in one configuration may work less well in a different configuration. Another way of putting things: in the absence of solid system specifications to work from, it is not possible for the programmers to work in an engineering role.
Programmers writing software (or firmware or whatever you want to call it) for bespoke boxes are in a different situation entirely, since they are (or should be) part of a team that is doing system optimization. In the absence of a well defined system or rigorous specifications for components that enable individual components to work together well as part of a system, this is impossible. Hence the mess with Commercial Off The Shelf PC software (COTS). It is suitable for tweakers, to be sure, but the tweaks aren't going to translate into other environments. These issues are responsible for a huge amount of traffic on this forum. With proper engineering most of these problems would be long gone and we could discuss other items, such as musical and recording quality. But first, we have to get the DAC designers to stand up and admit that it's their problem and do something about it.
An example of the problems of system optimization in the absence of proper interface specifications is the impact of buffer size (e.g. latency) on sound quality. Assuming no gross problems (buffer underruns or overruns) the impact of buffer size decisions is to affect the frequency and duty cycle of various processing and data movement operations going on in the computer system as a function of hardware, software, and firmware. These in turn affect various voltages according to power supply design, wiring, PC board layout, bypassing, etc. The resulting electrical environment affects the operation of clock oscillators and clock buffers and signal drivers out the PC onto the cable to the DAC. The net result is that the jitter spectrum on the cable to the DAC will be affected by the choice of buffer size. How this affects the sound on the other end will depend on how the DAC is designed, e.g. how it recovers the clock from the incoming signal. If this is recovered using some kind of (analog or digital) phase lock loop, then there will be some frequency dependent attenuation of the jitter signal, and changes to the frequency spectrum of the jitter signal due to buffer changes will have different impact on different DACs. In addition, there will be psycho-acoustical impacts of jitter distribution that will depend on the music being played, the high frequency linearity (or lack thereof) of the analog amplifiers and speakers, and possibly listener dependent issues as well. This is all very complicated and imprecise, a.k.a. a huge mess. It is just not reasonable to blame the programmer (e.g. the author of foobar2000) for these problems when they depend on so many variables that are unknown and completely out of his control.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
'An example of the problems of system optimization in the absence of proper interface specifications is the impact of buffer size (e.g. latency) on sound quality.'
And people are writing software without any knowledge?
Such programmers will not survive for long in proper R&D.
Indeed, proving your point was one of my intentions. :-)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
S: It's all the fault of the programmers . . .
W: No, it's the engineers' fault . . .
S: I blame the chips . . .
W: The trade's going to the dogs . . .
DB: So why not show us your own designs so we can see where the others got it so wrong?
Howdy
The world isn't perfect, everything is a compromise. Even a component's power cord. Every manufacturer has to pick a compromise they can live with.
You can't get rid of jitter, you can only filter it. There are some methods of dealing with data which are more jitter prone than others, but even with the best (read "really expensive and well designed") methods jitter is still an issue, but hopefully less of an issue than the other (unavoidable) faults in a DAC (or any piece of equipment.)
Things are better now; I remember when most hardware designers/engineers didn't understand that you couldn't avoid metastability issues when more than one clock was involved. Once people understood the problem, they realized that they just had to make a decision about the cost compromises (both in engineering and in the resultant product) of narrowing the metastability window. Unlike, say, a decade ago, I think most audiophile DAC designers understand that jitter is an issue, and there are a lot of creative approaches to lessening its effects, with corresponding trade-offs.
-Ted
I am sure you are smart enough to solve the problem (of transport independence) if you put your mind to it. But you might have to junk your system architecture, since you ultimately derive the clock from the transport. This is not necessary or desirable in a high quality DAC, and I was shocked that you had adopted this architecture. Your entire DAC could perfectly well be implemented as a single clock domain given a suitable clock architecture. There would be no need for any synchronizers anywhere in your box. They could all be moved into the computer sound card, where they would be safely away from sensitive analog circuitry.
The metastability issue is a red herring. Competent computer designers have understood the problem since around 1970, when Alan Kotok first described the problems he encountered designing the KI10 processor. (I worked down the hall from him and used to go into his office to use his computer terminal on occasion, which is why I became aware of the issue at that time. There was even a conference on the subject, which he attended.) Of course there has been no shortage of snake oil peddlers, and even textbook authors, who couldn't get a grip on a problem that was, on the one hand, completely impossible to solve to perfection while, on the other hand, quite simple to solve in practice. Carver Mead has a good chapter on the subject in his "Introduction to VLSI Systems". The more philosophically inclined can read Leslie Lamport's paper on the subject, link attached.
Your reply confirms my earlier comment about designers. They aren't going to solve the problem if they make excuses rather than get on with finding solutions. Now that you have been humbled by discovering that your design wasn't transport independent like you had hoped, perhaps you will go back and figure out where you went wrong. Since you are retired and doing this work for fun, you are not subject to commercial pressures.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Howdy
Read what I wrote: you didn't address any of my points.
Metastability isn't a red herring. I didn't say that it's a problem now; I was trying to imply that jitter is like metastability in that both used to be scoffed at, and as time went on they both got taken more seriously.
I also said nothing about my designs or my design skills. Nor was I referring to my own designs. I was speaking from a fundamental knowledge of engineering. (As to my design choices, you obviously haven't been paying attention and are making many unfounded assumptions.)
Tho many of your posts are helpful to at least some of the readers here, some of your other posts confuse the issues, because you talk out of your hat instead of speaking from knowledge, or at least first verifying your assumptions. Wisdom is knowing the difference between what you do know and what you don't know.
-Ted
What I do know is some early posts of yours implied that your DAC was not sensitive to digital transports. Then later posts reported that you had discovered that (at least in another system) transports seemed to matter.
I wish you well in your quest to build a DAC and hope you are ultimately successful.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Howdy
Thanks for the good wishes.
Here's, perhaps, a more complete explanation of my position.
I didn't say that my DAC was transport independent: I said that my DAC wasn't sensitive to the transports I had used it on up to that point in time.
I always knew that it would be sensitive to transports, power cords, RFI, etc. because it exists in the real world. My goal was to see how much I could ameliorate these effects with the obvious (to me) implementations.
I didn't build my DAC for anyone else's system or goals. It was for my own edification and my own use. I found that it works better than I expected in some instances and a little worse in others. More importantly I learned that some of my shortcuts/engineering decisions failed miserably (hence the three separate incarnations of my boards.)
More incidentally I now have physical proof to back up some of my statements which were scoffed at in the past: some examples:
.) Some said that (in addition to the analog filter) you need digital filtering and/or other processing to convert a raw DSD bit stream to analog, in spite of me posting the spectra of various digital simulations of simple analog filtering of a DSD bit stream. It was obvious to me that they were missing something. A simple analog filter works quite well.
.) Some said (and still say) that a proper FIFO is sufficient to get rid of jitter. Once again, that's obviously false to me but obviously true to the bits-is-bits people. I defy anyone to find a fault in my FIFO implementations, or to build a FIFO that gets rid of all audible effects of jitter.
.) Some claimed that FPGA-based filtering of audio was too crippled to work acceptably. It seems obvious to me that, conversely, an FPGA is much better than a general CPU or DSP chip IF programming time isn't counted and the engineers involved know what they are doing. I've programmed DSP chips, general purpose CPUs and now FPGAs to do correct DSP processing.
.) Many people say that a flat frequency response doesn't sound the best, or conversely that room correction or speaker correction is a great idea. I've repeatedly had the experience that using room correction or speaker correction systems to flatten the frequency response of a system takes the life out of the music, but I always believed that a system which is inherently flat and doesn't need such corrections should have good PRaT. Unfortunately one of my mistakes subverted a direct and compelling test of this, but still, my DAC with a flat response is very involving and has plenty of life.
.) Some claim that any properly implemented power supply would be immune to changes of power cords. To me this should be obviously false to anyone who understands power supplies for audio components well. I did hope that I would render power cord changes on my DAC inaudible, but since designers I respect have failed at that same goal, I wasn't too surprised that I did too.
I read your "It's the fault of the DAC designers. They haven't done their job." post as another post espousing a simplistic view of real world engineering and I stand by my intended response that all real engineering involves compromises.
-Ted
P.S. Oh, also, I have plenty of familiarity with DACs which send the clock back to the transport. I have a Meitner stack, and I've worked for multiple companies who also provided for the use of a single clock domain in their products. I was surprised that I hear fewer audible effects from jitter in my DAC than in any of those systems or other "jitter proof" systems.
It's your project, and it's for you to say what its goals are and whether or not they have been (adequately) met. All in all, I think you've done a great job and provided an inspiration to many as to what can be done in a DIY project. Thanks for taking the time and effort to post about your project and for putting up with the various flak that has been returned. :-)
There is one point where I personally would have slightly different goals had it been my project, so if you don't mind I'd like to clarify one point.
".) Some said (and still say) that a proper FIFO is sufficient to get rid of jitter. Once again that's obviously false to me but obviously true to the bits-is-bits people. I defy anyone to find a fault in my FIFO implementations or to build a FIFO that works gets rid of all audible effects of jitter."
A proper FIFO is necessary to get rid of jitter. It won't be sufficient, as there can be other jitter coupling modes, e.g. power used by input decoding stages can couple through power and ground into output clock circuitry. In addition, a FIFO may work perfectly at moving bits, but fail to achieve jitter isolation. Such a FIFO will be suitable for some applications, e.g. a buffer in computer interface, but not suitable to achieve jitter isolation in a DAC. Each component in the DAC FIFO needs to be modeled in the analog domain and the jitter attenuation of the FIFO as a whole needs to be modeled and verified empirically. Until this has been done it's not possible to conclude that a given FIFO is "proper" for the DAC application. I suspect there are many FIFO architectures that work satisfactorily for pumping bits that fail to achieve additional jitter attenuation when cascaded. This will depend on the circuit design, layout and, especially, the clock architecture. There aren't a lot of components (e.g. transistors) in some FIFO designs, so it would seem possible to design in such a way that each stage of the FIFO provides a constant (dB) attenuation of jitter. It looks to me like you've got most of the tools at hand to investigate this aspect of the design, should you decide to do so at some point in the future. However, if your FIFO is in the FPGA there may be no way to achieve sufficient isolation, due to the design of the cells and/or the available wiring.
It won't be possible to get perfect isolation, nor is it necessary. The effect of jitter is to introduce noise modulation onto the output analog signal, and if you can get this noise modulation well below the output noise of the DAC itself that will be sufficient. Modeling and measuring the jitter related output noise caused by the available degree of isolation won't be easy, but it can be done and must be done if one wants to solve this problem.
It is also possible that the audible effects of changing the transport have nothing to do with jitter, or even to do with the DAC. There could be other modes of coupling (e.g. RFI/EMI) to the downstream analog components. There are probably experiments that can be devised to evaluate these coupling modes.
The only other part of your discussion that I could possibly disagree with concerns power cords. But I'm not really interested in power cords, since all the evidence seems to indicate that the effect of power cords depends on all the components in the system and the general electrical environment. If I were a fanatic about power cords, I would just get rid of them completely e.g. run my components on internal batteries. :-)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Howdy
A few minor points: a FIFO can't provide "constant jitter attenuation". It is at best a low-pass jitter filter: you can talk about the slope of the attenuation and its corner freq, etc. In my experience, as the corner frequency of the jitter filter goes down, the bass gets firmer (in addition to other effects.)
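As a generic illustration of "slope and corner freq" (a first-order model, not the filter in any actual DAC):

```python
import math

def attenuation_db(f, fc):
    """First-order low-pass response applied to a jitter tone at frequency f."""
    return -10 * math.log10(1 + (f / fc) ** 2)

fc = 1.0  # corner frequency of the jitter filter, Hz (illustrative)
for f in [0.01, 0.1, 1, 10, 100, 1000]:
    print(f"jitter at {f:8.2f} Hz -> {attenuation_db(f, fc):7.1f} dB")
```

Below the corner the jitter passes essentially untouched - that is the low-passed jitter that becomes wander - while above it the attenuation rolls off at 20 dB per decade.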
I've also wanted to try the opposite experiment: instead of trying to get rid of all jitter (which paradoxically can cause its spectrum to become more colored), perhaps whitening the jitter spectrum might provide a more practical way of achieving a cleaner sound. (Obviously I don't believe it will, given the direction I've taken, but still it's a different possible approach.)
As I've mentioned in other posts I assume that jitter is sometimes the only reasonable explanation for the effects of changing, say, a power cord. In the specific case I've talked about before the transport and the DAC were connected by glass fiber so along that path the only possible effect is jitter (or bit errors). The transport was 10' away from the DAC and the rest of the system so I don't expect that RFI was significant. The DAC, the transport and the other system components were on separate dedicated circuits so the AC coupling was fairly minimal, tho possibly audible. But now with further experience with jitter's audible effects I recognize that the firming of the bass when the transport had a more substantive power cord is one common effect of lower jitter. I don't claim this would convince someone else, but it's personal experiences like this that help to clarify one's journey in understanding (or at least rationalizing) "audiophile tweaks", in this case power cord effects and jitter effects.
It's a little off topic, but I often wish I could have more "bits-is-bits" (or objectivist) people over so they could hear for themselves the effects of some simple experiments: we could then do other experiments to help them clarify possible mechanisms in their own minds. I've found over and over that when even some of the die hard double blind proponents hear a significant difference sighted often it opens their eyes to possible explanations for the effects they hear in ways that no amount of discussions will.
-Ted
I'm not sure we are communicating regarding what a FIFO can and can't do as regards removing "bits ain't bits". Or perhaps I'm off base and have been consistently missing something.
I'm not really concerned about the rate adaptation function of a DAC FIFO, as this can be dealt with by the clock architecture (e.g. slaving the transport to the DAC master clock). It looks like you've solved the problem for most interesting cases, as you can make the corner bandwidth of your digital phase lock filter effectively zero once you've found a frequency setting that drains the buffer sufficiently slowly that no changes in rate are needed through the course of an entire track. The key to this, as it was in obtaining stable jitter with the FDDI reclocking system, is to use buffer load state as well as rate differences as input to the feedback loop. My understanding is that you are doing this.
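A toy simulation of that loop (illustrative numbers only, not anyone's product): the DAC-side clock is trimmed from both the buffer load error and the observed rate difference.

```python
# Transport runs slightly fast; the DAC's master clock has a fine trim input.
producer = 44100.5          # transport rate, samples/s (assumed offset)
consumer = 44100.0          # DAC-side clock being trimmed
fill = target = 44100.0     # FIFO occupancy in samples, aim for half of ~2 s
g_fill, g_rate = 0.02, 0.5  # small gains => very low loop corner frequency
prev_fill = fill

for t in range(12):
    fill += producer - consumer        # net samples gained over one second
    rate_err = fill - prev_fill        # observed drift, samples per second
    prev_fill = fill
    # feedback from buffer load state AND rate difference, per the post above
    consumer += g_fill * (fill - target) + g_rate * rate_err
    print(f"t={t:2d}s  fill error={fill - target:+8.3f}  consumer={consumer:.4f}")
```

With both terms the loop is damped; on the fill error alone it would oscillate, and the tiny gains are what push the effective corner frequency toward zero.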
The other problem arises if you just take a jittery signal and reclock it with a clocked flip flop. (Here I'm talking about what is going on within a single clock domain.) The output transitions are supposed to follow the local clock, not the transition times of the input signal, so long as the setup and hold times have been met. Of course they do not do so exactly, but the question is whether the net effect of the flip flop is to attenuate the variations. If so, then it should be possible to string a bunch of flip flops in series with an appropriate clock scheme and achieve any desired degree of attenuation. I suspect the problem is that the output level of a gate depends slightly on all the input signals (e.g. the output of a NOR gate will be at a slightly lower level if all the input signals are true compared to just one), and the propagation delay through a gate depends on the level of the input signals. Perhaps your SPICE simulations are such as to demonstrate this phenomenon (or lack thereof).
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Howdy
The issue is how you clock those flip flops without having the signals going into the first flip flop affect its clock or power supply ... and hence the clock or power supply of the final flip flop... If you get into it further, it's the same problem as using a PLL to attenuate jitter: you are at best filtering the jitter, and that filter isn't as simple as uniform attenuation. TANSTAAFL. Tho I don't have the proof at hand, I believe that once again the best you can do is some form of low-pass filtering of the jitter. In the limit, tho, low-passed jitter becomes wander, and wander slow enough not to affect the bass firmness isn't a problem.
I'm sure that I've posted my implementation of the idea you are talking about elsewhere but here's a synopsis:
I use an FPGA to do the bulk FIFO storage. The FPGA outputs a clock and the data associated with it, and these signals go into a high quality flip flop, with a separate clean power supply, to align the signals. Then the signals go from that flip flop to another with its own PS, which is clocked directly by the master clock (with its own PS and thru only a few mm of traces.) And when I say "own PS" I'm not just talking about a local ferrite bead and some caps.
The initial mistake I made was to clock the two external flip flops exactly out of phase with each other, which is almost the worst timing possible: the only worse timing would be to clock them in phase. The beauty of FPGAs is that I only had to change one parameter to advance the FPGA and its flip flop 90 degrees and make a significant difference in the audible jitter. Taking this lesson to the limit, you can see how there isn't room for an arbitrary number of flip flop stages: you eventually are switching some of the flip flops too close to the switching of other flip flops, increasing the possibility of jitter leakage thru the power supplies or clocks.
I'm not claiming that this proves my case, but just like other "jitter solutions" things aren't as simple as a first level analysis might lead one to believe.
-Ted
"all real engineering involves compromises."
Why limit the scope to engineering? I reckon it's an attribute of life...
The beleaguered design engineer has done his job if the thang he designed meets specs, is under budget and on time. Usually he's doing damn well to hit one of them. My take is that home audio is far more plagued by systemic definition difficulties than implementation problems - hence the 'system synergy' effects. No one knows how to specify what 'properly designed' is. On AA it's usually synonymous with perfect.
On the bright side it all makes for a great hobby. If everything sounded the same it wouldn't be interesting!
Regards Rick
Howdy
Thanks.
It would be nice if some of the "standard" audio interfaces were replaced with sane/well-engineered interfaces. But downward (or backward) compatibility also has many benefits - it's another of those compromises.
I originally didn't have any PCM interfaces on my DAC, but one day I woke up and realized that I had plenty of extra pins on the FPGA, so I "grafted" on DSD ST glass, TOSLink, S/PDIF and AES/EBU inputs. Then dealing with these signals was a software problem, and I can deal with those :)
Even if I were designing a DAC from scratch and intending to market it, I'd probably still offer essentially the same interfaces: for a while it'll still be more useful to most people to have TOSLink, S/PDIF, AES/EBU, USB Audio 1.0 and other "legacy" interfaces than a random custom "perfect" interface. If I can get my jitter susceptibility near that which a "perfect" interface could provide, so much the better.
I also suspect that even if I were to decide to allow myself to use some custom software on the PC and a custom "perfect" hardware interface, I'd still have essentially the same results by using some software to carefully downsample any 24/192 to 24/96 (or 176.4 to 88.2), use a standard hardware interface, and apply my creative energy elsewhere. Beyond a certain point, if time or money matter, there are more important things than absolute sample rates. (I know that marketing concerns would tend in a different direction.)
One thing I do know for certain is that every time I listen to music it's so relaxing and involving that I don't get much work done :)
-Ted
"...every time I listen to music it's so relaxing and involving that I don't get much work done :")
Seems like a character risk alright, there's just nothing like success to sap a man's motivation! The cool thing is that you, far more than most of us, have really 'earned' it.
I was actually thinking more of the analog and power interfaces, EMI issues, that sort of thing. The spectre of folks spending thousands of dollars on power cords fascinates me. It's sort of like having the engine fail in your car so you solve the problem by renting a team of elephants from the circus to pull it around. Yes, you do get there, but you know, the solution just isn't very elegant, albeit expensive and impressive.
Regards, Rick
Howdy
I agree, even tho I have aftermarket power cords on almost all of my equipment and silver interconnects and speaker wire :)
For interconnects the two obvious (to me) technical things are differential connections and connections which are (accurately) terminated at the receiving end. Both make technically sound differences and are available on some equipment.
But truthfully I get more of a difference with swapping power cords and/or power conditioning. But I'm not about to hack my amps to get rid of their sensitivity to power cords :) Someday I might build my own amps or power conditioners, but I doubt it. I do find a sweet spot with medium cost cords and conversely have experienced some real stinkers: both cheap and expensive.
The most surprising thing like this that I ever heard was a quality DAC and cheaper transport connected with ST glass hooked up with the DAC being the clock master and separate clock and data connections coming back from the transport (A Meitner DAC but not a Meitner transport). Even with the transport and DAC on separate dedicated AC feeds and the galvanic isolation of the interconnects, a change of power cords on the transport caused a very noticeable difference in the perceived loudness of the bass :) What the...?
In my current DAC I have no components which were selected by ear: all decisions were based purely on my preconceived notions of the desired specs and the parts meeting them :) I'm not averse to selecting components by ear (assuming they meet the technical specs), I just don't have the time or inclination: I'd rather spend my time making bigger differences with technically sound modifications.
I do get a lot of satisfaction of having done something real and even tho it didn't change my view on most audio matters I feel I can speak with a little more experience/authority on the subject.
-Ted
"I'm not about to hack my amps to get rid of their sensitivity to power cords"
Me neither! Leastwise without a pretty severe problem which fortunately I either don't have or am not aware of. Yet I'm not bothered by just changing interconnect cables and don't feel obligated to redesign the interface. No accounting for audiophiles...
"For interconnects the two obvious (to me) technical things are differential connections and connections which are (accurately) terminated at the receiving end."
Yeah, prolly more important for digital, although even for analog I ended up mostly using 300 ohm open lines with terminations and build-out resistors. The driver load ends up ~600 ohms, which is a bit low but seems to work OK with most gear, and they eliminated the mild 'tiz' that I ended up with after switching to open lines while the, well, 'openness' remained. My premise was that the most likely problems with interconnects were dielectric absorption and stray currents on the ground lead, and for speaker cables skin effect and current loop area. Since the stuff I did to try and address those concerns perked up the sound, I'm happy. However, that's not conclusive, and there is clearly more going on, some of it related to the wire itself. Even though I'm just using magnet wire (had a roll on hand), I don't scoff at folks using silver or specially pulled, treated or forged wires. However, I suspect that the 'problem', whatever it is, could be ameliorated in other, far less expensive ways if well understood.
Guess I have a small but finite cynical side that believes there is more interest in selling (and buying) essentially linear jewelry for audio systems than in identifying problems and implementing efficient solutions. Heck, nowadays they actually have jewels strung on those necklaces! If there's anything to that, it might be a better way to clean up reflections than terminations. One thing about audiophiling: there is never a dearth of things to explore!
Regards, Rick
All of this hinges on the way data is read, by triggering in regions of high dV/dt. It is an awfully difficult thing to do.
A lot of the hardware chips used are lousy at producing symmetrical waveforms without considerable ringing.
dCS pro gear is the only equipment I have measured that provides verifiable, 'high quality' digital waveforms.
To do this right you won't be able to use existing chips that have been made to implement inexpensive interfaces to ill designed specifications. It may be that there is no way to solve the problem at all using existing interface specifications, i.e. it may be necessary to separate the DAC into two parts that can be physically and electrically isolated from each other and which are interconnected by a properly designed and implemented signaling system. One is going to have to work down at the transistor level at least (as well as physical layout) of every single component in the DAC box to solve this problem. To do this will require a combination of analog and digital circuit design skills, something that is completely out of the league of most hardware designers and certainly most programmers, who barely rise to the level of "software engineers".
The short answer is that data should never be read in regions of high dV/dt. This is possible. The data rates involved are very low relative to the speed of existing logic circuitry - low by perhaps a factor of 1000 compared to available circuitry.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
'To do this will require a combination of analog and digital circuit design skills, something that is completely out of the league of most hardware designers and certainly most programmers, who barely rise to the level of "software engineers".'
This was why I only employed properly educated and trained 'Computer' or 'Information' engineers, not people who sat for 4 years in front of a screen doing 'virtual' design.
Yes, it's more accurate to say computers can cause the DAC's jitter to be worse. I say anything that affects it is a "cause" of jitter.
dear gordon,
are you not using a few too many exclamation marks in your reply here?
while there definitely is no jitter in a file, the transportation/read&write/buffering procedures within a computer can surely add jitter to the signal.
a PLL can of course clean up most of that before it enters the DAC chip, but certain digital philosophies suggest that the jitter has to be tackled at its root, and not cured afterwards.
I'm sure you're familiar with the cics approach:
http://www.cicsmemoryplayer.com/index.php?n=CMP.03Jitter
- or have I misunderstood your comment?
kind regards
cMP2 Computer (XP minlogon) Intel E7400 cPlay039>Allocator>Lynx Two B. /192kHz throughout. 2x AcousticReality Ref. 202 & 2x AcousticReality Ref. 601´s ICEpower. Magnepan MG3.3R beechwood frames & custom stands. Miller chokes on Ribbons.
Got MPD running with the MPoD iPhone app... sweet setup!
MPD sounds a bit better compared to Exaile. I'm using the ALSA output in MPD.
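For anyone wanting to replicate this: ALSA output is selected in mpd.conf with an audio_output block. The device name below is only an example; list your hardware with aplay -l and adjust.

```
# excerpt from /etc/mpd.conf -- illustrative values
audio_output {
    type    "alsa"
    name    "USB DAC"
    device  "hw:0,0"    # direct hardware device, bypassing dmix
}
```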
Do CUE sheets finally work well in Linux?
qmmp ( http://code.google.com/p/qmmp/ ) works fine with .cue files.
-h
No idea, I never used them.