Home Computer Audio Asylum

Music servers and other computer based digital audio technologies.

RE: It's the fault of the Programmers. They haven't done their job.

No, I don't agree.

If the programmers aren't hired to write software as part of a complete system (computer, operating system, cable, DAC, other analog playback gear), then they can't be expected to optimize the complete system. The problem is that the software interacts with the computer hardware, the cabling, and the DAC in complex, non-linear ways, and techniques that work well in one configuration may work less well in another. Put another way: in the absence of solid system specifications to work from, it is not possible for the programmers to work in an engineering role.

Programmers writing software (or firmware, or whatever you want to call it) for bespoke boxes are in a different situation entirely, since they are (or should be) part of a team doing system optimization. In the absence of a well-defined system, or of rigorous component specifications that let individual components work together as a system, such optimization is impossible. Hence the mess with commercial off-the-shelf (COTS) PC software. It is suitable for tweakers, to be sure, but the tweaks aren't going to translate to other environments. These issues are responsible for a huge amount of traffic on this forum. With proper engineering most of these problems would be long gone and we could discuss other things, such as musical and recording quality. But first we have to get the DAC designers to stand up, admit that it's their problem, and do something about it.


An example of the problem of system optimization in the absence of proper interface specifications is the impact of buffer size (i.e. latency) on sound quality. Assuming no gross problems (buffer underruns or overruns), the effect of a buffer size decision is to change the frequency and duty cycle of the various processing and data-movement operations going on in the computer, as a function of hardware, software, and firmware. These in turn affect various voltages, according to the power supply design, wiring, PC board layout, bypassing, etc. The resulting electrical environment affects the operation of the clock oscillators, clock buffers, and signal drivers that send the signal out of the PC, onto the cable, and to the DAC. The net result is that the jitter spectrum on the cable to the DAC will be affected by the choice of buffer size.

How this affects the sound on the other end depends on how the DAC is designed, e.g. how it recovers the clock from the incoming signal. If the clock is recovered with some kind of phase-locked loop (analog or digital), there will be frequency-dependent attenuation of the jitter, so changes in the spectrum of the jitter caused by buffer changes will have different impacts on different DACs. In addition, there will be psychoacoustic effects of the jitter distribution that depend on the music being played, on the high-frequency linearity (or lack thereof) of the analog amplifiers and speakers, and possibly on listener-dependent factors as well.

This is all very complicated and imprecise, a.k.a. a huge mess. It is just not reasonable to blame the programmer (e.g. the author of foobar2000) for these problems when they depend on so many variables that are unknown and completely out of his control.
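To make the two ends of that chain concrete, here is a minimal back-of-the-envelope sketch. It only illustrates two points from the paragraph above: (1) buffer size sets how often the computer must wake up and move data, and (2) a PLL clock-recovery stage attenuates incoming jitter more at frequencies above its loop bandwidth. The first-order low-pass model of the PLL's jitter transfer and the corner frequencies used are assumptions for illustration, not measurements of any real DAC.

```python
import math

def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Time one playback buffer lasts, i.e. the interval between refills."""
    return 1000.0 * buffer_frames / sample_rate_hz

def pll_jitter_attenuation_db(jitter_freq_hz: float, corner_hz: float) -> float:
    """Jitter attenuation of an assumed first-order low-pass PLL model.

    |H(f)| = 1 / sqrt(1 + (f/fc)^2); returned as attenuation in dB,
    where 0 dB means jitter at that frequency passes through unchanged.
    """
    mag = 1.0 / math.sqrt(1.0 + (jitter_freq_hz / corner_hz) ** 2)
    return -20.0 * math.log10(mag)

# A 4096-frame buffer at 44.1 kHz refills roughly every 93 ms; a 256-frame
# buffer refills every ~5.8 ms, a 16x higher rate of bursty activity --
# which is how the buffer choice shifts the spectrum of induced jitter.
print(round(buffer_latency_ms(4096, 44100), 1))   # ~92.9
print(round(buffer_latency_ms(256, 44100), 1))    # ~5.8

# Two hypothetical DACs: a narrow 100 Hz PLL corner vs. a wide 10 kHz one.
# The same jitter spectrum is attenuated very differently by each.
for corner in (100.0, 10_000.0):
    print(corner, [round(pll_jitter_attenuation_db(f, corner), 1)
                   for f in (10.0, 1_000.0, 100_000.0)])
```

Under this toy model, 1 kHz jitter is knocked down by about 20 dB in the narrow-bandwidth DAC but passes almost untouched through the wide-bandwidth one, which is the sense in which the same buffer-size tweak can sound different on different DACs.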

Tony Lauck

"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar



