Audio Asylum Thread Printer
In Reply to: RE: Chord up-samples? posted by play-mate on June 18, 2017 at 12:21:02
From the article, as stated by Chord engineer John Franks:
It uses very advanced digital sampling and whereas a typical industry DAC chip would have maybe 125 or so sampling filters, in the Dave we have 166,000 digital sampling filters; it's many orders of magnitude more complicated.
I don't know how Franks came up with the "quantity" of digital filters in a DAC... unless he's referring to the number of calculations performed by a digital filter over a set period of time.
The ultimate idea of the DAC is to reproduce the original waveform in all its complexity and perfection. The idea of the DAC is not to reproduce digital samples, which is that waveform. And if you can do that by taking as many samples as possible, faithfully reproducing all of the timing information which is locked in that signal, the strange thing about our brain is that it seems to be able to resolve information that technically our ears can't even hear.
I thought almost all DACs already process "as many samples as possible", at least for CD playback... A DAC cannot process "more samples" than what's on the digital media itself.
People often say how wide is the soundstage but it has no depth because their DAC isn't measuring enough samples to give enough timing information to the brain.
This implies that a lot of DACs "omit" some of the data samples on the media... That would be a damning indictment of the digital audio processing industry if it were actually the case, but I don't think it is.
Follow Ups:
If you read interviews with Rob Watts, the DAVE designer, he states that his goal is to make as long an FIR as possible. In theory, if the original analog signal is perfectly band-limited before sampling, then the perfect reconstruction filter in the DAC should have an infinite number of taps and ring forever. I suspect that is what Franks is referring to in the quotes you provided. DACs do not omit samples, but upsampling DACs create 'new' samples in the process. This is done not to add information but to increase the sampling frequency of the filter. The signal bandwidth stays the same, and it makes it easier to craft a given filter response.
13DoW
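Since the post above talks about making the FIR as long as possible, here is a rough back-of-the-envelope sketch (my own illustration, not Chord's actual design) using SciPy's Kaiser-window estimate: as the transition band from a 20 kHz passband edge is squeezed toward Nyquist at 44.1 kHz, the required tap count climbs quickly. The ~100 dB attenuation figure and the band edges are just assumptions for illustration.

```python
from scipy import signal

# Rough illustration: how many taps a windowed-sinc low-pass needs as the
# transition band (20 kHz passband edge up to the stopband edge) narrows,
# for about 100 dB of stopband attenuation at fs = 44.1 kHz.
# (Kaiser-window estimate; purely illustrative, not any DAC's real filter.)
fs = 44100.0
for stop_edge_hz in (22050.0, 21000.0, 20500.0):
    width_hz = stop_edge_hz - 20000.0
    numtaps, beta = signal.kaiserord(ripple=100.0, width=width_hz / (fs / 2.0))
    print(f"transition {width_hz:6.0f} Hz -> roughly {numtaps} taps")
```

The trend is the point: halving the transition width roughly doubles the tap count, which is why a near-ideal brickwall implies a very long (and long-ringing) filter.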
The longer the FIR filter, the more "perfect" the brickwall response..... But I was never impressed with longer filters used for CD playback, from a subjective listening perspective.
Oversampling filters 'create' new samples of the original signal. I think this is what Franks is alluding to.
Here's an Analog Devices tutorial that gives a good overview of the basic motivation and process.
"Oversampling filters 'create' new samples of the original signal. I think this is what Franks is alluding to."Franks used the phrase "faithfully reproducing all of the timing information"..... Oversampling cannot provide or improve this. And most DACs in existence already oversample/upsample. This is nothing new. This is why I thought Franks implied that DACs omitted some of the samples from the original data.
The "creation" of the samples is the execution of the digital filter function in the time domain. Once again, nothing new.
Edits: 06/21/17
Again, if you read what the Chord DAC designer has written, he believes that humans are very sensitive to small timing errors, and using a very long FIR filter is the way to reproduce the sample without a timing error compared to the original sampling point from the ADC. So, yes, nothing really new; Chord just takes filter length to the extreme, while the minimum-phase filter, which is almost the exact opposite approach, has become popular.
But the original question was really whether Chord uses two clock sources for 44.1x & 48x data and we still didn't answer him.
13DoW
Digital signal processing technology often is non-intuitive. One such confusing area is the 'altering' of sample values via digital filtering. While a digital reconstruction filter does indeed create new output sample values, these are mathematically derived from and correlated with the input samples. When a digital filter changes the output sample values, it is akin to what happens when an analog filter changes the characteristics of an analog signal. We don't view the analog filter as corrupting the signal. Likewise, we shouldn't view the digital filter as improperly corrupting the original samples simply because their values are being changed. That is what filters do, whether digital or analog: they alter the signal fed to them. The notion that this somehow represents a corruption or improper degradation is wrong.
_
Ken Newton
"While a digital reconstruction filter does indeed create new output sample values, these are mathematically derived from and correlated with the input samples."
That's a good point and why I put 'create' in quotes. What is the interpolating filter really doing? It's outputting sample values *as if we had sampled the original signal at the new higher rate in the first place*, at least in the ideal case. In reality, there will be inaccuracies because of filter design.
"What is the interpolating filter really doing? It's outputting sample values *as if we had sampled the original signal at the new higher rate in the first place*"
This is the very thing a lot of digital audio sales ads want unsuspecting consumers to believe..... (This has been going on for over 20 years.) But resolution cannot be improved beyond that of the original data on the media.
The interpolation filter in essence "connects the dots" between the original samples, in a way which preferably doesn't corrupt the original signal. In the frequency domain, the filter removes the spectral images (reflections) above half the original sample rate.
The oversampling is simply the digital application of filtering. (An analog application would be to add an LCR circuit following non-oversampled conversion.) It cannot enhance resolution.
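To put a number on the "cannot enhance resolution" point, here is a minimal sketch (my own toy example, using SciPy's generic polyphase resampler and two made-up test tones, not any particular DAC's filter): 8x oversampling produces seven new samples between every pair of originals, yet essentially nothing appears above the original 22.05 kHz Nyquist limit.

```python
import numpy as np
from scipy import signal

# Minimal sketch of "connecting the dots": 8x oversample a 44.1 kHz signal
# with a generic polyphase interpolator, then check that essentially no
# content shows up above the original Nyquist frequency -- new sample
# points, but no new signal content or "resolution".
fs = 44100
t = np.arange(8192) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 15000 * t)

y = signal.resample_poly(x, up=8, down=1)             # now at 352.8 kHz

spectrum = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1.0 / (8 * fs))
leak = spectrum[freqs > fs / 2].max() / spectrum.max()
print(f"worst content above 22.05 kHz: {20 * np.log10(leak):.0f} dB")  # far below the tones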
Understanding what an interpolating reconstruction digital filter does provides insight into the nature of the analog signal which is output from a D/A converter unit. An unfiltered D/A output contains the complete desired analog signal band; unfortunately, it also contains a series of repeating copies of that signal band, called image bands or simply images.

The audio reconstruction filter's job, whether it's digital or analog, is to remove these image bands by low-pass filtering the desired signal band. It is the image bands which give an unfiltered post-DAC signal its familiar discrete (typically, though not necessarily, stair-stepped) appearance. A pre-DAC digital signal (samples) can be filtered by a digital low-pass filter, or alternately, a post-DAC analog signal can be filtered by an analog low-pass filter to remove the image bands. Removing the image bands renders the otherwise discrete-looking output smooth and 'analog' looking. Keep in mind that the desired complete analog signal was mixed in there all along; it just needed the image bands to be removed in order to be recognized as such.
A digital reconstruction filter doesn't really produce samples as though the original native sample rate were higher. There's no increase of encoded signal bandwidth. It simply shifts the repeating image bands up in frequency by a multiple equal to the filter's oversampling ratio, where they become much easier to completely remove with a relatively simple analog output filter. You'll alternately see the reconstruction filter referred to as an image-rejection filter, or as an interpolation filter.
_
Ken Newton
Edits: 06/20/17
Two points of clarification... probably more for me than for you.
You're right - there's no increase of encoded input signal bandwidth because that signal is already band limited to prevent aliasing at the ADC.
The interpolation filter should already be a low pass filter because of those reflected images at the original base sampling frequency (fs) that you mentioned. However, the low pass filter won't remove the image at the new, higher sampling frequency, L * fs, where L is the interpolation factor. That's the job of the reconstruction (anti-imaging) filter, which can now have a much simpler design because the images are now centered around L * fs.
Yes, that's correct. The image bands are still present, which is indicated by the fact that the D/A output still exhibits a stair-stepped appearance, except with finer granularity steps after oversampled digital interpolation is applied. Since the image bands are shifted up in frequency by a multiple of the oversampling ratio, the residual image bands are more easily removed via analog final filtering. Also, as I recall, the sin(x)/x frequency-domain roll-off of zero-order-hold (stair-stepped) D/A unit operation inherently suppresses the oversampled interpolation image bands more effectively than it did before oversampled interpolation.

My DSP explanations intended for the lay person aside, I mostly jumped in to counter the commonly expressed notion that since digital interpolation changes the data sample values, it necessarily corrupts the pristine signal captured by the original data values. That is an (intuitive) analog signal domain processing notion, which has nothing to do with (often non-intuitive) digital signal domain processing.
_
Ken Newton
Edits: 06/20/17
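For anyone who wants to see the image bands discussed above in numbers, here is a toy simulation (my own sketch; the 1 kHz test tone, the crude repeated-sample "zero-order hold", and the 8x factor are all assumptions for illustration, not any real DAC): the first image of the tone at 44.1 kHz - 1 kHz = 43.1 kHz is only modestly suppressed by a plain stair-step output, but drops much further once 8x digital interpolation is applied first, leaving mainly images up near 352.8 kHz for a gentle analog filter to remove.

```python
import numpy as np
from scipy import signal

# Toy comparison of image-band levels: a 1 kHz tone at 44.1 kHz is
# "converted" with a crude zero-order hold (each sample simply repeated),
# once directly and once after 8x digital interpolation.
fs, hold = 44100, 64                       # 64x "analog" simulation rate
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1000 * t)

def level_at(y, rate, freq):
    """Spectrum magnitude near a given frequency (Hann-windowed FFT)."""
    w = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    f = np.fft.rfftfreq(len(y), d=1.0 / rate)
    return w[np.argmin(np.abs(f - freq))]

cases = (("plain ZOH", x, hold),
         ("8x oversampled + ZOH", signal.resample_poly(x, 8, 1), hold // 8))
for name, samples, reps in cases:
    z = np.repeat(samples, reps)           # stair-step reconstruction
    rel = 20 * np.log10(level_at(z, fs * hold, 1000) / level_at(z, fs * hold, 43100))
    print(f"{name}: 43.1 kHz image is about {rel:.0f} dB below the 1 kHz tone")
```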
When performing synchronous interpolation ("upsampling" or "oversampling"), generally the original data points are still present (although often scaled in value), and additional interpolated points are calculated between the already-existing data points.
In contrast, when performing asynchronous interpolation, virtually *all* of the original data points are discarded and replaced with calculated values. In the very best case, the common conversion between 44,100 and 48,000 samples per second, there would only be one original data point every 147 or 160 samples (depending on the direction one is converting), since 160/147 is the ratio of the two rates reduced to lowest terms. (In practice this has never been achieved, as doing so would require interpolating by either 147 or 160 and subsequently dividing by 160 or 147. The resultant intermediate frequency of 7.056 MHz is too great, and capable chips are priced too high, to be available in any known equipment. It is past double-DSD rates but would require 64-bit accumulators rather than the single bit used in DSD.)
Instead what is used is an ASRC (often also used as a "jitter eliminator") wherein the incoming and outgoing rates have no correlation whatsoever. In that case, every single output sample is an *interpolated* value. While each output sample is *related* to the input samples, the relationship is known only to the designer of the particular algorithm used by the ASRC chip.
My personal experience, and what appears to be a broad general consensus, is that synchronous interpolation (wherein the original sample points are retained or scaled) is sonically preferable to asynchronous interpolation (wherein the original sample points are completely replaced by interpolated values).
As always, solely my personal opinions, prone to error, and not necessarily those of my employer or pool-boy.
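As a sketch of the synchronous case described above (using SciPy's generic polyphase resampler, not any particular product's filter): 44.1 kHz to 48 kHz reduces to the integer ratio 160/147, with a notional intermediate rate of 44,100 x 160 = 7,056,000 samples/s that a polyphase implementation never actually has to materialize.

```python
from fractions import Fraction

import numpy as np
from scipy import signal

# Synchronous (fixed rational ratio) conversion: 44.1 kHz -> 48 kHz.
ratio = Fraction(48000, 44100)               # reduces to 160/147
print(ratio, 44100 * ratio.numerator)        # 160/147  7056000 (notional intermediate rate)

x = np.random.randn(44100)                   # one second of 44.1 kHz "audio"
y = signal.resample_poly(x, ratio.numerator, ratio.denominator)
print(len(y))                                # 48000 samples out
```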
Maybe the Chord DAC upsamples to the lowest common multiple of the popular audiophile sampling frequencies like the PS Audio DirectStream, e.g. 28.224 MHz.
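A quick arithmetic check of that "lowest common multiple" idea (my own calculation, nothing to do with either product's internals): the least common multiple of the usual consumer rates up through 192 kHz does come out at 28.224 MHz.

```python
import math

# Least common multiple of the common consumer sample rates (Python 3.9+).
rates = [44100, 48000, 88200, 96000, 176400, 192000]
print(math.lcm(*rates))                    # 28224000, i.e. 28.224 MHz
print([28_224_000 // r for r in rates])    # [640, 588, 320, 294, 160, 147]
```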
Here's a question from a curious enthusiast: I thought with interpolation by an integer factor, the original data points will still get modified a bit since there's no such thing as a perfect low pass filter. However, you can use half-band filters to 'retain the original samples', which seems to be the approach used by Schiit. What are the tradeoffs involved between the former and the latter?
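Regarding the half-band part of that question, here is a minimal sketch of the idea (a generic windowed-sinc half-band filter of my own construction, not Schiit's actual filter): for 2x interpolation, every even-offset tap is essentially zero, so the original samples pass straight through and only the in-between samples are newly computed.

```python
import numpy as np

# Half-band 2x interpolation: the filter's taps at even offsets from the
# centre are (numerically) zero, so original samples are retained exactly
# and only the new in-between samples come from the filter.
N = 31                                          # half-length; total taps = 2N + 1
n = np.arange(-N, N + 1)
h = np.sinc(n / 2.0) * np.hamming(2 * N + 1)    # half-band low-pass, gain of 2 folded in

x = np.random.randn(64)                         # pretend these are the original 1x samples
up = np.zeros(2 * len(x))
up[::2] = x                                     # zero-stuff to the 2x rate
y = np.convolve(up, h)[N:N + len(up)]           # filter, compensating the group delay

print(np.allclose(y[::2], x))                   # True: original samples retained
```

The trade-off, as I understand it, is that the half-band constraint pins the response to -6 dB at exactly a quarter of the new rate (22.05 kHz for 2x from 44.1 kHz), so you give up some freedom in the transition band in exchange for retaining the original samples and halving the multiplies.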
Charles, as you correctly surmised, I was addressing synchronous interpolation digital filtering in my upthread comments. I don't disagree with your pointed criticism of ASRC. As I understand it, the main problem with ASRC solutions is the operation of the 'ratio estimator' block. This block is tasked with determining the ratio of the input to output sample rate. The resulting ratio estimation is computed by taking a running average of the two rates, which can vary or drift slightly in value over time and thus provoke artifacts. In addition, very small rate differences, where the input and output rates are nearly, but not quite, the same, can also provoke ratio estimator artifacts.

Artifacts appear because the programmable filter coefficients (which are derived from the ratio estimator's computations) utilized in the rate conversion/interpolation polyphase FIR filter block are themselves interpolated values. This is done in order to compute coefficients which support arbitrary input/output ratios. One of the problem aspects of ASRC design seems to be insufficient polyphase filter coefficient precision, which is magnified in hardware-based IC converters due to limited hardware resources.
Just my very much non-authoritative understanding of the underlying problems. Perhaps, someone with ASRC design expertise will add to, or correct my above basic assessment.
_
Ken Newton
Edits: 06/22/17
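To illustrate the structural point made in the two posts above, here is a deliberately crude arbitrary-ratio resampler (plain linear interpolation, nothing like the long polyphase filters and ratio estimator in a real ASRC chip): with an in/out ratio that is not a neat integer fraction, essentially every output value is a newly computed, in-between value rather than one of the original samples.

```python
import numpy as np

# Toy arbitrary-ratio resampler: read the input at fractional positions
# and linearly interpolate.  With an "unrelated" ratio, practically no
# output sample coincides with an original input sample.
def toy_asrc(x, ratio):
    """ratio = fs_out / fs_in; linear interpolation at fractional positions."""
    pos = np.arange(int(len(x) * ratio)) / ratio     # fractional read positions
    idx = np.clip(np.floor(pos).astype(int), 0, len(x) - 2)
    frac = pos - idx
    return (1.0 - frac) * x[idx] + frac * x[idx + 1]

x = np.random.randn(1000)
y = toy_asrc(x, 48000.0 / 44100.003)                 # slightly "off" ratio, as an ASRC sees
print(f"{np.isin(y, x).mean():.3%} of output samples are original values")  # essentially 0%
```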
"Instead what is used is an ASRC (often also used as a 'jitter eliminator')"ASRC does not eliminate jitter, the input jitter gets encoded as "amplitude" errors in the converted signal. (The converted signal might not necessarily have less jitter.) If the same signal were to be converted in a later trial, a totally different set of "amplitude errors" would be encoded in the converted signal. The converted data in multiple trials would never be numerically identical.
In contrast, synchronous conversion does not introduce "amplitude errors", because timing is not involved in the conversion, just the values of each input sample. The converted signal in multiple trials would be numerically identical (provided no data is misread from the media).
"My personal experience, and what appears to be a broad general consensus, is that synchronous interpolation (wherein the original sample points are retained or scaled) is sonically preferable to asynchronous interpolation (wherein the original sample points are completely replaced by interpolated values)."
This is because with ASRC, the amplitude errors from the input jitter embedded in the converted signal are essentially noise introduced into the signal. Synchronous conversion does not have this problem.
Edits: 06/21/17
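A toy model of that claim (my own sketch of timing error turning into amplitude error; it does not model any actual ASRC chip): sampling a 10 kHz sine at instants perturbed by roughly 1 ns RMS of jitter produces small but real amplitude errors, and two runs with different jitter never produce numerically identical data, unlike a purely value-based synchronous computation.

```python
import numpy as np

# Timing error becomes amplitude error: evaluate a sine at jittered
# sample instants and compare against the ideally timed samples.
rng = np.random.default_rng()
fs, f0 = 48000, 10000
n = np.arange(4096)

ideal = np.sin(2 * np.pi * f0 * n / fs)

def jittered_capture():
    t = n / fs + rng.normal(scale=1e-9, size=n.size)   # ~1 ns RMS timing error
    return np.sin(2 * np.pi * f0 * t)

err = jittered_capture() - ideal
print(f"RMS amplitude error: {np.sqrt(np.mean(err ** 2)):.2e}")
print(np.array_equal(jittered_capture(), jittered_capture()))  # False: trials differ
```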
> > ASRC does not eliminate jitter, the input jitter gets encoded as "amplitude" errors in the converted signal. < <
I must admit to never having studied the underlying theory of ASRC's in great depth, simply because I've not liked the sound of any I've tried. Knewton's post just above (in Classic view) has a fair amount of technical information that seems accurate to me.
All I can say is that whatever the (real, perceived, or imagined) shortcomings of ASRC's may be, they work well enough to fool present day measurement technology. I have performed careful comparisons with and without the ASRC, and an ASRC can easily make any DAC implementation (whether poorly or perfectly executed) yield essentially perfect results on the JTest as used by both Stereophile and Hi-Fi News (and I believe also at least one German publication).
In my opinion ASRC's are the chief reason that one sees virtually "textbook perfect" results in many JTest measurements for the past few years, even on sub-$300 DACs. It also means that listening is even more important than ever. Before ASRC's became widely available, there used to be at least a weak correlation between the JTest measurement (which could show the quality of implementation of a DAC) and sound quality. Currently there can be an inverse correlation (in my personal opinion), as I've yet to hear an ASRC that sounds musically natural, yet they all measure essentially perfectly with the JTest.
As always, strictly my personal opinion and not necessarily that of my employer or lap dog.