Audio Asylum Thread Printer
In Reply to: RE: Not really. posted by Charles Hansen on June 10, 2017 at 12:02:34
I responded to J. Phelan's first sentence, which was a follow-up to the prior posts.
The rest of his post was extraneous.
I parsed your post incorrectly. My mistake.
This thread was about MQA vs. a failed 'higher-rez' format, so any similarities should be allowed to come up.
Maybe I didn't clarify "shady science". If 'temporal blurring' exists, do we have any studies, in the past 35 years, that show it is a problem? If so, who, before Meridian, tried to address it? It seems strange that they're the only ones, this late in the game.
> > If 'temporal blurring' exists, do we have any studies, in the past 35 years, that show it is a problem? If so, who, before Meridian, tried to address it? < <
To the best of my knowledge, the chronology of "temporal blurring" (MQA's marketing term for a phenomenon referred to by various names over the years) is as follows:
1) It was first identified in 1984 by engineers from Studer and Soundstream (maker of the world's first commercially available digital audio recorder, with a 50kHz sampling rate; the company later morphed into a car audio firm) in an AES paper, "Dispersive Models for A-to-D and D-to-A Conversion Systems".
2) Wadia was the first (late 1980s) commercial example of a product that traded a small rolloff in the top octave (ie, frequency domain) for dramatically reduced "ringing" (ie, time domain).
3) Pioneer popularized Wadia's approach in the early 1990s and called it Legato Link. Their production volume was so large that soon all Burr-Brown DAC chips (Pioneer's supplier) included a "slow rolloff" digital filter option.
4) Sakura System's "47 Lab" was the first digital product to eliminate the reconstruction filter in the D/A converter altogether. This was first announced in the Japanese DIY magazine "MJ" (roughly equivalent to "The Audio Amateur") in a series of articles in 1996 and 1997. This represented an extreme exploration of the audible effects of digital filtering, at least on the playback side.
5) Ayre was the first audio manufacturer (in 1999) to allow the user to select between the "standard" approach and Wadia's approach to digital filters. There was a rear-panel switch labeled "Listen/Measure"; the owner's manual recommended the time-optimized "Listen" position, and all known reviews agreed.
6) In the late 1990s Sony and Philips developed the SACD format, specifically designed to replace the expiring patents of the CD format. Perhaps the greatest difference between the DSD format and conventional PCM is the elimination of anti-aliasing filters on the A/D side, due to the extremely high sample rate. The D/A side used a 3rd-order (18dB/octave) analog filter at 50kHz to reduce the levels of out-of-band noise that could disrupt or damage downstream equipment, including both electronics and transducers. (For comparison purposes, the first-generation Sony CD player used a 9th-order [54dB/octave] analog reconstruction filter.)
This was the basis for one of the prominent claims for SACD - its "superior" pulse response. In my opinion these claims were misleading: while they accurately showed the superior (narrower) pulse response of the A/D converter, they apparently did not show the broadening of that pulse after it passed through the low-pass filter in the D/A converter. Nonetheless, my personal opinion is that the major sonic difference between DSD and *conventionally-implemented* PCM is due to the filtering used (brickwall anti-aliasing on the A/D side and brickwall reconstruction on the D/A side).
This was another signpost pointing out the important sonic impacts that digital filtering imparts. (As a side note, SACD forbade access to the unencrypted digital data stream - just as with MQA.)
7) In 2004 Peter Craven published an AES paper ("Antialias Filters and System Transient Response at High Sample Rates"), commissioned by Meridian, on so-called "apodizing" filters. Craven noted that any "pre-ringing" created by a steep linear-phase filter in the A/D converter could be filtered out later in the chain. However, at the popular 44.1kHz sampling rate the apodizing filter exhibited about the same total (ringing) energy as the original anti-aliasing filter. The claimed advantage was that by using a minimum-phase filter, the energy in the "pre-ringing" would be shifted to the "post-ringing".
This would seem to be a much less objectionable effect, as all sounds in nature create "post echoes" due to the presence of any reflective surfaces near the sound source. Psychoacoustic testing has confirmed that the ear/brain is distinctly more sensitive to "pre-ringing" than to "post-ringing".
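To make the linear-phase/minimum-phase tradeoff concrete, here is a minimal sketch in Python (assuming NumPy and SciPy; the tap count and cutoff are arbitrary illustrative choices, not any manufacturer's actual filter). It designs a steep linear-phase low-pass, converts it to minimum phase, and reports how much impulse energy arrives before the main peak:

import numpy as np
from scipy import signal

fs = 44_100                                # CD sampling rate
h_lin = signal.firwin(255, 20_000, fs=fs)  # steep linear-phase low-pass FIR
h_min = signal.minimum_phase(h_lin)        # ~same magnitude, minimum phase

for name, h in (("linear-phase", h_lin), ("minimum-phase", h_min)):
    peak = np.argmax(np.abs(h))
    pre = np.sum(h[:peak] ** 2) / np.sum(h ** 2)
    print(f"{name}: {pre:.1%} of impulse energy before the peak")

The linear-phase filter puts roughly half its impulse energy before the peak (the "pre-ringing"); the minimum-phase version shifts essentially all of it after the peak.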
8) Meridian introduced products that included apodizing filters in 2008/9, I believe. Around the same time, having read Craven's 2004 paper and conducting further independent research, Ayre released products allowing the user to select between either an apodizing (sharp minimum-phase) filter or a slow rolloff minimum-phase filter.
9) In 2012 Ayre developed the world's first PCM A/D converter that used a digital filter with no time-domain artifacts whatsoever at dual- and quad-sample rates. Single-sample rates had the minimum amount of time smear, achieved at the cost of a slight rolloff in the top octave. All recordings made with this converter have no "errors" that MQA could possibly correct.
10) Meridian began working on MQA by 2013 at the latest, and possibly earlier. The original focus appeared to be reducing the file size of high-res files, largely for storage on portable players. An AES paper on the subject was published in 2014 ("A Hierarchical Approach to Archiving and Distribution").
11) Since the original announcement, MQA's goals seem to have expanded, first to "correct" for errors made by the A/D converter used to create the original files, and later to "compensate" for the signature of the particular digital filter/converter chip (apparently one needs to sign a non-disclosure agreement to know precisely what is involved) in a particular D/A converter. MQA's method of "de-blurring" A/D errors appears to closely resemble Ayre's slow rolloff minimum-phase digital filter from 2009.
The bottom line is that there has been a fairly broad awareness of time-domain issues involved with digital audio for over 30 years. It has been examined in some detail, both in academic journals and by equipment manufacturers. I hope this overview is helpful.
As always, strictly my personal opinions and not necessarily those of my employer or lap-dog.
The term "time-smear" could be said to encompass jitter, too. At one time one thought of jitter in terms of ugly anharmonic tones, but these days some people think of it (timing errors in digital data transmission) as having different, especially spatial effects--smearing images, making them seem less solid. The industry spent a lot of years obsessed by jitter, and even today, some designers (I'm thinking in particular of Ted Smith) talk about it a lot.
Thought of that way, the whole history of digital audio, more or less, could be considered the history of dealing with "time-smear."
> > the whole history of digital audio, more or less, could be considered the history of dealing with "time-smear." < <
Yes. And in an interview I noted that when I look back on my career, even beginning with loudspeaker design, various forms of "time smear" have been the fundamental basis for the path I've blazed. Therefore I would take your thought to the next level and remove the qualifier "digital". This only makes sense, as the ear/brain is far more sensitive to time-domain information than to amplitude-domain information.
Charles, thanks for this. Makes sense to me--and it's great to have such a perspective on your own career, which, from my perspective--and I say this sincerely, as an audio writer and long-time Ayre owner--is most impressive. I hope I can say that without it seeming weird or forced.
If we can just clarify what 'jitter' is -NOISE.
I've seen measurements showing a 10-20dB spike in phase noise. Is this gone? And if so, how? Outboard clocks, according to Stereophile, actually made timing WORSE (as shown in a JA clock review, about 10 years ago).
There seems to be a general lack of clarity as to the definition of terms that actually have precise technical meanings. I will attempt to clear these up in a straightforward way:
1) Jitter refers to the timing variations in what should be a *perfectly* steady reference timing signal. While all of us can understand the basic concept, it turns out that specifying the jitter level of a device, signal, clock, or whatever is not as useful as one would hope.
For starters, jitter specifications are given in units of time (eg, picoseconds). This in itself can be misleading, as the absolute amount of jitter (for example, 1 picosecond) can either be a very small percentage of a low-frequency clock (for example, 0.0000000001% of a 1 Hz clock) or a much higher percentage of a high-frequency clock (for example, 1% of a 10GHz clock).
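A trivial worked example of that arithmetic (hypothetical numbers, just restating the point above):

def jitter_percent(jitter_s, clock_hz):
    # jitter expressed as a percentage of one clock period (period = 1/f)
    return 100.0 * jitter_s * clock_hz

print(jitter_percent(1e-12, 1.0))    # 1 ps of a 1 Hz clock   -> 1e-10 %
print(jitter_percent(1e-12, 10e9))   # 1 ps of a 10 GHz clock -> 1.0 %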
A second problem is whether the jitter is specified as a peak value or an RMS value. Unfortunately there is no fixed relationship between these two, as it depends on the source of the timing variations. For example, if the jitter were due solely to power supply ripple affecting a circuit, the peak jitter would be 1.414x the RMS jitter. At the other extreme, if random noise effects in the circuit are the source of the jitter, the peak jitter levels will *on average* be 12x the RMS jitter levels. But as noise is a random event, there will be random times when the peak jitter is greater than 12x the RMS value and other random times when it is less.
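A minimal simulation of those two extremes (purely illustrative; note that for random noise the observed peak keeps growing with record length, which is why noise-driven peak jitter is usually quoted against an assumed observation time or bit-error rate):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
cases = {
    "sinusoidal (supply ripple)": np.sin(2 * np.pi * np.arange(n) / 1000),
    "gaussian (random noise)": rng.standard_normal(n),
}
for name, x in cases.items():
    print(f"{name}: peak = {np.max(np.abs(x)) / np.std(x):.2f} x RMS")

The sinusoidal case lands on 1.41x; the Gaussian case comes out around 5x for this record length, rising as the observation window grows.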
A third problem with jitter is that it tells us nothing of the spectral distribution of the timing errors. Comparing with analog speed (timing) variations: variations in speed below about 5Hz create a characteristic change in the sound that we describe as "wow". Faster speed variations, from 5Hz to 20Hz, give a different audible effect that we describe as "flutter". Even higher speed variations (typically found in analog tape) are called "scrape flutter" and have yet another audible characteristic. Turning back to digital, it is also known that the same levels of jitter concentrated at different frequency ranges will have different audible consequences, but a jitter specification only gives us a single number, with no information as to the spectral distribution of those timing variations.
A fourth problem with jitter is that it does not tell us whether the jitter is correlated with the data or not. For example, if one were to look at the spectral distribution when playing back full-scale tones of various frequencies, and the spectral distribution of the jitter also changed, it would be referred to as "correlated" jitter. If the jitter spectrum did not change, it would likely be due to random noise in the clocking circuits and would be referred to as "uncorrelated" jitter.
2) A much more useful way of describing timing errors in a digital system is to use a phase-noise plot (or graph). This shows the jitter levels at various frequency offsets from the main (carrier) frequency of an oscillator, and is typically used to characterize oscillators and (less commonly) phase-locked loops (PLLs). (NB: In the latter case the phase-noise plot shows the phase noise of the clock generator - typically some type of oscillator - and the PLL as a combined system.)
Unfortunately, at this point in time even phase-noise plots have limitations. Only recently has sufficiently sensitive test equipment been designed to measure phase noise with accuracy, and these instruments are too expensive (between $15,000 and $100,000) and too complex to operate to be widely used. There is not enough data to provide generally accepted correlations between the spectral distribution of jitter and its audible effects. Nor are oscillator circuits ever measured for phase noise when using anything but the very best laboratory power supplies - which may not reflect the performance of that same oscillator when used in an actual piece of digital audio equipment.
Another advantage of using a phase-noise plot is that by simply integrating the total amount of phase noise over a specified frequency range, one can calculate the amount of jitter it represents. However, the reverse is not true - one cannot determine the spectral distribution of a phase-noise plot from a jitter specification. This is similar to knowing the THD of an analog circuit compared to looking at an FFT of the analog waveform. The FFT will show how much of the distortion is 2nd harmonic, 3rd harmonic, and so on, and it is known that the ear is more sensitive to high-order harmonic distortion than to low-order. There is simply more information in an FFT plot compared to a single THD number.
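As a sketch of that integration (the L(f) values below are invented purely for illustration, not measurements of any real oscillator; the formula is the standard single-sideband phase-noise-to-jitter relation):

import numpy as np
from scipy.integrate import trapezoid

f0 = 24.576e6                                    # hypothetical master clock, Hz
offsets = np.array([10, 100, 1e3, 10e3, 100e3])  # offset from carrier, Hz
L_dbc = np.array([-70, -100, -130, -145, -150])  # hypothetical SSB L(f), dBc/Hz

# convert dBc/Hz to linear power, double for both sidebands, then integrate
# (a coarse trapezoid over sparse points - fine for an illustration)
phase_var = 2 * trapezoid(10 ** (L_dbc / 10), offsets)  # rad^2
rms_jitter_s = np.sqrt(phase_var) / (2 * np.pi * f0)
print(f"RMS jitter, 10 Hz to 100 kHz: {rms_jitter_s * 1e12:.2f} ps")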
3) It should be noted that these concepts are apparently too abstract or difficult for all to understand fully. For example, in the linked video a digital designer made at least one inaccurate statement regarding jitter - specifically, that jitter generated by imperfect clocks in the A/D converter and embedded in the digital file could be attenuated (or even eliminated?) by using a low-pass digital filter that attenuates the high frequencies encoded in that file. I can't even begin to claim to understand what he was attempting to describe, except that it is not correct. The designer did note that he uses a master oscillator from Silicon Labs. These are interesting designs in that they do not use quartz crystals as their resonant element, but instead tiny strips of silicon etched into the semiconductor wafer itself (MEMS, or Micro-ElectroMechanical Systems). This technology does not have the frequency stability required to reach low levels of phase noise at low frequency offsets from the oscillator (carrier) frequency. Again, there is no consensus as to the audible effects of this, except to note that crystal-based oscillators have significantly better measured performance in this area.
As always, strictly my personal opinions and not necessarily those of my employer or you, the reader.
Another great piece.
It seems hard to believe that digital, so accurate with a fixed-clock (or re-clocking), has timing errors. I would understand it if we were transferring files, where a read-out will show errors.
My belief that 'jitter is noise', comes from the head of Bel Canto, who, in his interviews, said that "jitter is noise that shows up as timing errors".
And white papers from either TI or Analog Devices, which showed noise-spikes (unwanted voltages) that appeared to affect timing. And this correlates with your piece.
As a computer guy, I used to think much the same thing -- how hard is it to get the timing right?
When I was introduced to Ed Meitner, of EMM Labs and Meitner Audio fame, that changed, because when he explained it to me, I figured out it's not only about thinking of the data in terms of 1s and 0s, but, rather, how the hardware and software have to decode a 1 or a 0 from the stream. You see, there isn't a 1 or a 0 to flash a light on -- there's a representation of it in an electrical medium, and the system has to interpret that. How accurately in terms of timing that interpretation happens affects bit-to-bit jitter.
It's for these reasons that Meitner is 100% against external clocks, as are others, which I can understand -- because where the clocking matters most is right where the bitstream enters the DAC, so that's where you want it situated.
But wouldn't an outboard clock separate phase noise from sensitive electronics?
Clock reviews, over the years, reported an improvement, including one a year or so ago on Ultra Audio.
There are many issues with regards to clocks. And preferences among reviewers can vary.
With that in mind, I have not attached an external clock to any DAC I've had, as I've never had a DAC that takes one. To date, though, I'll say that the best DAC I've heard is the new DA2, from EMM Labs. Review coming in July.
Can noise affect timing? I don't know.
Interestingly, numerous recording engineers I know say they have never heard a SOTA DAC sound better with an external clock, only worse.
> > If we can just clarify what 'jitter' is -NOISE. < <
You should watch this video. Sorry, it's not short, and it doesn't answer your questions, but you would benefit from watching it.
The gist: jitter=noise is not current thinking.
Depends what you mean by "noise," I suppose.
"time smear" goes back to the first tape recorders. Wow and flutter. Nothing new.
Many of today's DACs have virtually unmeasurable jitter. There are numerous non-proprietary ways to deal with it, including synchronous upsampling like Bryston does.
> > Many of today's DACs have virtually unmeasurable jitter. There are numerous non-proprietary ways to deal with it, including synchronous upsampling < <
1) There are only two magazines (both print) of which I am aware that publish jitter tests - Stereophile in the US and Hi-Fi News in the UK.
2) Both have upgraded their measurement equipment at least once, making it impossible to compare earlier tests with their more recent ones. Specifically, Stereophile was the first to measure jitter, using a unit from Ed Meitner. The difficulty with this unit was that it required opening up the unit under test and connecting test leads to specific pins on the DAC chip itself. This wasn't so bad in the early '90s, as all of the parts were large through-hole devices and most of them were R-2R "ladder" DACs that were sensitive to jitter on the word-clock pin.
By the end of the '90s there were new DACs, almost all surface-mount devices that are extremely difficult to probe without damaging the DUT. In addition, the ladder DAC chips were changed such that jitter was only important on the bit-clock pin, and new delta-sigma DAC chips were available that are sensitive to jitter on the master-clock input pin (which drives the modulators in the output stage). All of this led Stereophile to switch to a new analyzer, developed by Paul Miller (currently editor of Hi-Fi News). This machine obviated the need to open up the DUT, as it only measures the analog output signals. It was made from plug-in A/D cards made by National Instruments that fit into a desktop computer, along with audio-specific test routines developed by Miller. The problem with that early machine was that the A/D converters used were only 16-bit and they were inside an extremely noisy environment (a desktop computer with dozens of high-speed clocks and switching power supplies). Jitter tests made with this machine typically show artifacts of the test equipment in addition to actual jitter in the DUT.
Many years ago Stereophile switched to an Audio Precision, which has a noise floor low enough to accurately measure audio equipment without the artifacts of the test equipment interfering. Hi-Fi News still uses Paul Miller's equipment, but at some point the National Instruments cards were upgraded to use 24-bit A/D converters and improved shielding. It's unclear if that setup performs equally to Stereophile's Audio Precision, but it is likely quite close.
3) In addition to the constantly improving measurement equipment, there has been a parallel trend in many of the D/A converters that are tested. Specifically, they often incorporate Asynchronous Sample Rate Converters (ASRC), either as separate chips or built into many modern DAC chips. When an ASRC is employed, almost any D/A converter will exhibit "textbook perfect" results on the J-TEST jitter test used by both Stereophile and Hi-Fi News. However, my personal experience is that ASRC can dramatically improve the measured performance of a converter while at the same time significantly degrading its audible performance. YMMV.
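For reference, a minimal sketch of the J-TEST stimulus as I understand the usual published recipe (levels and length here are illustrative): an undithered tone at exactly Fs/4 plus an LSB-level square wave at Fs/192, chosen to provoke data-correlated jitter in the interface:

import numpy as np

fs = 48_000
t = np.arange(fs)                                       # one second
tone = 0.5 * np.sin(2 * np.pi * (fs / 4) * t / fs)      # Fs/4 tone
lsb = 1 / (1 << 15)                                     # 16-bit LSB
square = lsb * (1 - 2 * ((t // 96) % 2))                # Fs/192 square wave
jtest = np.round((tone + square) * (1 << 15)) / (1 << 15)  # quantize to 16 bits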
4) Unlike ASRC, so-called "synchronous upsampling" does nothing to reduce jitter (measured or audible). It can affect the sound quality, as it is simply a specific type of digital filter. As such, different implementations will affect the sound differently depending on the parameters of the filters used.
The bottom line is that I agree that, comparing jitter measurements from now with those of 10 (or even 20) years ago, the equipment definitely appears to have improved. I am much less clear that most equipment has actually improved when it comes to audible jitter, especially when ASRC is employed.
As always, strictly my personal opinion and not necessarily that of my employer or pharmacist.
Very educational - thank you.
I stand corrected on synchronous upsampling. Interestingly, on the Bryston DACs, engaging it makes the sound a bit softer, but very pleasing, and a bit more textured. This is across the board.
The upsampling is user-selectable. I think that is a nice idea.
> > Interestingly, on the Bryston DACs, engaging it makes the sound a bit softer, but very pleasing, and a bit more textured. < <
It appears that you are referring to the Bryston BDA-3. According to the owner's manual, "the internal sample rate converter upsamples incoming 44.1kHz and 88.2kHz digital audio to 176.4kHz. All 48kHz and 96kHz digital audio upsamples to 192kHz. The Upsample feature does not affect HDMI or USB."
That unit uses dual mono AKM AK4490 DAC chips, which have built-in 8x digital "oversampling" filters. (The correct technical term is interpolation filters.) When "upsampling" is engaged (again, the correct technical term is "interpolation"), a separate interpolating digital filter is inserted prior to the interpolating digital filter built into the DAC chip - nothing more and nothing less. There are three things to note in this situation:
1) Concatenating digital filters is done all the time, almost always to save money. Virtually all 8x interpolating digital filters are a concatenation of three 2x interpolating digital filters. Nothing new here.
2) The first digital filter in the chain has the greatest sonic impact. It is possible that the sonic differences are simply due to the different characteristics between the first stages of the external ("upsampling") and internal ("oversampling") interpolating digital filters.
3) The other factor that may come into play is the rate at which the modulators in the DAC chip are operating. Depending on the specific internal architecture of the DAC chip, the modulators may or may not be operating at different rates when presented with different input rates (eg, single-, dual-, or quad-rate signals). I've not seen any that are so affected (the modulators typically operate at the master clock rate set by the local crystal oscillator), but it is conceivable that there are exceptions with which I am unfamiliar.
The bottom line is that the "upsampling" feature inserts an extra digital filter into the signal chain. This is very much the situation with MQA, as well. The degree to which the sound is affected (for better or worse) in either situation is simply due to the effect of the extra digital filter.
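A minimal sketch of what such an interpolating filter does (taps and test signal are illustrative only - these are not Bryston's or AKM's actual filters). Each 2x stage zero-stuffs the input and low-pass filters away the resulting image, and an 8x interpolator is built by concatenating three such stages:

import numpy as np
from scipy import signal

def interp2x(x, numtaps=63):
    # one 2x stage: insert a zero between samples, then low-pass at the
    # original Nyquist frequency to remove the image created by zero-stuffing
    up = np.zeros(2 * len(x))
    up[::2] = x
    h = signal.firwin(numtaps, 0.5)        # cutoff at half the new Nyquist
    return 2 * signal.lfilter(h, 1.0, up)  # gain of 2 restores signal level

x = np.sin(2 * np.pi * 1000 * np.arange(441) / 44_100)  # 1 kHz at 44.1 kHz
y = interp2x(interp2x(interp2x(x)))        # three 2x stages = 8x (352.8 kHz)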
Just out of curiosity, how would you describe the magnitude of the sonic difference created by the addition of the "upsampling" digital filter in the Bryston DAC?
As always, strictly my own personal opinions and not necessarily those of my employer or other digital engineers.
OK, I have no problem admitting that some of the DAC signal processing you lay out here is over my head!
But to answer your question, on the BDA-3 DAC, the difference in sound when the upsampling is engaged is pretty stark. It is almost like the signal is being passed through a lush tube buffer. Better? Definitely different.
I also wonder how hardware upsampling like the Bryston scheme differs from upsampling at the server stage, like with Roon, or even HQPlayer. Upsampling to DSD in HQPlayer was a big fad recently.
Now, with Roon providing that capability along with the powerful DSP tools discussed previously (added in the last update), to me it looks like MQA is more obsolete with every week that goes by.
EDIT: I am also reminded of the Sony HAP players, which have a user-engageable "Remastering" process. Digital filtering, of course.
> > Upsampling to DSD in HQPlayer was a big fad recently. < <
There is a potential for this to improve the performance of some particular implementations of delta-sigma DAC chips. Depending on the internal architecture, it is possible to run the modulator on the output stage at higher rates, which will definitely change the sound - possibly for the better.
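To illustrate what "the modulator" refers to here, a toy first-order delta-sigma modulator (real DSD modulators are typically 5th-order or higher with far more sophisticated noise shaping - this is only a sketch of the concept):

import numpy as np

def dsd_first_order(x):
    # integrate the error between the input and the previous 1-bit output,
    # then quantize to +/-1; the density of +1s tracks the signal level
    y = np.empty_like(x)
    acc = 0.0
    prev = 0.0
    for i, s in enumerate(x):
        acc += s - prev
        prev = 1.0 if acc >= 0 else -1.0
        y[i] = prev
    return y

fs = 44_100 * 64                               # DSD64 bit rate
t = np.arange(fs // 100) / fs                  # 10 ms of signal
bits = dsd_first_order(0.5 * np.sin(2 * np.pi * 1000 * t))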
As always, strictly my personal opinion and not necessarily those of my employer or baby-sitter.
My Master 7 DAC can be run NOS or at up to 8x oversampling. I prefer NOS, as it sounds more natural than oversampling.
> > I prefer NOS, as it sounds more natural than oversampling < <
I would suggest that it all boils down to the particular oversampling (interpolation) filter used. A decade ago there were many DACs introduced with NOS (filterless) designs. I was curious and when Ayre developed the ability to create custom digital filters, the first test we tried was NOS. This replaced a combination of an external 4x "upsampling" filter feeding the 8x "oversampling" filter built into the DAC chip.
There were significant improvements in many areas - the midrange in particular was very pure and natural, but the frequency extremes seemed to be not quite up to the performance level of the broad midrange band (~200Hz to ~5kHz). I then went down a rabbit hole of various interpolation rates (4x, 8x, and 16x), window functions (Kaiser, Taylor, Gaussian, etc.), multiple parameters affecting the shape of the rolloff curve, and finally various dithering algorithms.
In the end I felt that we had improved upon all of the sonic advantages of NOS without losing anything. But I would agree that in general NOS (filterless) is an overall improvement over the typical filters built into DAC chips. YMMV.
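As an illustration of that design space (all parameters here are invented for the example; these are not Ayre's actual coefficients or specifications): a short Kaiser-windowed low-pass with a gentle transition band, converted to minimum phase so that all residual ringing falls after the impulse peak:

import numpy as np
from scipy import signal

fs = 44_100
h_lin = signal.firwin(31, 16_000, window=("kaiser", 4.0), fs=fs)
h_min = signal.minimum_phase(h_lin)    # slow-rolloff, minimum-phase filter

w, H = signal.freqz(h_min, fs=fs)      # inspect the top-octave rolloff
idx = np.argmin(np.abs(w - 20_000))
print(f"response at 20 kHz: {20 * np.log10(np.abs(H[idx])):.1f} dB")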
As always, strictly my personal opinions and not necessarily those of my employer or Pee-Wee Herman.
Ahh... a marketing term. And a problem that seems to be solved!
A great piece, Charles. I'm glad 35,000 people (or so, I think) read the Asylum.
Ayre had the most to do with bringing zero-feedback amplifiers to the market, and now is the most important with minimum-phase filters, because more people can work around Ayre's price points (vs., say, an 808-series player).
However, there is no right or wrong, as you have noted. Look at the filter in Chord's latest DAC - a country mile long.
It is all too obvious what this thread, as a whole, is about but I am not interested in engaging in that issue. My posts were in response to punctate factual matters and, as should be clear, not about the rest.
The only hard fact you gave was the 2L recordings. Helpful. But we need a few more than that! It's not nice of Meridian to leave us fending for ourselves. They could do more.
But this point is extraneous - you take things hard, especially with new audio formats. In defense of audiophiles, didn't they embrace SACD? They did, with no hard words, for years after it came out.
Then titles-available became an issue, then CD sound improved (putting the new format in question), then how many were true DSD sources, etc.
SACD titles offer some great music, no question. And it's not a failure - it's still around. We just don't want another SACD scenario, that's all...
I grant you that MQA (not Meridian) could and should do more but the rest of your post, again, has nothing to do with anything I have said.