Audio Asylum Thread Printer
In Reply to: RE: Clarification posted by Charles Hansen on May 16, 2017 at 23:15:27
And often below. Genius. Thanks for presenting this. Is there another source for this calculation, for one to corroborate?
Big J
"... only a very few individuals understand as yet that personal salvation is a contradiction in terms."
Follow Ups:
> > Is there another source for this calculation, for one to corroborate? < <
Sure. Please refer to the slides presented to Stereophile in their first article on MQA:
https://www.stereophile.com/content/ive-heard-future-streaming-meridians-mqa
More difficult to find is the presentation Bob Stuart made to the Japanese Audio Society. It contains the slides from the Stereophile article, plus at least one additional one, reproduced here:
This shows the spectrum after the first MQA "fold", from 192kHz to 96kHz. The quad-rate information in region "C" uses lossy compression to store that information in the area with purple squares designated by the arrow labeled "Encapsulation". (Presumably "encapsulation" sounds better to the customer than does "lossy compression".) The grey area directly above is reserved for the double-rate information and is clearly labeled "96k/17.2b".
When the second fold to 48kHz occurs, the quad-rate information is shifted from the dual-rate region to the single-rate (baseband) region, still leaving only 17.2 bits of resolution in the baseband audio. These other slides are shown in the Stereophile link noted above.
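The exact packing MQA uses is proprietary, and the "17.2 bit" figure is an average under noise shaping rather than a literal integer split, so the following is only a toy sketch of the general idea described above: burying folded hi-res data in the low-order bits of a 24-bit sample, at the cost of baseband resolution. All names and bit counts here are my own assumptions for illustration:

```python
# Toy illustration only: MQA's actual packing is proprietary.  This just
# shows the general idea of burying data in the low-order bits of a
# 24-bit sample, leaving fewer bits of true audio resolution.

AUDIO_BITS = 17                 # bits kept for baseband audio (approximate)
BURIED_BITS = 24 - AUDIO_BITS   # low-order bits carrying folded hi-res data

def pack(sample_24bit, buried_data):
    """Replace the low-order bits of a 24-bit sample with buried data."""
    audio_part = sample_24bit & ~((1 << BURIED_BITS) - 1)  # truncate audio
    return audio_part | (buried_data & ((1 << BURIED_BITS) - 1))

def unpack(packed):
    """Recover the truncated audio word and the buried payload."""
    buried = packed & ((1 << BURIED_BITS) - 1)
    audio = packed & ~((1 << BURIED_BITS) - 1)
    return audio, buried

packed = pack(0b101010101010101010101010, 0b1011011)
audio, buried = unpack(packed)
print(bin(buried))  # 0b1011011 - the buried bits survive; the audio lost its low bits
```

The point of the sketch is simply that the buried bits round-trip intact while the baseband audio has irrevocably lost its least-significant bits.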
Please note that these graphs were all made with a string quartet playing a composition by Ravel. Now we turn to an article by James Boyk, noted pianist and professor at Caltech. His article "There's Life Above 20kHz" is linked in the URL below. Scroll past the 3/4 mark on the page to Table 1 and we can see that a violin has a maximum of 0.04% of its power in the band beyond 20kHz. While he doesn't specifically measure either viola or cello, we can be confident that they don't have *more* energy past 20kHz than a violin. In contrast cymbals have 40% (!) of their total acoustical power above 20kHz. For power this ratio of 1000:1 represents a 30dB difference.
The normally accepted conversion between bits and dB is 6dB per bit. Therefore program material with a lot of cymbals (almost all modern pop, rock, and jazz, plus much orchestral music) would require as much as 5 more bits of uncompressed space in the 24-bit FLAC container.
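The arithmetic above can be checked directly from Boyk's figures (fractions of total acoustical power above 20kHz) and the standard ~6dB-per-bit quantization rule:

```python
import math

# Boyk's Table 1 figures: fraction of total acoustical power above 20 kHz
violin_hf = 0.0004   # 0.04 % of power (string-quartet worst case)
cymbal_hf = 0.40     # 40 % of power

# Power ratio between the two cases, expressed in dB
ratio_db = 10 * math.log10(cymbal_hf / violin_hf)   # 10*log10(1000) = 30 dB

# At ~6.02 dB per bit, the extra headroom needed in bits
extra_bits = ratio_db / 6.02

print(round(ratio_db, 1))    # 30.0
print(round(extra_bits, 1))  # 5.0
```

So the 1000:1 power ratio is exactly 30dB, which at ~6dB per bit works out to the 5 extra bits mentioned above.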
NB: In my previous post I subtracted 6 bits from the 17.2 bit resolution available with a string quartet to reach a possible minimum resolution of 11.2 bits. This was a typo and should have been 5 bits and 12.2 bits respectively. And as I am writing this, I also now realize that the energy above 20kHz will be compressed (dual-rate information losslessly, and quad-rate information using lossy techniques). Therefore there may not be a full 5 bit reduction in dynamic range when MQA encoding is applied to music with lots of high-frequency energy (e.g., cymbals). But when already reducing 24-bit data to a maximum resolution of 17.2 bits, there isn't much room for further reduction before the decoded MQA file cannot even achieve CD-standard 16-bit resolution.
The fact that MQA specifically chose a musical example with what is likely the least amount of high-frequency energy *possible* is interesting (to say the least). Unfortunately it is extremely difficult to decipher exactly how much resolution is lost for each specific MQA-encoded track. That would require special MQA-encoded test discs and the like.
It's unfortunate that MQA seems to have deliberately obfuscated the true costs of their encoding/decoding process, and only focused on the benefits. I suppose that is only natural for anybody trying to sell something, but when it comes to physics there is simply no free lunch. In other words, if it sounds too good to be true, it probably is. All engineering is made of a series of compromises. If streaming bandwidth is a critical issue, then in some cases reduction of file size may be worth the cost of reduced resolution. However true high-res 96/24 FLAC files are only about 20% larger than MQA files. Even 192/24 FLAC files are about half the size of streaming video files. Most audiophiles have sufficient internet bandwidth to easily stream true hi-res audio files.
For whatever reason, MQA has chosen to only provide full (192kHz) MQA decoding via hardware. This requires the customer to purchase a new DAC for full decoding. Clearly it is possible to perform the decoding in software, as there are at least two software apps that will fully decode 96kHz files and (I believe) partially decode 192kHz files (to 96kHz).
The ultimate choice is up to each consumer. Is it worth purchasing a new DAC in order to play a few hundred titles that are realistically available from only one streaming service? To answer that question would require a crystal ball. While I have no doubt that it is possible for an MQA file to sound better than a Redbook file (44/16), my experience is that the original hi-res file sounds better still. How long before there is a streaming service that offers streaming of true high-res files?
Or before there is a streaming service that uses OraStream's adaptive technology? (If there is sufficient bandwidth at the playback device, OraStream will stream full 192/24 resolution - about 3500kb/s in FLAC. If there isn't enough bandwidth it automatically and seamlessly scales back to 96/24 - about 1800kb/s in FLAC. If less than that is available it scales back to 48/24 - about 1200kb/s in FLAC. The process continues all the way down to bitrates comparable to those used by YouTube, allowing for uninterrupted music playback even when using wireless services. By comparison MQA requires a constant 1500kb/s in FLAC.)
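The adaptive ladder described above is easy to sketch. The bitrates are the approximate FLAC figures quoted in the post; the selection logic itself is my own guess at how such a scheme works, not OraStream's actual code:

```python
# Sketch of an adaptive-streaming bitrate ladder.  Rates are the rough
# FLAC figures from the post; the tier-picking logic is hypothetical.

LADDER = [                      # (label, approx. kb/s in FLAC)
    ("192/24", 3500),
    ("96/24",  1800),
    ("48/24",  1200),
    ("lossy fallback", 128),    # comparable to YouTube-class bitrates
]

def pick_tier(available_kbps):
    """Return the highest-resolution tier that fits the available bandwidth."""
    for label, rate in LADDER:
        if available_kbps >= rate:
            return label
    return LADDER[-1][0]        # always play something; never stall

print(pick_tier(5000))  # 192/24
print(pick_tier(2000))  # 96/24
print(pick_tier(300))   # lossy fallback
```

The design point is that playback degrades gracefully instead of stopping, whereas a fixed-rate stream (like MQA's constant ~1500kb/s) simply fails when bandwidth drops below it.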
Hope this helps. As always the opinions in my posts are my personal ones and do not reflect those of my employers, co-workers, friends, family, or enemies.
Charles--not a "gotcha" question, I assure you. It's quite sincere, and I suspect you'll approve of the spirit of the question (actually, questions):
Have you listened? What do you think?
I hope you're doing well.
Jim
> > Have you listened? < <
Yes, please refer to some listening comments in this post:
https://www.audioasylum.com/audio/digital/messages/18/184097.html
> > What do you think? < <
My most recent technical analysis (with minor corrections from previous posts being noted in the paragraph beginning with "NB") is in this post:
https://www.audioasylum.com/audio/digital/messages/18/184101.html
In that post you say that the loss of resolution is "(to my ears, at least) audible." Also--I THINK this is your own observation--"When listening to the full high-res file on a good system (or headphones), it is quite easy to hear the whispered overdub as a distinctly separate element in the mix. I cannot say the same for the MQA version. To my ears the "whisper overdub" was markedly more difficult to discern. Please try it for yourself and let me know what you think"--again, loss of resolution. Very specific and helpful. I'm traveling now--away from most of my files--but will do this test when I get home.
Others have claimed to hear "artifacts" in MQA; I'm thinking of recording engineer Brian Lucey--but he's not specific. What else do you hear, other than the loss of resolution? Anything good?
Again, I'm not being confrontational. I'm trying to learn.
Thanks,
Jim
> > Others have claimed to hear "artifacts" in MQA; I'm thinking of recording engineer Brian Lucey--but he's not specific. What else do you hear, other than the loss of resolution? Anything good? < <
The only fair way to do a true apples-to-apples comparison test is with a unit that performs MQA decoding. Then one can send it either MQA-processed files for decoding or the straight hi-res files from which the MQA files were derived. That way there are only two variables in the experiment - the first is the digital filter used during the MQA encoding and the second is the digital reconstruction filter used in the DAC. I have performed such listening tests with both a Meridian Explorer2 and a Mytek Brooklyn.
It is essentially impossible to know the sonic impact of the digital filter used while encoding MQA. However it is much easier to understand the sonic impact of the digital filter used in the playback DAC. When playing an MQA file we know from the MQA patent that the digital filter is a very gentle affair with a very slow rolloff. A 192kHz file is down -3dB at ~38kHz and -10dB at ~50kHz (roughly an octave below the Nyquist frequency limit of 96kHz). When playing non-MQA files in those two DACs, the digital filters built into the DAC chips are used. The Mytek Brooklyn uses the exact same ESS DAC chip as both the Pono Player and the Ayre Codex. While both of those devices bypass the internal digital filter and instead use a custom digital filter developed by Ayre, I am also familiar with the sound of the stock digital filter in that chip. This allows me to get at least a rough idea of the differences produced by the playback digital filter.
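Just how gentle that rolloff is can be estimated from the two patent figures quoted above (this is a back-of-the-envelope slope calculation, not anything from the patent itself):

```python
import math

# From -3 dB at ~38 kHz to -10 dB at ~50 kHz: how many dB per octave?
octaves = math.log2(50_000 / 38_000)   # ~0.40 octave between the two points
slope = (10 - 3) / octaves             # dB of attenuation per octave

print(round(slope, 1))  # 17.7
```

Roughly 18dB/octave is on the order of a third-order analog filter - genuinely slow compared with the brickwall filters (hundreds of dB/octave) built into typical DAC chips.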
In addition to the noticeable loss of audible resolution in specific instances (presumably because MQA reduces the bit depth from 24 to a maximum of 17.2), in my experience I could generally hear other differences that were neither "good" nor "bad". Many tracks seemed to have a bass emphasis that was not present in the original high-res file. Whether that is good or bad is a matter of opinion, but it was definitely a (noticeable) change from the original file. I have no definitive understanding of why this would be so, as MQA claims that no EQ is performed and I have no reason to doubt them.
The bottom line is that the digital reconstruction filter used in a DAC will impact the sonic performance of that unit (along with many scores of other variables). It does not surprise me that many listeners prefer the sound of MQA's slow-rolloff digital filter over the digital filter built into the DAC chip of those units. Many other manufacturers have arrived at the same conclusion, starting with Wadia Digital in the late 1980s and following through with Pioneer's "Legato Link" from the early '90s, Ayre beginning with their first digital product, the D-1 DVD/CD player introduced in 1998, and now many other manufacturers (most often using slow-rolloff options available in DAC chips from ESS, Burr-Brown, AKM, and Wolfson).
The improved sound quality of Ayre's custom digital filter is available with any source. *Every* slow-rolloff filter will provide some degree of "de-blurring" (to use MQA's terminology) as the amplitude of the "ringing" introduced by the A/D converter's anti-aliasing filter is reduced. If the slow rolloff is combined with the so-called "apodizing" technique (whereby the ringing introduced by the anti-aliasing filter in the A/D converter is removed by using a lower cutoff frequency in the DAC's reconstruction filter), this so-called "de-blurring" can be taken to any level desired.
Whether it is worth purchasing new hardware to decode a proprietary format that currently is essentially only available from one source is a question that only the purchaser can answer.
As always, this post represents solely my own opinion, and not necessarily that of my employer or my bartender.
Charles, very interesting. A poster on another forum had AMAZINGLY similar listening impressions to yours. (SUBJECTIVE listening comments in your post)
"I heard a very hi-end demonstration of MQA about a month or so ago in NYC. Peter McGrath did the demo with his own recordings (24/96) that had been MQA processed. We had a chance to hear the original and then the MQA version. The setup was as follows: Wilson Audio Alexx speakers, top-of-the-line VTL preamp and amp, and Meridian DAC (of course). The music wasn't what I usually listen to, however, the difference was very clear to everyone in the room (including Michael Fremer, who was seated next to me). I expected to hear equivalence (i.e. that MQA had done no harm), however, there was clearly a difference. The MQA sounded somewhat brighter and had more presence! It reminded me at the time of the "loudness button" on old amps that I had 30 years ago (I had not seen the May 2017 Stereophile at the time of the demonstration).
I managed to corner one of the MQA guys who accompanied Peter and after some prodding by me he explained that they do DSP of the signal as part of the MQA encoding to "make it sound better". While he did not go into any great detail, he indicated that things are done to try to reduce pre and post ringing that are present in almost all digital audio signals. I can only speculate that this involves some kind of digital filtering of the original signal.
Based on this one demonstration I certainly would not advocate for spending time and money on MQA (and risking falling into the clutches of Meridian). Since my preferred digital is SACD I do not see any need for MQA, and I certainly do not want anyone using DSP on my digital data streams to make them "sound better".
If I had been able to vote I would have voted for the following:
I won't use it as it doesn't offer me anything I don't already have
I think we have sufficient formats to manage high quality audio already
I think Meridian are focusing more on creating a revenue stream."
Edits: 05/24/17
Thanks Charles--interesting. I'll add only that, as you know since you've clearly read the patents--some/much of what MQA does is done on the transmission side; coding and decoding work together. My impression from reading the patent--I still lack the expertise to read it well--is that this offers advantages over controlling only the receiving side; is there perhaps an analogy to vinyl/RIAA here?
Thanks again.
Posted by Charles Hansen (M) on May 28, 2017 at 12:17:48
> > is there perhaps an analogy to vinyl/RIAA here? < <
I don't see how there could be. Pre-emphasis and subsequent de-emphasis is virtually mandatory for phonograph playback because of the underlying physics of the transducers. The Redbook CD specification also allows for pre-emphasis/de-emphasis, and it provides a slight advantage (about 1.5 bits extra resolution in the top octave) due to the typical spectral content of music. It seems that it's more trouble than it's worth, as only a handful of very early CDs employed pre-emphasis.
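The ~1.5 bit figure can be checked against the Redbook pre-emphasis curve, which is a first-order shelf defined by time constants of 50 and 15 microseconds:

```python
import math

# Redbook CD pre-emphasis: first-order shelf, time constants
# T1 = 50 us and T2 = 15 us.  Gain at frequency f is
#   |1 + j*2*pi*f*T1| / |1 + j*2*pi*f*T2|
T1, T2 = 50e-6, 15e-6

def preemph_gain_db(f):
    w = 2 * math.pi * f
    num = math.hypot(1.0, w * T1)
    den = math.hypot(1.0, w * T2)
    return 20 * math.log10(num / den)

# Boost approaches 20*log10(T1/T2) ~ 10.5 dB at the very top of the band.
print(round(preemph_gain_db(20_000), 1))         # 9.5  (dB of boost at 20 kHz)
print(round(preemph_gain_db(20_000) / 6.02, 1))  # 1.6  (equivalent bits)
```

About 9.5dB of boost at 20kHz, or roughly 1.5-1.6 bits at ~6dB per bit, which matches the "about 1.5 bits extra resolution in the top octave" figure above.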
MQA is claiming to do something completely different - specifically, "correct" for "timing errors" created during the original A/D conversion process (so-called "de-blurring"). These "timing errors" are actually artifacts of the steep anti-aliasing filters with sharp "knees" at the corner frequency. The only thing that can be done to "correct" these "timing errors" ("de-blur") is to filter out the "ringing" created by the filter and *hope* that the new filter doesn't create worse artifacts. There are *many, many* digital filters that reduce the artifacts created on the A/D side, the first being from Wadia in the late 1980s. MQA is far from breaking any new ground in this area.
A fundamentally better approach would be to use an A/D converter that does *not* create artifacts during conversion. All DSD A/D converters are free from this problem, as they do not require any anti-aliasing filters at all. However DSD creates a new problem in that it is impossible to process the signal (change volume levels, mix, EQ, and so forth) without first converting to PCM. Conversion to PCM is done with anti-aliasing filters, so the problem springs back to life (think Whack-a-Mole here). Plus each conversion back to DSD adds additional noise.
I believe the best solution is to use true high-res PCM (trivially easy to post-process), but use digital filters on both ends (anti-aliasing for A/D and reconstruction for D/A) that don't introduce sonic degradation. The Ayre QA-9 ADC does exactly that (for dual- and quad-rates only - the single rate minimizes the artifacts but cannot eliminate them). Many companies have created DACs with special filters designed to minimize playback artifacts and also reduce the artifacts from the A/D converter (beginning with Wadia in the late 1980s). There are many companies that have followed the path that Wadia created, or in a few cases pushed that envelope even further.
As always, my postings reflect only my personal opinions and not necessarily those of my employer or son's pet snake.
Charles, thanks very much for this. MQA claims an end-to-end (which I take to mean encompassing the ADC and the DAC) "impulse response" with a main lobe of duration ~ 6 samples, with "ringing" limited to just one negative-going overshoot (ok, undershoot) "lobe" with no more than 10% of the total area (is that with linear or log scales? I don't remember). As far as I know they've released no evidence that they're actually achieving this (I asked for it once more than a year ago; got the typical audiophile "trust your ears" response, which I think I've heard elsewhere ;-) ), so feel free to comment on this aspect of things. But the main question:
When you say the QA/B-9 combo does NO damage at dual or quad (I assume that's 88.2/96 or 176.4/192?), what do you mean exactly? To adopt the MQA vocabulary, I would take that to mean that an impulse equivalent to one sample wide (say, 5 microseconds at 192) would, after ADC/DAC, still have a width of 5 microseconds--is that what you're saying the QA/B chain can achieve? (I realize I so far haven't mentioned the question of the POSITION of the impulse in time--phase if you will--which could have uncertainty even if the width is preserved.)
I'm simply trying to understand what you mean when you say the QA/B chain does no damage, and how to relate that to MQA's claims.
I hope this line of inquiry makes sense.
Thanks much.
Jim
When an impulse passes through any band-limited system, analog or digital (which therefore includes every non-imaginary system), the impulse will be necessarily spread over time. MQA's marketing material shows how air acts as a low-pass filter - the farther the signal travels through the air, the more it is stretched in time. This is equivalent to saying "the farther the signal travels through air, the more the high frequencies are attenuated", yes?
Conversely a wider bandwidth system can pass an impulse with less spreading. With digital systems, the upper bandwidth is set by the sampling rate. With analog systems the upper limit is the concatenation of the responses of all of the stages in the chain, starting with the recording microphone and ending with the playback loudspeaker. In general analog systems have a wider bandwidth than does single-rate digital - otherwise there would be no need for an anti-aliasing filter in an A/D converter.
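The bandwidth/spreading tradeoff is easy to demonstrate numerically. Here a one-sample impulse is passed through a simple moving-average low-pass (chosen only because it is the simplest band-limited system to write down, not because any real converter uses one):

```python
# Minimal numerical illustration: pass a one-sample impulse through a
# band-limited system (a moving-average low-pass, for clarity only)
# and watch it spread in time.

def moving_average(signal, n):
    """n-tap moving average: a crude low-pass filter."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - n + 1): i + 1]
        out.append(sum(window) / n)
    return out

impulse = [0.0] * 16
impulse[4] = 1.0

narrow_band = moving_average(impulse, 8)   # heavy band-limiting
wide_band   = moving_average(impulse, 2)   # mild band-limiting

# Count the samples carrying energy: the narrower the bandwidth,
# the longer the impulse is smeared in time.
spread = lambda s: sum(1 for x in s if x > 1e-6)
print(spread(wide_band), spread(narrow_band))  # 2 8
```

The wide-band system spreads the impulse over 2 samples, the narrow-band one over 8 - the same relationship as air progressively stretching a signal with distance.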
The Ayre QA-9 offers several different anti-aliasing filters. The one used in the "Listen" mode at 192kHz has *perfect* response in the time domain - zero overshoot, zero undershoot, and zero ringing. However it is down about -0.5dB at 20 kHz. The primary filter used for MQA playback performs very similarly to that used in the Ayre QA-9. However they set a target of no more than -0.1dB droop at 20kHz. This requires a second digital filter which boosts the treble to compensate for the droop of their primary time-perfect filter.
This second filter introduces some "time blur", which is seen as the one negative-going undershoot you noted in your post. The concatenation of the two filters used by MQA yields a time response more like that of Ayre's "Listen" filter used at the 44kHz sample rate. This is inevitable, as there is no such thing as a "free lunch". It's simply a variation on the old story, "Price, performance, features - pick two." In the case of digital it becomes "Time response, frequency response, file size - pick two."
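The "pick two" tradeoff in that last sentence can be made concrete. Ayre's actual filter coefficients are proprietary, but the simplest FIR with zero overshoot, zero undershoot, and zero ringing - a two-tap average [0.5, 0.5] - shows the same pattern (its magnitude response is cos(pi*f/fs)):

```python
import math

# The simplest time-perfect FIR (no overshoot, undershoot, or ringing)
# is the two-tap average [0.5, 0.5].  Its magnitude response is
#   |H(f)| = cos(pi * f / fs)
# This is only an illustrative stand-in, not Ayre's or MQA's filter.

def two_tap_droop_db(f, fs):
    return 20 * math.log10(math.cos(math.pi * f / fs))

# At a 192 kHz sample rate the treble droop at 20 kHz is tiny...
print(round(two_tap_droop_db(20_000, 192_000), 2))  # -0.47

# ...but at 44.1 kHz the same time-perfect filter droops severely,
# which is why single-rate forces a time-vs-frequency choice.
print(round(two_tap_droop_db(20_000, 44_100), 2))   # -16.74
```

Note how close the -0.47dB figure is to the ~-0.5dB at 20kHz quoted above for the QA-9's "Listen" filter at 192kHz: at quad rate, perfect time response costs almost nothing in frequency response; at single rate it costs a great deal.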
If one can tolerate the file size of quad-rate sampling, the errors in both time and frequency response are so small as to be negligible. With single-rate sampling, the file size is smaller but one has to choose between audible problems with either the time response or the frequency response. As Wadia showed us in the late 1980s, humans are more sensitive to time-domain errors than frequency-domain errors. Ayre has followed this path beginning with our first digital product nearly 20 years ago, and now MQA apparently concurs with this position. Hope this helps.
As always posts here are my own opinions and not necessarily those of my employer or slaves.
Many answers in link below-
John Atkinson, Stereophile:
"I tried a variety of sample rates with these LP rips: 44.1kHz was very good, but didn't capture the essence of the original LPs' sounds; 96kHz was better; but there was no doubt that with a 192kHz sample rate I could not distinguish between the LP and the digital rip. And believe me, I tried."
Repeat:
.."there was no doubt that with a 192kHz sample rate I could not distinguish between the LP and the digital rip. And believe me, I tried."
So please, PLEASE tell me why we need MQA again?
Charles, thanks again for addressing what so few seem to understand, and it is no wonder, with the confusing marketing MQA employs. Yes, absolutely, they claim to correct errors at the time of digital capture by the original ADC.
But as you mentioned in previous posts, and as I know very well, a project may have used MULTIPLE ADCs during production. Then different ones during mixing, and the same applies for mastering.
One thing I would like to point out with DSD, but specifically when remastering classic analog recordings:
There is a very good way to do it with NO additional DSD processing. And that is to do all your EQ and compression settings, and whatever else, in ANALOG, then capture that to DSD. Done. Essentially you are taking a "DSD photo" of what is coming off the mastering chain. That is how Mobile Fidelity and other audiophile labels do it for their SACDs, and they sound spectacular to me.
The Pyramix workstation can edit and master in DSD, however some manipulations require conversion to so-called "DXD" first.
Here is a thought, it may be crazy, but since your QA-9 is designed as you say, it would be VERY interesting to do some digital archiving with it and then compare it to an "MQA"-processed version. The results would be very revealing, in a number of ways.
Edits: 05/28/17
> > it would be VERY interesting to do some digital archiving with it and then compare it to an "MQA"-processed version. The results would be very revealing, in a number of ways. < <
It seems to me that there is a big long chain that starts with the microphone in the recording venue and ends with the speakers in your listening venue. Improving anything at all along the chain improves the sound. Using a better sounding A/D converter would improve the sound for all customers, regardless of their playback system.
Unfortunately there don't seem to be as many "audiophiles" on the recording side as on the playback side. It isn't that easy to compare most pro equipment. Microphones are typically selected for known results ("desired colorations") in specific circumstances. Most gear isn't even compared - can you imagine trying to compare the sound quality of two different 128-channel mixing consoles? Occasionally a studio will re-wire everything with some "audiophile" cabling, but I'm unsure as to how they select which cable to use. It seems that endorsements from highly visible engineers count for as much as anything in the "pro" world.
Every time I have seen a "pro" audio guy evaluate gear, they will set it up in their system so that they can do rapid A/B switching. I have never seen anyone listen for more than 5 or 10 seconds before switching. This is very unfortunate, as that type of listening test is extremely sensitive to only one subjective parameter - frequency response. It is an exquisitely sensitive way to tell if two DUTs have different frequency responses. It is far, far less sensitive to other "audiophile" parameters such as imaging, soundstage depth, resolution, coherency, and so forth. And it is essentially useless for letting one know which DUT creates a greater sense of emotional connection with the recording artist.
I don't think there's much mystery about the sound quality of MQA. They always start with a high-resolution digital file. Then three processes are applied. One is to reduce the bit depth to ~17 bits with noise-shaped dither. This can never improve the perceived resolution. (NB: Various noise-shaping curves can introduce various sonic signatures. I am told that Sony/Philips listened to many choices before selecting the 7th-order filter used for DSD, and many have noted a similarity in the high-frequency characteristics of all DSD recordings, where the highs tend to sound soft, delicate, and airy, regardless of the program material - I remember having phono cartridges with similar colorations.)
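The first process - reducing bit depth with dither - can be sketched in a few lines. MQA's actual noise-shaping curve is proprietary, so plain (unshaped) TPDF dither is shown here purely to illustrate that the requantization is one-way:

```python
import random

# Sketch of bit-depth reduction with TPDF dither.  MQA's actual noise
# shaping is proprietary; this uses plain triangular dither to show the
# principle: once requantized, the discarded resolution is gone for good.

def requantize(samples_24bit, target_bits):
    """Reduce bit depth with triangular (TPDF) dither, no noise shaping."""
    shift = 24 - target_bits
    step = 1 << shift
    out = []
    for s in samples_24bit:
        # TPDF dither: sum of two uniform randoms, triangular over +/- 1 LSB
        dither = random.uniform(-step, 0) + random.uniform(0, step)
        out.append(round((s + dither) / step) * step)
    return out

random.seed(0)
src = [random.randint(-(1 << 23), (1 << 23) - 1) for _ in range(1000)]
out = requantize(src, 17)

# Every output value now sits on a coarser 17-bit grid; the lost
# information can only be masked as (possibly shaped) noise,
# never recovered.
assert all(q % (1 << 7) == 0 for q in out)
print("all samples on 17-bit grid:", len(out))
```

Dither linearizes the requantization and noise shaping pushes the added noise where the ear is least sensitive, but neither can restore the resolution that was discarded - which is the point being made above.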
The second process compresses quad-rate audio with lossy techniques, discarding further information. The third process is to add a (digital) filter to filter out some of the artifacts of the original A/D converter. This is also removing information contained in the original file. (It is easy enough to do this without the need for a proprietary system, as first demonstrated by Wadia in the late 1980s.) What sort of differences would you expect to hear from the type of processing that MQA performs?
As always, my posts are strictly my personal opinions and not necessarily those of my employer or voice coach.
"What sort of differences would you expect to hear from the type of processing that MQA performs?"
As you note, all of this can be done without a proprietary "format" (god knows I use the term oh so loosely), fees, and the need for new software and hardware.
The extra information is appreciated - thanks for writing it.
We don't half make our hobby complicated by entertaining this type of clever-but-pointless activity. Would much prefer producers to, say, scale back on heavy dynamic compression, as evidenced by the overuse of 'loudness' techniques.
Big J
"... only a very few individuals understand as yet that personal salvation is a contradiction in terms."
What we don't need is a step backwards.
MQA clearly:
-is lossy
-is proprietary
-is an unnecessary reduction in file size
-requires dedicated hardware and software
thanks but no thanks.