Audio Asylum Thread Printer
In Reply to: RE: On the blurriness of MQA posted by Dave_K on November 02, 2016 at 12:38:32
It seems many have ideological objections to a proprietary encoding format. Fair enough, but please don't second-guess the technical aspects to justify the objection. That way madness lies. Bob Stuart's paper shows that the end-to-end dispersion of a 192 kHz linear-phase system is much more than MQA's.
Interesting that MP3 is brought up in this thread. I think of them as similar in broad terms: MP3 analyses the signal and throws away content in as harmless a way as possible to fit a given data rate, while MQA analyses the signal and throws away only unoccupied dynamic range to reduce the bit rate. Of course, I might be wrong, but I like the analogy.
Regards
13DoW
Follow Ups:
I edited my post and deleted the last paragraph. It was kind of beside the point.
The main point I really wanted to convey was that when employed as an end-to-end technology such that the system impulse response described in the Stuart paper is achieved, the result is a blurring of the signal. Conventional anti-aliasing and reconstruction filters do not blur, and when employed at higher sample rates where the cutoff is above the musical spectrum they don't add any ringing either. Using a high enough sample rate eliminates the problem of filter artifacts that plagues Redbook. No Meridian special sauce is required. I find it ironic that Stuart & co. are marketing MQA as an end-to-end technology that minimizes temporal blur, when the reality is that they have chosen a target impulse response that adds blur.
Hi Dave,
From everything I have read and understand about MQA, its raison d'être is to have less time dispersion than present-day hi-res PCM. I even double-checked the paper, which contains a graph comparing the time dispersion of the two systems, before I posted.
Regards
13DoW
I read through Stuart & Craven's AES paper a couple times and don't see anything like that.
The closest I can find is a claim in the text that "the end-to-end response, shown in Figure 15, introduces considerably less blur than transmission at 96 kHz using conventional filters, as shown in Figure 14 below." The term "blur" is not defined. Figures 14 and 15 are showing impulse responses. My guess is that the authors are trying to imply that a longer impulse response = more temporal blur. If so, it's disingenuous. A sinc filter, for example, has an infinite impulse response yet introduces zero time dispersion and provides the sharpest transient response possible within a given bandwidth.
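To illustrate the sinc point with a quick numpy sketch (the sample rate, cutoff, and tap count here are mine, purely illustrative): a symmetric windowed-sinc filter has exactly linear phase, so its group delay is the same constant at every frequency in the passband. Every frequency component is delayed by the same amount, which is the definition of zero time dispersion, no matter how long the impulse response is.

```python
import numpy as np

fs = 96000.0       # assumed sample rate (illustrative)
cutoff = 20000.0   # passband edge, Hz
N = 511            # odd tap count -> exactly symmetric (Type I linear phase)

# Windowed-sinc low-pass: a truncated ideal sinc shaped by a Hann window.
n = np.arange(N) - (N - 1) / 2
h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * n)
h *= np.hanning(N)

# Symmetric impulse response => exactly linear phase.
assert np.allclose(h, h[::-1])

# Group delay = -d(phase)/d(omega), evaluated numerically from the FFT.
H = np.fft.rfft(h, 8192)
phase = np.unwrap(np.angle(H))
freqs = np.fft.rfftfreq(8192, 1 / fs)
omega = 2 * np.pi * freqs / fs          # radians per sample
gd = -np.diff(phase) / np.diff(omega)   # delay in samples at each frequency

passband = freqs[:-1] < 15000           # look well inside the passband
print(np.ptp(gd[passband]))             # ~0: same delay at every frequency
```

Despite the long (and in the ideal case infinite) impulse response, every passband frequency comes out delayed by the same (N-1)/2 samples, i.e. zero dispersion.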
The closer you get to a perfect brick wall response, the less time dispersion. The more gentle the filter slope, the more it blunts and smooths over transients, and that's what I would call blur. The fact that their proposed response is a Gaussian, which is a filter type normally used for image blurring, makes them sound silly when they talk about de-blurring. It's like they have the whole idea backwards.
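The Gaussian-means-blur point is easy to demonstrate. Here is a minimal sketch (kernel width and lengths are my own illustrative choices): convolving a single-sample click with a Gaussian kernel, exactly as image software does, knocks the peak down and spreads the energy across neighboring samples.

```python
import numpy as np

# A Gaussian kernel applied to a single-sample click: the impulse energy
# is spread across neighboring samples -- what image people call blur.
sigma = 2.0                       # kernel width in samples (illustrative)
n = np.arange(-10, 11)
g = np.exp(-n**2 / (2 * sigma**2))
g /= g.sum()                      # unity DC gain

click = np.zeros(64)
click[32] = 1.0
blurred = np.convolve(click, g, mode="same")

print(blurred.max())              # peak well below 1: the transient is blunted
print((blurred > 0.01).sum())     # the click's energy now spans many samples
```

The total energy is preserved (unity DC gain), but the sharp transient is gone, which is exactly the blunting of transients described above.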
The traditional complaint about brick wall filters in digital audio is based on experience with the 44.1k sampling rate. At that sample rate, the signal content coming off the mic feed usually extends above fs/2, and therefore there is an interaction between the signal and the anti-aliasing and reconstruction filters. Also, the filters are operating around the top of the human hearing range. If you just double the sample rate to 88.2k or better, the filters are operating far away from the human hearing range and, in most cases, above the input signal bandwidth too. A brick wall filter at 42-44 kHz will pass the signal with no time dispersion and no ringing. Stuart & Craven's filter will roll it off and smear the transients a little bit.
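The high-sample-rate argument can be checked numerically too. A sketch under my own assumed parameters (88.2k rate, a steep windowed-sinc cutting off near 42 kHz, a test signal band-limited to 15 kHz the way a typical mic feed would be): the filter's output is just a delayed copy of the input, with no visible ringing or dispersion, because the signal never touches the transition band.

```python
import numpy as np

fs = 88200.0      # assumed sample rate
cutoff = 42000.0  # steep low-pass just below Nyquist (illustrative)
N = 1001          # long FIR -> near brick wall response

n = np.arange(N) - (N - 1) / 2
h = 2 * cutoff / fs * np.sinc(2 * cutoff / fs * n)
h *= np.blackman(N)

# Band-limited test signal: content only up to 15 kHz, far below the cutoff.
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 15000 * t)

y = np.convolve(x, h, mode="full")
delay = (N - 1) // 2                  # pure delay of the linear-phase filter
y_aligned = y[delay:delay + len(x)]

# Away from the edges, output == input shifted by a constant delay:
# no dispersion, no ringing within the signal band.
core = slice(N, len(x) - N)
err = np.max(np.abs(y_aligned[core] - x[core]))
print(err)                            # tiny residual from passband ripple
```

The brick wall filter is only "audible" when the signal or the hearing range reaches into its transition band; at these rates neither does.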