In Reply to: RE: Absolute Sound on MQA..jaws drop posted by John Atkinson on January 24, 2016 at 09:17:23:
I'm not sure what sort of information you have been party to under NDAs and such, but from a quick look at Archimago's article, it appears to me that the MQA process uses an AI-based algorithm to "interpolate" HF data above the Nyquist frequency in order to improve the impulse responses that are smeared by band-limiting. Knowing the chain as well as the source material could help the algorithm more accurately select the HF content by windowing and then extrapolating the time series that would necessarily follow the signal. Superposition does seem useful here.

In this age of very powerful computing, estimating a time series for a time-varying signal doesn't seem so ridiculous when you can "teach" an algorithm to use pattern recognition on those signals, trained on a large library of other music. Given a few TB of music, a good machine-learning algorithm can probably choose pretty well most of the time. Encoding that information in some of the extra bits, so that it appears as dither except to a decoder, is really not a much different approach than HDCD's. I may be proven wrong, but that's what it looks like to me...
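To make the HF-reconstruction idea concrete, here is a crude sketch in the spirit of spectral band replication (the technique HE-AAC uses), standing in for whatever pattern-matched extrapolation MQA may actually do. This is purely illustrative, not MQA's algorithm: a band-limited signal is 2x-upsampled and the vacant octave above the old Nyquist is filled by copying the top of the baseband spectrum, scaled down by an assumed gain.

```python
import numpy as np

def replicate_hf(x, gain=0.25):
    """Crudely synthesize HF content above the original Nyquist.

    Illustrative SBR-style sketch only -- not MQA's actual method.
    """
    n = len(x)
    X = np.fft.rfft(x)                     # baseband spectrum, n//2 + 1 bins
    X2 = np.zeros(n + 1, dtype=complex)    # spectrum for the 2x-rate signal
    X2[: len(X)] = X                       # keep the baseband as-is
    src = X[len(X) // 2 :]                 # top half of the baseband
    X2[len(X) : len(X) + len(src)] = gain * src  # copy it above old Nyquist
    return np.fft.irfft(X2, n=2 * n) * 2   # back to time domain at 2x rate

# band-limited toy signal: two sines below the original Nyquist
t = np.arange(1024)
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * np.sin(2 * np.pi * 0.30 * t)
y = replicate_hf(x)

assert len(y) == 2 * len(x)
# the reconstructed signal now carries energy above the original Nyquist
hf_energy = np.sum(np.abs(np.fft.rfft(y)[len(x) // 2 + 1 :]) ** 2)
assert hf_energy > 0
```

A real system would presumably choose what to copy (and how to shape it) from learned statistics of actual music rather than a fixed gain, which is where the machine-learning speculation above comes in.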
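The HDCD-style "hide it in the dither" part can also be sketched in a few lines. The toy below buries a side-channel bitstream in the least-significant bits of 16-bit PCM: without a decoder it looks like roughly 1 LSB of added noise, while a decoder reads the payload straight back out. This shows the general idea only, not the actual MQA or HDCD bitstream.

```python
import numpy as np

def embed_lsb(pcm, bits):
    """Overwrite each sample's least-significant bit with a payload bit."""
    out = pcm.astype(np.int16).copy()
    out &= np.int16(-2)                       # clear the LSB (mask 0xFFFE)
    out |= bits[: len(out)].astype(np.int16)  # write the payload bit
    return out

def extract_lsb(pcm):
    """A 'decoder' that reads the payload back out of the LSBs."""
    return (pcm & 1).astype(np.uint8)

rng = np.random.default_rng(0)
pcm = (np.sin(2 * np.pi * np.arange(1000) / 44.1) * 20000).astype(np.int16)
payload = rng.integers(0, 2, size=1000, dtype=np.uint8)

encoded = embed_lsb(pcm, payload)
assert np.array_equal(extract_lsb(encoded), payload)   # decoder recovers it
# an ordinary DAC sees at most a 1-LSB change, i.e. dither-level noise
assert int(np.max(np.abs(encoded.astype(np.int32) - pcm))) <= 1
```

One LSB at 16 bits sits around -96 dBFS, which is why an undecoded stream just sounds like it carries a little extra dither.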
Follow Ups
- It looks like an AI-based HF extension algorithm to me. - PaulN 12:41:03 01/26/16 (0)