Audio Asylum Thread Printer
In Reply to: RE: Why not? posted by Jim Austin on November 02, 2016 at 08:54:13
Hi Jim, Glad we're in general agreement. But what you bring up raises precisely the kinds of questions I asked before. Before I get to that, though: no, I didn't see that paper. I'll look for it.
As for the time-smear fix: from what I understand, they're basically correcting the impulse response of the ADC and DAC. But many engineers have asked: "What if multiple ADCs were used to record a single track? And what about cascading DACs?" How can one possibly correct for all of that?
And what does this time-smear fix "sound" like? This is where comparisons are in order. Real comparisons. Plus, why not make a very simple recording using a worst-case-scenario ADC that smears time, as MQA puts it? Smear the hell out of it, then "unsmear" it. Exacerbate the problem all you want by smearing certain instruments, even test tones. With loudspeakers, there are ways to exacerbate issues with high-order slopes, for example (or even low-order ones). Let's do the same here and prove that it works.
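The "smear it, then unsmear it" demo proposed above can be sketched in a few lines. This is a toy version only: the exponential-decay "smearing" filter and all the numbers are assumptions for illustration, not MQA's actual model of an ADC.

```python
import numpy as np

# Made-up smearing filter h: an exponential decay standing in for a
# worst-case ADC impulse response (an assumption for illustration).
n = 1024
h = np.exp(-np.arange(32) / 8.0)
h /= h.sum()

# Test signal: a single click (transient) plus a tone, silent at the
# end so linear and circular convolution agree.
x = np.zeros(n)
x[100] = 1.0
x[:n - 64] += 0.2 * np.sin(2 * np.pi * 0.05 * np.arange(n - 64))

# "Smear" by convolving with h.
y = np.convolve(x, h)[:n]

# "Unsmear" by dividing in the frequency domain. This is exact only
# because we know h perfectly; the hard part in practice is that you
# don't -- especially across multiple unknown ADCs in one production.
H = np.fft.rfft(h, n)
x_hat = np.fft.irfft(np.fft.rfft(y, n) / H, n)

print("max reconstruction error:", np.max(np.abs(x - x_hat)))
```

With a known filter the inversion is essentially perfect, which is exactly why the interesting question is what happens when the filter must be guessed.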
Now to your problem -- my past life installing networks would say: "Ok, that's a problem for you and we might have to reduce the bandwidth - for YOU." (Providing other solutions don't work.) But what about me and countless others who don't have the issue -- why do we want to use a lossy compression scheme when we really don't have to?
Doug
Follow Ups:
> > But many engineers have asked: "What if multiple ADCs were used to record a single track? And what about cascading DACs?" How can one possibly correct for all of that? < <
A reasonable question, to which I can only provide a schematic (not exact) answer. I suspect a schematic answer is the best you're going to get (not that mine is the best possible). The MQA folks refer to their work on such projects as "white-gloving," the meaning of which is, I think, obvious. Bob and I talked a lot about this, especially in the context of early digital recordings. He said they've been studying a large cache of albums--about 10,000 high-res and 10,000+ at CD resolution. In this way, they've learned a lot about what typical albums look like from a time-smear perspective and what problems arise. (Bob didn't say, to me, that their algorithm uses "artificial intelligence," although he did use the phrase. He said--I didn't check the transcript, but this is the gist--that it was sort of like artificial intelligence.)
I suspect, though, that the correct answer is: you can't correct for all of it, but you can correct for some of it. Which is to say, you can create a version of the recording that sounds better, not perfect.
As for your proposed "demo" track: that certainly would be interesting. Way back in February, I requested (not sure who I was communicating with then--possibly Stuart) graphical evidence: Show me what a transient, in real music, looks like before and after. I don't remember what the response was, but I never got the plot. However, something very close to that was published on the Stereophile site in the Q&A with Bob Stuart; look at graphs 8-13. Those plots are made using a DAC emulator because there's a basic measurement problem: To get a real signal out, you'd need to use an ADC and then reverse its characteristics.
Anyway, maybe we'll see something like that someday, but satisfying the skepticism of a few audio writers probably is not at the top of their to-do list.
> > Now to your problem -- my past life installing networks would say: "Ok, that's a problem for you and we might have to reduce the bandwidth - for YOU." (Providing other solutions don't work.) But what about me and countless others who don't have the issue -- why do we want to use a lossy compression scheme when we really don't have to? < <
Ah, I see your point. You're worried about the fact that it's not lossless, strictly speaking. I think this is a reflection of a shift of emphasis from the technicalities of the format to what's actually happening in the music. To worry about a bit of loss in the compression algorithm is to assume that every bit is equally important. As noted in several MQA documents, above a certain frequency there's no real information anyway--no information related to the music. Compressing that in a lossy way doesn't do a lot of harm. (I recall shaking my head the first time I saw the phrase "partial zero-emission vehicle" on the side of a Subaru. Compression in MQA is kind of like that: Partly lossless, partly lossy.)
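The "partly lossless, partly lossy" idea above can be illustrated with a toy band-split: keep everything below a cutoff bit-exact and coarsely quantize only the bins above it. The 24 kHz cutoff, the test signal, and the quantizer step are all made up for illustration; this is not MQA's actual encoder.

```python
import numpy as np

fs = 96_000          # sample rate (assumed)
n = 4096
t = np.arange(n) / fs

# Toy "music": audible-band tones plus low-level ultrasonic noise --
# per the argument above, content up there is noise, not music.
rng = np.random.default_rng(1)
signal = (np.sin(2 * np.pi * 1_000 * t)
          + 0.3 * np.sin(2 * np.pi * 7_000 * t)
          + 1e-3 * rng.standard_normal(n))

X = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(n, 1 / fs)

# "Lossy" step: coarsely quantize only the spectral bins above 24 kHz;
# everything below stays bit-exact (the "lossless" part).
hi = freqs > 24_000
Xq = X.copy()
step = 0.05
Xq[hi] = step * (np.round(Xq[hi].real / step)
                 + 1j * np.round(Xq[hi].imag / step))

audible_error = np.max(np.abs(X[~hi] - Xq[~hi]))
ultrasonic_error = np.max(np.abs(X[hi] - Xq[hi]))
print(audible_error, ultrasonic_error)
```

The quantization error lands entirely above the cutoff, which is the trade the argument rests on; whether that band really carries no musical information is the part that remains in dispute.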
Anyway, you can't always get what you want. I'm going to not finish the Stones reference. That would be too cute.
jca
Yes, Doug, we just finished mastering an album by an artist who recorded it at five different studios, with probably half a dozen ADCs, with some songs even bounced to tape for a "sound". What then?
MQA claims to have "artificial intelligence", which Chris Connaker gleefully called "pretty cool" without any basis.
Then how about a very recent, high-profile vintage multitrack remix, bounced to multitrack digital, then mastered at another location?
For purist two-channel recordings, none of these questions would come up, and that may be where MQA belongs, like DXD and DSD128/256.
And I think this is far more common than audiophiles would think (or like). But it's reality. I am not about to say it's "unfixable"; rather, I'd put this back to those who claim you can: "EXACTLY how can you fix that?" And I mean EXACTLY.
Doug