Home DVD-Audiobahn


Charles' "problem" ...

... is that he never had a point to begin with. All his rants against HDCD are based on a distrust of Pacific Microsonics, which no longer owns the technology. He keeps thinking HDCD is a "product" implemented in the PMD-100. But the truth is, hardly anyone uses the PMD-100 anymore - it is basically obsolete. And it was never a very good filter; I know several designers who avoided HDCD simply because they did not like the sound of the PMD-100. But fortunately, we don't have to use it anymore.

*** Charles' point was that if WMP's HDCD decoder includes a digital filter ***

Actually, if you had read my posts a little more carefully, you would have seen that I doubt the WMP HDCD implementation has a digital filter. It is possible, but unlikely, since WMP doesn't upsample, unlike most HDCD implementations.

But the real point is: if the output of an HDCD decoder (even a limited one such as WMP's) is different from the input, even for the extreme example that Charles mentioned, then HDCD decoding is not a "null" operation, and there is a theoretical "benefit" to HDCD decoding for all HDCD discs, regardless of which optional features have or haven't been engaged.

Speaking as someone who has ripped all my HDCDs and "decoded" them using WMP, I am pretty certain this is the case, although I of course have not verified the specific disc that Charles mentioned. But I do have many similar discs - for example, Mike Oldfield's Amarok, which is a 16-bit digital recording, so arguably the benefits of HDCD are tenuous. But it was remastered, and the HDCD-decoded output (without Peak Extend) is different from the bits on the disc.
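The null test I'm describing amounts to a sample-by-sample subtraction. Here is a minimal sketch, assuming the decoded rip and the raw disc bits have already been level-aligned; the sample values are hypothetical, not data from any actual disc:

```python
# Hypothetical 16-bit sample streams (NOT real data from Amarok or any disc).
disc_bits = [100, -250, 3000, -32768]
decoded   = [100, -249, 3001, -32768]  # decoder altered some low-level bits

# Subtract sample-by-sample; any non-zero residue means decoding is not
# a "null" operation for this material.
residue = [d - o for d, o in zip(decoded, disc_bits)]
is_null_operation = all(r == 0 for r in residue)
```

If `is_null_operation` comes back False, as it does for every HDCD disc I have tried, the decoder is demonstrably doing something to the signal.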

*** Why do you need to attenuate the output of the HDCD decoder by one bit for comparison? ***

Because the WMP HDCD implementation lowers the input prior to decoding in order to allow for peak expansion (otherwise peaks would digitally clip).

For simplicity, WMP HDCD lowers the input by exactly 1 bit (or approximately 6dB). Other implementations, such as the Cirrus Logic one used in my Cary, may allow you to vary the attenuation (in Cary's case, between 0 and -10dB, defaulting to -10dB in the factory settings).
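The "1 bit is approximately 6dB" figure is just 20·log10 of a factor of two, and the level alignment for comparison is a simple halving. A quick sketch (my own arithmetic check, not WMP or Cirrus Logic code):

```python
import math

def bits_to_db(bits):
    """dB change from attenuating by a given number of bits (a factor of 2**bits)."""
    return 20 * math.log10(2 ** -bits)

wmp_attenuation = bits_to_db(1)  # about -6.02 dB, WMP's fixed pre-decode cut

# To align levels for the comparison, halve the disc samples by one bit,
# matching the decoder's pre-attenuation, before differencing.
disc_samples = [1200, -4096, 30000]   # hypothetical 16-bit values
aligned = [s // 2 for s in disc_samples]
```

Note that halving odd sample values discards the least significant bit, which is one reason real implementations dither rather than simply truncate.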

Why allow variable attenuation, and why attenuate more than -6dB? It appears that Cary also wanted to address 0dBFS+ levels (remember our conversation a few months ago?). 0dBFS+ levels are unlikely to exceed +3dB, so an attenuation of -10dB neatly handles both HDCD peak expansion and 0dBFS+ levels, with 1dB of headroom to "spare". Full marks to Cary.
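The headroom budget works out as follows - my own back-of-envelope sketch using the figures above (6dB for peak expansion, implied by WMP's 1-bit cut, and +3dB for 0dBFS+ levels):

```python
# Figures from the discussion above:
peak_extend_db = 6.0       # headroom needed for HDCD peak expansion (1 bit)
zero_dbfs_plus_db = 3.0    # 0dBFS+ levels are unlikely to exceed +3dB
cary_attenuation_db = 10.0 # Cary/Cirrus Logic factory-default attenuation

# -10dB covers both effects, with headroom left over:
spare_headroom_db = cary_attenuation_db - (peak_extend_db + zero_dbfs_plus_db)
```

That leftover 1dB is the headroom "to spare" mentioned above.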

As you can see from Charles' reply to your post, he *still* does not understand HDCD. Is it likely that Microsoft and Cary/Cirrus Logic would have implemented HDCD incorrectly, or is it more likely that Charles did not understand what he read?


