Home Digital Drive

Upsamplers, DACs, jitter, shakes and analogue withdrawals, this is it.

Huge misconception regarding HDCD

>> HDCD was a way to get 20 bit sound out of a redbook CD. <<

That is what Pacific Microsonics (PM) *claimed* for HDCD. The truth is that was simply marketing hyperbole. PM built an A/D converter designed by Keith Johnson, called the Model One. The later Model Two was similar but added support for both dual- and quad-sampling rates. There were three unique features of the PM A/D converters that comprised the HDCD system:

1) Peak Extend (PE) was a compansion algorithm that compressed the top 9dB of audio signal during recording into the top 3dB of digital codes on the disc. When played back through an HDCD-enabled DAC or CD player, a "sub-code" that replaced some of the audio signal in the 16th bit (LSB) would instruct the DAC to expand the compressed signal and restore the full dynamic range.

2) Low-Level Extension (LLE) was a method to automatically boost the gain as the audio signal dropped, starting when the signal level fell to -45dBFS. The gain was boosted in 0.5dB steps as the level fell, reaching a maximum shift of 4dB if the signal ever fell another 18dB, to -63dBFS. Again, when played back through an HDCD-equipped DAC or CD player, the instructions mixed into the LSB of the audio signal would instruct the DAC to lower the gain (and the background noise) by the appropriate amount.

3) Transient Filter (TF) was a method whereby the A/D converter measured the amount of high-frequency energy in the top octave. When it passed a certain threshold, the HDCD system would select from one of two available anti-aliasing filters (ie, "digital filters"). The original plan was apparently to have a complementary process during playback, but this never materialized. My best guess is that this was because Ed Meitner (then of Museatex) had beaten PM to the punch and already patented a DAC that switched reconstruction filters (ie, "digital filters") during playback, again by sensing the amount of high-frequency energy in the top octave.
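Since the actual HDCD transfer curves were proprietary, here is a purely hypothetical sketch (in Python) of what the PE and LLE decode mappings could look like, using only the dB figures quoted above. The knee location and the linear step spacing are my assumptions, not PM's published math:

```python
def peak_extend_decode(disc_dbfs):
    """Hypothetical Peak Extend decode curve (NOT the real HDCD math).

    Encoding squeezed the top 9 dB of signal into the top 3 dB of
    code values; decoding expands that span back out, so material
    below the knee plays 6 dB lower and dynamic range grows by 6 dB.
    """
    knee = -3.0  # dBFS; assumed location of the compression knee
    if disc_dbfs <= knee:
        return disc_dbfs - 6.0               # undecoded body, shifted down
    return -9.0 + (disc_dbfs - knee) * 3.0   # top 3 dB expanded back to 9 dB

def lle_decode_gain(disc_dbfs):
    """Hypothetical Low-Level Extension decode gain in dB.

    Encoding boosted levels below -45 dBFS in 0.5 dB steps, up to
    4 dB at -63 dBFS; the decoder cuts gain by the same schedule.
    Eight equal steps over the 18 dB span is my assumption.
    """
    if disc_dbfs >= -45.0:
        return 0.0
    steps = min(8, int((-45.0 - disc_dbfs) / 2.25))  # 8 steps over 18 dB
    return -0.5 * steps
```

With these toy curves, `peak_extend_decode(0.0)` returns 0.0 (full scale restored) while `peak_extend_decode(-3.0)` returns -9.0, recovering the 6dB of compressed headroom, and `lle_decode_gain(-63.0)` returns the full -4.0dB cut.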

The problem is that the claimed 20 bits of resolution is a horribly distorted representation of the truth. It was one of the greatest marketing misrepresentations in the history of high-end audio. In actuality, both PE and LLE could be *optionally* applied by the mastering engineer, and the instruction manual warned that there were specific reasons for not doing so on certain types of music. Also, there never was any way to decode the TF feature (which was always engaged). Yet every single CD made with a PM A/D converter would light up the mandatory "HDCD" logo light on a licensed DAC - even when no decoding of the disc was possible - apparently in an attempt to scare people into purchasing a new CD player or DAC with HDCD decoding (and from which PM received royalty payments).

The truth is that PE (*if* engaged by the mastering engineer) could only ever provide a maximum dynamic range increase of 6dB - and even then only if the recorded signal reached 0dBFS. Even in this extreme case, that adds only 1 bit of resolution, for 17 bits total.

The truth about LLE is even more underwhelming. *If* the mastering engineer chose to engage it, it only became active when the audio signal dropped below -45dBFS. I have analyzed scores of HDCD discs using the tools available in Foobar. For popular music, LLE was *only* ever engaged during song fadeouts. It turns out that -45dBFS is an extremely low level, about 7.5 bits below the maximum. Even with classical music recorded using LLE, the gain-shifting only activates infrequently - specifically during very quiet passages when only 1 or 2 instruments are playing. I have never seen an HDCD track use the full 4dB range of level shifting, as the signal level would have to fall to -63dBFS, about 10.5 bits below the maximum. The *theoretical* maximum gain shift of 4dB amounts to roughly another 0.6 bits of dynamic range.

If *both* features were engaged by the mastering engineer, and everything completely optimized in an extremely unlikely real-world scenario, the most that HDCD could boost the dynamic range would be 1.6 bits to 17.6 bits. In more realistic situations, engaging both features would increase the effective bit depth between 0 and roughly 1.2 bits with classical music, and between 0 and roughly 0.9 bits with popular music.
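The dB-to-bits arithmetic behind these numbers is straightforward - each bit of resolution is worth 20·log10(2), or about 6.02dB, of dynamic range - and is easy to check:

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02 dB of dynamic range per bit

def db_to_bits(db):
    """Convert a dynamic-range figure in dB to equivalent bits."""
    return db / DB_PER_BIT

print(db_to_bits(6))      # Peak Extend's 6 dB      ~ 1.0 bit
print(db_to_bits(4))      # LLE's maximum 4 dB      ~ 0.66 bit
print(db_to_bits(45))     # the -45dBFS threshold is ~ 7.5 bits down
print(db_to_bits(6 + 4))  # both fully engaged:      ~ 1.66 extra bits
```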

By now we have all had chances to hear the differences between 44/16 files and 44/24 files. The most common example was the 2009 remaster of The Beatles box set. The CDs were dithered down to 16 bits, while the "green apple" thumb drive contained the original 44/24 files (reduced from the 192/24 tape transfers made with Prism A/D converters). Yes, there is a difference in sound, but it is hardly "jaw-dropping" or "transformational". So if adding 8 true bits of resolution only improves the sound slightly, one wonders how much improvement would be heard with only 1 extra bit of resolution - *if* the HDCD features were even engaged by the mastering engineer.

So where did PM come up with the "20 bits of resolution" claim? Simple - the extra bits came from the optional dither algorithms built into the A/D converter. This is where it gets weird. Prior to the PM converters, by far the most common alternative was the Sony PCM-1610. While that converter had no built-in dither, the incoming audio signal was always dithered anyway - by the tape hiss present on the analog tape being transferred to digital. There is no tape recorder on the planet with an unweighted S/N ratio greater than 96dB, which is what would be required to create any need for external dither to be added.
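For readers unfamiliar with what dither actually does, here is a generic sketch of TPDF-dithered 16-bit quantization - the standard textbook technique, not PM's specific algorithm:

```python
import random

def quantize_16bit(sample, dither=True):
    """Quantize a float sample in [-1.0, 1.0) to a 16-bit integer.

    TPDF dither (the sum of two uniform random values, +/-1 LSB peak)
    decorrelates the quantization error from the signal, turning
    signal-correlated distortion into a benign, steady noise floor -
    the same job tape hiss was already doing for free.
    """
    lsb = 1.0 / 32768.0                    # one 16-bit quantization step
    if dither:
        sample += (random.random() - random.random()) * lsb
    q = int(round(sample / lsb))
    return max(-32768, min(32767, q))      # clamp to the 16-bit range
```

Without dither, every sample simply rounds to the nearest of 65,536 levels and the rounding error tracks the signal; with dither, the error is randomized at the cost of a slightly higher noise floor.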

The next question is why HDCD was so enthusiastically received by the audio press and many mastering studios. Again the answer is quite simple - it sounded far better than the competing Sony unit. *Not* because of the HDCD features, but simply because it was designed to a far higher "audiophile" standard by Keith Johnson, an extremely talented designer.

The A/D converter is simply one box in the chain between the recording microphone and the playback speaker. We have all heard the difference made by replacing (say) a cheap preamplifier - built with ancient, low-cost op-amps, electrolytic coupling capacitors, and cut-rate parts throughout - with a mega-buck preamplifier from one of the top designers on the planet, using fully discrete circuitry and state-of-the-art parts, designed for the absolute maximum performance.

A change like this can completely transform the sound of a home stereo system. And a similar change to the A/D converter can completely transform the sound quality of a CD.

That is the real story of HDCD - a superior sounding product that was sold through deliberately misleading marketing strategies and false comparison setups. For example at the 1997 CES, PM gave out free CDs with "comparison" tracks purporting to show the differences made by HDCD processing. The natural assumption was that the tracks were made with the same converter and simply engaging and disengaging the HDCD processing. But no, instead PM made three tracks with the PM A/D converter and three "comparison" tracks with a Sony PCM-1610 converter.

In addition HDCD was dreamed up to be a money-making machine. The converters were sold to the studios for $20,000 each (I'm unsure if there were licensing costs there.) On the playback side each manufacturer had to pay a $5,000 licensing fee up front (later raised to $10,000), plus purchase a special decoding IC from PM. The IC was priced artificially high so as to constitute an easy-to-track royalty payment for each player sold.

It fooled a lot of people for a long time. There were two separate events that led to the demise of HDCD. The first was that only a couple of years after HDCD reached the public, both DVD-Audio and SACD offered true high-resolution formats, obviating the need to "hop up" the out-of-date Redbook CD format (by only a single bit of actual resolution). The second was that PM had paid roughly $500,000 to develop their custom decoding IC chip. It was made on a 600 nanometer process. (By comparison, we are now down to the 12 to 16 nanometer range with semiconductor processes.) By 2002 or so that technology was so out of date that the fabrication house dismantled the line and halted production. It would have cost another $500,000 to make a new version. There was an aborted attempt to implement it as a pre-programmed Motorola (?) DSP chip, but apparently only one sample batch was ever made before PM sold the entire thing to Microsoft, where it died off fairly quickly.

The only positive note in the whole story is that a good number of mastering houses still use the PM A/D converters. Even though the Model Two is over 15 years old, there are only a handful of other brands that can compete with it sonically. It is still one of the best sounding A/D converters ever made, just as the Marantz 9 was one of the best sounding power amplifiers ever made. Good sound never goes out of fashion.

As far as any similarities between the 20-year old story of HDCD and the current story of MQA, I will leave that up to the reader to judge.

As always, strictly my own opinions and not necessarily those of my employer or guru.

EDIT: The above post was dashed off quickly and likely contains some minor errors. Nevertheless I believe the overall arc is historically accurate. Corrections are highly welcomed.

Edits: 06/13/17 06/14/17
