Audio Asylum Thread Printer
Get a view of an entire thread on one page
I know they are quite different, but I am told they have similarities. Without going into the complexity of an instruction manual for an ICBM, can someone outline the basics of each and their pluses and minuses?
This should probably be posted in the Digital forum.
HDCD was a way to get 20-bit sound out of a Redbook CD. It had to be encoded during recording, and to get the full benefits, played back on a player with an HDCD filter. See the link below for a better, more detailed explanation.
MQA is a lossy codec that takes 24/192 or 24/176 material and folds the data above 16/44 into otherwise unused low-order bits. It then uses two "unfolds": at this time the first is done in software, as Tidal does, while the second unfold is done in the DAC and requires their proprietary filter. PS Audio will be releasing a firmware update for their Bridge II that does both unfolds but not the "de-blurring" - and only because their customers wanted it. They are dead set against allowing it in their DACs.
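For what it's worth, here is a toy Python sketch of the general "folding" idea: coarsely quantized high-band data gets buried in the least significant bits of the baseband samples, where undecoded playback treats it as low-level noise until an "unfold" step extracts it. This is only my illustration of the concept - the real MQA scheme is proprietary and far more elaborate (its bit allocation, hierarchical "origami" folds, and noise shaping are undisclosed), and the function names here are my own.

```python
def fold(base_sample_24, hi_band_value, hi_bits=8):
    """Pack a coarsely quantized high-band value into the low bits of a
    24-bit baseband sample. Toy illustration only - not the actual MQA codec."""
    assert 0 <= hi_band_value < (1 << hi_bits)
    cleared = base_sample_24 & ~((1 << hi_bits) - 1)  # discard the baseband LSBs
    return cleared | hi_band_value                    # bury the high-band data there

def unfold(folded_sample_24, hi_bits=8):
    """Recover the buried high-band value and the (now lossy) baseband sample."""
    hi = folded_sample_24 & ((1 << hi_bits) - 1)
    base = folded_sample_24 & ~((1 << hi_bits) - 1)   # its original LSBs are gone
    return base, hi
```

Note the lossiness is visible right in the sketch: the baseband sample comes back with its low bits zeroed, which is one reason critics argue a true hi-res file should beat the MQA version.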
One obvious difference is that HDCD was part of the recording process while MQA can be applied to previously recorded music. While HDCD was subtle MQA can seem to be a big improvement. Trouble is nobody I am aware of has made available any before and after files. We are always stuck comparing apples and oranges. I personally believe that a true hi-rez file say 24/192 will be better than the MQA version. There is lots of info out there with much better explanations of what MQA is claimed to be and how it works.
The two are very different, and even though they are still releasing HDCDs it is getting harder to find DACs or CD players that have the Pacific Microsonics PMD-100 or PMD-200 filters. I know the older Oppos had them, but I have no idea if they still do. If you have a lot of HDCDs you might want to obtain a player or DAC that will decode them. I always found HDCD very subtle.
> > HDCD was a way to get 20 bit sound out of a redbook CD. < <
That is what Pacific Microsonics (PM) *claimed* for HDCD. The truth is that was simply marketing hyperbole. PM built an A/D converter designed by Keith Johnson, called the Model One. The later Model Two was similar but added support for both dual- and quad-sampling rates. There were three unique features of the PM A/D converters that comprised the HDCD system:
1) Peak Extend (PE) - was a compansion algorithm that compressed the top 9dB of audio signal during recording into the top 3dB of digital codes on the disc. When played back through an HDCD-enabled DAC or CD player, a "sub-code" that replaced some of the audio signal in the 16th bit (LSB) would instruct the DAC to expand the compressed signal and restore the full dynamic range.
2) Low-Level Extension (LLE) - was a method to automatically boost the gain as the audio signal dropped, starting when the signal level fell to -45dBFS. It was boosted in 0.5dB steps as the level fell, reaching a maximum gain shift of 4dB if the signal ever fell another -18dB to -63dBFS. Again when played back through an HDCD-equipped DAC or CD player, the instructions mixed in the LSB of the audio signal would instruct the DAC to lower the gain (and background noise) by the appropriate amount.
3) Transient Filter (TF) - was a method whereby the A/D converter measured the amount of high-frequency energy in the top octave. When it passed a certain threshold, the HDCD system would select from one of two available anti-aliasing filters (ie, "digital filters"). The original plan was apparently to have a complementary process during playback, but this never materialized. My best guess is that this was because Ed Meitner (then of Museatex) had beaten PM to the punch and already patented a DAC that switched reconstruction filters (ie, "digital filters") during playback, again by sensing the amount of high-frequency energy in the top octave.
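To make the PE and LLE descriptions above concrete, here is a rough Python sketch of the two level mappings as I understand them from the figures quoted. The exact knee shape and the LLE step spacing are my guesses (the 2.25 dB spacing simply divides the 18 dB span from -45 to -63 dBFS into eight 0.5 dB steps), not Pacific Microsonics' actual curves.

```python
import math

def peak_extend_encode(level_db):
    """Compress 9 dB of input (-3 to +6 dBFS) into the top 3 dB of codes
    at a 3:1 ratio. Levels at or below the -3 dBFS knee pass unchanged."""
    if level_db <= -3.0:
        return level_db
    return -3.0 + (level_db + 3.0) / 3.0

def peak_extend_decode(level_db):
    """Invert the encoder: expand the top 3 dB of codes back out to 9 dB."""
    if level_db <= -3.0:
        return level_db
    return -3.0 + (level_db + 3.0) * 3.0

def lle_boost_db(level_db):
    """Gain boost applied below -45 dBFS, in 0.5 dB steps, capped at
    4 dB once the signal falls to -63 dBFS."""
    if level_db >= -45.0:
        return 0.0
    return min(4.0, 0.5 * math.floor((-45.0 - level_db) / 2.25))
```

The key numbers from the post fall straight out: a +6 dBFS peak encodes to exactly 0 dBFS (the 6 dB of extra headroom discussed below), and the boost only reaches its full 4 dB at -63 dBFS.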
The problem is that the claimed 20 bits of resolution is a horribly distorted representation of the truth. It was one of the greatest marketing misrepresentations in the history of high-end audio. In actuality, both PE and LLE could be *optionally* applied by the mastering engineer, and the instruction manual warned that there were specific reasons for not doing so on certain types of music. Also, there never was any way to decode the TF feature (which was always engaged). However, every single CD made with a PM A/D converter would light up the mandatory "HDCD" logo light on a licensed DAC - even when no decoding of the disc was even possible - apparently in an attempt to scare people into purchasing a new CD player or DAC that had HDCD decoding (and from which PM received royalty payments).
The truth is that PE (*if* engaged by the mastering engineer) could only ever provide a maximum dynamic range increase of 6dB - and even then only if the recorded signal reached 0dBFS. In the very extreme case, this only adds 1 bit of resolution, to 17 bits.
The truth about LLE is even more underwhelming. *If* the mastering engineer chose to engage it, it only became active when the audio signal dropped below -45dBFS. I have analyzed scores of HDCD discs using the tools available in Foobar. For popular music LLE was *only* ever engaged during song fadeouts. It turns out that -45dBFS is an extremely low level, nearly 8 bits below the maximum. Even with classical music recorded using LLE, the gain-shifting only activates infrequently - specifically during very quiet passages when only 1 or 2 instruments are playing. I have never seen an HDCD track ever use the full 4dB range of level shifting, as the signal level would have to fall to -63dBFS, nearly 11 bits below the maximum. The *theoretical* maximum gain shift of 4dB amounts to about another 0.6 bits of dynamic range.
If *both* features were engaged by the mastering engineer, and everything completely optimized in an extremely unlikely real-world scenario, the most that HDCD could boost the dynamic range would be 1.6 bits to 17.6 bits. In more realistic situations, engaging both features would increase the effective bit depth between 0 and roughly 1.2 bits with classical music, and between 0 and roughly 0.9 bits with popular music.
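The dB-to-bits arithmetic above is easy to check for yourself: each bit of PCM word length is worth 20·log10(2) ≈ 6.02 dB of dynamic range, so the maximum gains quoted in the post convert directly.

```python
import math

# One bit of PCM word length buys 20*log10(2) ~= 6.02 dB of dynamic range.
DB_PER_BIT = 20 * math.log10(2)

pe_bits = 6.0 / DB_PER_BIT    # Peak Extend: at most 6 dB of extra headroom
lle_bits = 4.0 / DB_PER_BIT   # Low-Level Extension: at most 4 dB of gain shift
total_bits = pe_bits + lle_bits

print(round(pe_bits, 2), round(lle_bits, 2), round(total_bits, 2))  # → 1.0 0.66 1.66
```

So even with both features engaged and fully exercised, the theoretical ceiling is roughly 1.6-1.7 bits over the Redbook 16 - nowhere near the 4 extra bits the "20-bit" marketing implied.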
At this date we have all had chances to hear the differences between 44/16 files and 44/24 files. The most common example was the 2009 remaster of The Beatles box set. The CDs were dithered down to 16 bits, while the "green apple" thumb drive contained the original 44/24 files (reduced from the 192/24 tape transfers made with Prism A/D converters). Yes there is a difference in sound, but it is hardly "jaw-dropping" or "transformational". So if adding 8 true bits of resolution only improves the sound slightly, one wonders how much improvement would be heard with only 1 extra bit of resolution - *if* the HDCD features were even engaged by the mastering engineer.
So where did PM come up with the "20 bits of resolution" claim? Simple - they added the extra bits as the A/D converter also had optional dither algorithms. This is where it gets weird. Prior to the PM converters, by far the most common alternative was the Sony PCM-1610. While it did not have any dither built into that converter, the incoming audio signal was always dithered anyway - by the tape hiss present on the analog tape that was being transferred to digital. There is no tape recorder on the planet that has an unweighted S/N ratio greater than 96dB, which is what would be required to create the need for external dither to be added.
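A quick Python demonstration of why a noise floor acts as dither, as described above: a steady signal smaller than one quantization step vanishes completely without dither, but survives (as an average) once roughly an LSB of noise is added before quantization - exactly the role tape hiss plays during an analog-to-digital transfer. The signal level and sample count here are arbitrary choices for the demo.

```python
import math
import random

random.seed(0)

def quantize(x):
    """Round to the nearest integer code (1 unit = 1 LSB)."""
    return math.floor(x + 0.5)

SIGNAL = 0.3  # a steady level of 0.3 LSB - smaller than one quantization step

# Without dither, the quantizer simply rounds the signal out of existence.
undithered = [quantize(SIGNAL) for _ in range(10000)]

# TPDF dither (two uniform +/-0.5 LSB noise sources) decorrelates the
# quantization error, so the signal survives as the average of the output.
dithered = [quantize(SIGNAL + random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5))
            for _ in range(10000)]
average = sum(dithered) / len(dithered)  # recovers roughly 0.3
```

The undithered list is all zeros - the sub-LSB signal is gone - while the dithered average lands right around 0.3 LSB. Analog tape hiss sitting above the 16-bit noise floor does this job for free, which is the point being made about the Sony PCM-1610 era.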
The next question is why was HDCD so enthusiastically received by the audio press and many mastering studios? Again the answer is quite simple - it sounded far better than the competing Sony unit. *Not* because of the HDCD features but simply because it was designed to a far higher "audiophile" standard by Keith Johnson, an extremely talented designer.
The A/D converter is simply one box in the chain between the recording microphone and the playback speaker. We have all heard the difference made by replacing (say) a cheap preamplifier made with very old, low cost op-amps, electrolytic coupling capacitors, and low quality parts throughout with a mega-buck preamplifier made by one of the top designers on the planet using fully discrete circuitry, state-of-the art parts throughout, and designed for the absolute maximum performance.
A change like this can completely transform the sound of a home stereo system. And a similar change to the A/D converter can completely transform the sound quality of a CD.
That is the real story of HDCD - a superior sounding product that was sold through deliberately misleading marketing strategies and false comparison setups. For example at the 1997 CES, PM gave out free CDs with "comparison" tracks purporting to show the differences made by HDCD processing. The natural assumption was that the tracks were made with the same converter and simply engaging and disengaging the HDCD processing. But no, instead PM made three tracks with the PM A/D converter and three "comparison" tracks with a Sony PCM-1610 converter.
In addition HDCD was dreamed up to be a money-making machine. The converters were sold to the studios for $20,000 each (I'm unsure if there were licensing costs there.) On the playback side each manufacturer had to pay a $5,000 licensing fee up front (later raised to $10,000), plus purchase a special decoding IC from PM. The IC was priced artificially high so as to constitute an easy-to-track royalty payment for each player sold.
It fooled a lot of people for a long time. There were two separate events that led to the demise of HDCD. The first was that only a couple of years after HDCD was available to the public, both DVD-Audio and SACD offered true high-resolution formats, obviating the need to "hop up" the out-of-date Redbook CD format (by only a single bit of actual resolution). The second was that PM had paid roughly $500,000 to develop their custom decoding IC chip. It was made on a 600 micron process. (By comparison we are now down to the 12 to 16 micron range with semiconductor processes.) By 2002 or so that technology was so out of date that the fabrication house was dismantling the line and halted production. It would have cost another $500,000 to make a new version. There was an aborted attempt to fabricate it as a pre-programmed Motorola (?) DSP chip, but apparently there was only one sample batch ever made before PM sold the entire thing to Microsoft, where it died off fairly quickly.
The only positive note to the whole story is that there are still a good number of mastering houses that still use the PM A/D converters. Even though the Model Two is over 15 years old, there are only a handful of other brands that can compete with it sonically. It is still one of the best sounding A/D converters ever made, just as the Marantz 9 was one of the best sounding power amplifiers ever made. Good sound never goes out of fashion.
As far as any similarities between the 20-year old story of HDCD and the current story of MQA, I will leave that up to the reader to judge.
As always, strictly my own opinions and not necessarily those of my employer or guru.
EDIT: The above post was dashed off quickly and likely contains some minor errors. For example, the units used in the discussion of semiconductor fabrication should have been "nanometers" and not "microns". Nevertheless I believe the overall arc is historically accurate. Corrections are highly welcomed.
I still see HDCD show up on Grateful Dead releases. My guess is they like the AD processor.
I know there are HDCD lists out there, do any of them show whether the processing was actually used?
Were the PM chips in the DACs as good as the AD processors? If I followed you, it was very expensive to use those filters, although Camelot used them in their Arthur V3 Mk2 and they were not a large company.
I answered the OP when I saw no one else had. I have to admit, my understanding was very basic, based on the propaganda.
So thanks for the detailed info.
Most of my HDCD CDs are Grateful Dead, and they do sound very good on both non-HDCD decoding machines and HDCD decoding machines. However, they sound better, to me, on my CD players (NAD & Marantz) with HDCD decoding. Should that be the case given what Charles said?
> > Most of my HDCD CDs are Grateful Dead, and they do sound very good on both non-HDCD decoding machines and HDCD decoding machines. However, they sound better, to me, on my CD players (NAD & Marantz) with HDCD decoding. Should that be the case given what Charles said? < <
I'm pretty sure that the Dead were early adopters of HDCD. I don't know how many CDs they released before HDCD came along. I'm pretty sure that current releases still use the PM A/D converter (which will light up the HDCD light) but it's probably been around 10 years since a Dead disc was released that actually could benefit from decoding. So the answer to your question is a big fat "maybe".
It seems there are at least two variables that one would need to have knowledge of to give a definitive answer:
1) Do all of your CDs sound better on the NAD and Marantz players, or just those that light up the "HDCD" light?
2) The fact that the HDCD light turns on simply means that the excellent sounding Pacific Microsonics A/D converter designed by Keith Johnson was used to master the recording. It does not necessarily mean that there is anything to decode. *In general*, the older the disc the more likely it would be to require decoding. Discs released from 1997 through roughly 2003 *usually* had some features engaged, as there were quite a few high-end-only CD players that could decode them. Once the decoding chips were discontinued, most people using the PM A/D converters stopped using the HDCD encoding features, and by 2007 I don't think anybody was using the HDCD encoding features - but those recordings still light up the light on an HDCD-equipped CD player or DAC.
There are two ways that I know of to determine with *certainty* whether either of the HDCD processes was engaged on a particular disc. The free method is to download Foobar and the HDCD decoding module, then cut-and-paste the script I posted on the Head-Fi forum (linked elsewhere in this thread), and then monitor the Status Bar in the Foobar program while playing each disc.
Easier but *much* more expensive is to use the Ayre QX-5 Twenty DAC. When it detects the "sub-code" signifying a recording was made with the PM A/D converter, an LED will illuminate. If any of the HDCD features were used that can be decoded (Peak Expand and/or Low-Level Extension), the unit will decode those properly and the letter "d" will show in the sample-rate display (eg, "d44" instead of just "44") to indicate "decoding" is occurring.
As always, strictly my personal opinion and not necessarily that of my employer or dog-walker.
I am glad you saw my post. I was concerned that it might be too far down the page.
That is a very good point, and I don't know the answer. There is a Marantz universal player hooked to the same system as the NAD, where I can do the comparison that you suggest. It was my subjective impression that HDCDs were better on the NAD, but I did not do formal listening tests. Can the Foobar program be run on a Mac? I'd love to have the Ayre, but the expense is out of the question at this time. Many of my Dead CDs were remixed when HDCD was still in vogue. They also state that HDCD is included.
The Marantz is driving my Sennheiser HD555 headphones in the living room. Both the Marantz and NAD are Lampizator favorites, but otherwise they are unremarkable in technology and were not expensive to me. To my vinyl-loving ears they sound pretty good.
> > Can the Foobar program be run on a Mac? Many of my Dead CD's were remixed when HDCD was still in vogue. They also state that they are HDCD included. < <
Sadly, Foobar is Windows only. Perhaps a friend can help. I also have a friend living in a full Apple ecosystem with several computers, tablets, and phones. He keeps a Windows machine just for ripping CDs with dBpoweramp, as he (and I) have found that to be the very best ripping program. Perhaps you can pick up an old laptop for cheap just to run Foobar - it's a very "light" program.
If a CD states "HDCD" on the cover or liner notes, chances are extremely high that it uses the HDCD encoding features and will sound better (*if* all else is equal!) when properly decoded. What started most of the internet lists was the fact that *all* discs made with the PM A/D converter will light up the "HDCD" light on playback. On those discs the cover and liner notes say nothing about HDCD because the mastering engineer did not engage the features, and people mistakenly thought they were stumbling across "hidden treasure". So another good rule of thumb is looking for some "HDCD" marking on the album artwork. If it's there, the disc almost certainly used the encoding features. If not, it almost certainly used the PM A/D converter but with the HDCD features turned off, and there is nothing to decode.
Hope this helps. As always, strictly my personal opinions and not necessarily those of my employer or Pacific Microsonics.
Given the sonic superiority of the PM Model One and Model Two when they came into existence, do you think the high-end audio world would have been better off if there had been a widespread implementation of the HDCD process in almost all CDs? It seems to me that this would have ensured that a superior-sounding converter would have been used for the digital process, and that we would all have benefited with better-sounding CDs (assuming that someone did not do something else in the mastering chain to muck things up).
> > Given the sonic superiority of the PM Model One and Model Two when they came into existence, do you think the high-end audio world would have been better off if there had been a widespread implementation of the HDCD process in almost all CDs? < <
Let's imagine a world where true high-res never came along to disrupt the HDCD business model. I would imagine that over time, more and more mastering engineers would have adopted HDCD. Let's further imagine that it was so successful that HDCD became the ubiquitous de facto standard for CD.
It would be analogous to saying that the Spectral Audio preamp was the world's best and would be the only one allowed to be used by the "music police". No other manufacturer would ever be able to innovate or improve - only engineers who were contracted to Pacific Microsonics (eg, Keith Johnson).
My personal opinion is that this would be an improvement over the previous de facto standard of the Sony A/D converter, but that in the long run innovation, creativity, and improvement would be stifled. What do you think?
As always, solely my personal opinions and not necessarily those of my employer or mastering engineer.
You are right about the need for ongoing improvement in A/D converters!
I do think that if HDCD had caught on big, Keith Johnson would have also created better A/D converters beyond the Model Two, which is already at or close to the best presently. According to your posts, most of the HDCD advantage was in the performance of the A/D converter, which was ahead of its time. Engineers still could have used other, newer converters that were non-HDCD as things went forward. My point was that a rapid embrace of the PM converter by mastering houses would have resulted in quite a few better-sounding CDs over the years to the present. And I am sure that PM would have created even better filter chips at the user end.
Oh, by the way. I appreciate your very knowledgeable posts on these topics, and the lovely design work that you are doing with Ayre equipment.
> > I do think that if HDCD had caught on big, Keith Johnson would have also created better A/D converters beyond the Model Two, which is already at or close to the best presently. < <
Are you sure? This is very much like saying we should all use Spectral electronics in our systems. Please don't misunderstand - I would *far* rather have Spectral electronics than a Sony receiver. But is Spectral "better" than Audio Research, Conrad-Johnson, Vacuum State Electronics, Aesthetix, VTL, Convergent Audio Technology, or Ayre? Even if you say "Yes!", will that still hold true five years from now? Ten years from now? Twenty years from now? These are some of the issues that come along with proprietary closed formats.
As always, strictly my personal opinions and not necessarily those of my employer or pool boy.
Did the PM A/D converter come in 16- and 24-channel versions? I'm thinking that with all the multi-track recording they would have to use them right from the start.
A friend of mine built a studio in his basement and the 8 channel AD/DA he has is 16/44 or 48, very outdated, but also not impressive at all.
That is one of the things with MQA: they say they have to know what converters were used. I have seen a lot of albums where the recording was done in, say, 5 or 6 different studios.
With the Dead CDs, the only consistency was the stuff Jerry did with Grisman, at Grisman's home studio.
I have a Lindemann 825 that does HDCD when using the built-in transport, but does not on any of its other inputs.
> > Did the PM A/D converter come in 16- and 24-channel versions? I'm thinking that with all the multi-track recording they would have to use them right from the start. < <
No, the PM A/D converters were only two channels. They were typically used for transferring stereo analog tape masters to CD format. Multi-channel digital audio workstations (DAWs) were introduced in the early '90s, but recordings made with multi-channel digital would not need to go through another round of A/D conversion. I'm guessing that the Dead continued to record to analog tape. Although this practice has become less and less common, some studios (eg, Shangri-La, run by famed producer Rick Rubin) still use this workflow - one final conversion to digital at the very end.
As always, strictly my own opinions, prone to error and not necessarily those of my employer or producer.
Always a pleasure to read your posts, Charles. I could not get the mix of technology discussion and industry history from anybody else.
I still like to understand enough about the technology behind buzzwords to be an informed consumer.
As an occasional consumer of audio gear and a regular consumer of recorded music, I look at new technology like MQA and decide whether it makes sense for me. For HDCD, SACD, DVD-A and now MQA my reaction has been "no sale". Very little of the music that matters to me will ever appear in those formats. Becoming wedded to formats like these also restricts my choices in gear too.
I buy hi-res PCM downloads instead of Redbook versions when I want the performances and the possible improvements in sound quality might be worth a modest premium. Pragmatic decisions based on value for money.
my blog: http://carsmusicandnature.blogspot.com/
"I buy hi-res PCM downloads instead of Redbook versions when I want to performances and the possible improvements in sound quality might be worth a modest premium. Pragmatic decisions based on value for money."
Tough to argue with this approach.
...better and more transparent converters have come along since then. And a side note: DSD128 is the most analog-like format I have heard for archiving. Actually, virtually indistinguishable from the source.
My comments about the HDCD A/D converter have no relation to the fact that other excellent converters eventually came out. That is an obvious given. As far as DSD and analog, which medium people perceive as more desirable is open to debate and opinion. I have heard really good and less good from all of the media formats, and tend to believe that many of the more obvious faults that exist are due to incompetent recording, engineering, mastering, and manufacturing.
FYI, my comment about DSD128 was in relation to archiving. That is the key word, meaning that DSD as the delivery medium is not ideal.
An analog tape archived to DSD128, then mastered to 24-bit PCM, is an excellent workflow.
> > I still see HDCD show up on Grateful Dead releases. My guess is they like the AD processor. < <
When it was first released the Pacific Microsonics A/D was dramatically better than the then-ubiquitous Sony - which is unsurprising as not many audiophiles have Sony preamps in their systems. IMO, the PM converters are still one of the top three or four in the world from the standpoint of sonic performance. Keith Johnson is an excellent designer and knows how to make a good sounding piece of equipment. (Think of Keith's other designs for Spectral Audio, for example.)
> > I know there are HDCD lists out there, do any of them show whether the processing was actually used? < <
It is true that there are many "HDCD lists" out on the interwebs. The problem is that the PM A/D converter will light up the "HDCD" light on *any* disc made using it - even if the mastering engineer turned all of the HDCD encoding features off. This is even true for all Reference Recordings (Keith Johnson's label) releases after 2009 - they continued to use the PM A/D converter as it sounds very good, but with all the HDCD features turned off, as there is almost no way to decode them any more.
The way the world was able to "peek behind the curtain" was when Foobar released their HDCD decoding add-on module. If you are curious just download Foobar for free and click on the link below for commands you can cut-and-paste that will display which HDCD features are actually engaged on any particular disc.
> > Were the PM chips in the DACs as good as the AD processors? < <
In the last millennium, only three high-end manufacturers had the knowledge to build custom digital filters - Wadia, Theta, and almost a decade later, dCS. All other manufacturers used off-the-shelf chips. Of those chips, the Pacific Microsonics combination digital filter/HDCD decoder had a good reputation for sonics for several years. Its digital filter was a conventional brickwall design, but had perhaps 1.5x or 2x more taps than other brands. The problem was that it did not support sample rates above 48kHz. So for about 5 years, the PMD-100 was the "go-to" digital filter chip for high-end manufacturers and even a couple of mid-fi brands - but *never* the mass market. The PMD-100 digital filter only cost about $5 more than a non-HDCD part, but also required the manufacturer to pony up a $5,000 licensing fee, which was later raised to $10,000.
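The "more taps" point is easy to demonstrate. Below is a minimal windowed-sinc lowpass in pure Python - my own illustration, not the PMD-100's actual filter, whose coefficients were never published - showing that quadrupling the tap count of a brickwall design dramatically steepens the transition just past cutoff.

```python
import math
import cmath

def windowed_sinc_lowpass(num_taps, cutoff):
    """Hamming-windowed sinc FIR lowpass; cutoff is a fraction of the sample rate."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2.0
        ideal = 2.0 * cutoff if k == 0 else math.sin(2.0 * math.pi * cutoff * k) / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / m)  # Hamming window
        taps.append(ideal * window)
    return taps

def magnitude_db(taps, freq):
    """Magnitude response in dB at freq (fraction of the sample rate)."""
    h = sum(t * cmath.exp(-2j * math.pi * freq * n) for n, t in enumerate(taps))
    return 20.0 * math.log10(abs(h) + 1e-300)

short_fir = windowed_sinc_lowpass(31, 0.25)   # a modest filter
long_fir = windowed_sinc_lowpass(127, 0.25)   # ~4x the taps, same cutoff
# Just past the 0.25 cutoff (e.g. at 0.27), the long filter is already deep
# in its stopband while the short one is still partway through its transition.
```

More taps means a narrower transition band for the same window, which is exactly why a filter chip with 1.5x or 2x the taps of its competitors could brag about a sharper brickwall.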
Once true high-resolution became available in the form of DVD in 1997, high-end manufacturers had to choose between high-res and HDCD decoding. I think that maybe one or two products were made (briefly) with two digital filters in order to support both. Finally PM released the PMD-200, which was a pre-programmed Motorola DSP chip that could both decode HDCD and also provide digital filtering for high sample rates. As far as I can tell Motorola only made a sample batch of these parts. They showed up in the Sonic Frontiers DACs for a short time, and then Microsoft purchased HDCD lock, stock, and barrel. For several years in the early 2000s, Windows Media Player would decode HDCD, but I don't know that anybody cared or even really noticed.
By the time that HDCD had died, almost all DAC chips came with their own built-in digital filters. A few had selectable algorithms, but just as with IC op-amps, I don't think that there are people at large semiconductor manufacturing companies sitting around doing listening tests to make the very best sounding filters (or op-amps).
"If you want it done right, do it yourself" is an interesting saying and there are now at least a dozen high-end companies creating their own digital filters. It seems clear that the only reason to do this would be if you really believed it sounded better than what was available off-the-shelf in a DAC chip (although I'm sure there are cynics who will say it all sounds the same and is just an excuse to charge more).
As always, strictly my own opinions and not necessarily those of my employer or Mother Theresa.
Charles, do you happen to have Lucinda Williams "Car Wheels On A Gravel Road"? If so, could you check it to see if it is real or fake HDCD?
I tried doing a search, no luck on that CD, but I found a discussion on HDCD that was quite amusing. See the link, even back then Kal was correcting misinformation.
And for another good laugh, here is a link to Corey Greenberg's review of Howard's book. All from 2005.
> > Charles, do you happen to have Lucinda Williams "Car Wheels On A Gravel Road"? If so, could you check it to see if it is real or fake HDCD? < <
Sorry, I like that album a lot but never bought it. It was released in 1998, so if it lights up the HDCD light, chances are that it did use an HDCD feature that can be decoded but that is just a guess. The PM A/D converter is still one of the best sounding units ever made. Any mastering engineer that cared enough to purchase and use it was clearly dedicated to sound quality. Right off the bat those are two good signs - a great A/D converter and a mastering engineer who cares about sound quality.
As noted in the previous post LLE was not meaningful for popular music. As to whether PE was used would be just a guess without running the ripped file through Foobar. The operator's manual states:
Listening to both undecoded as well as decoded 16-bit playback is important, since HDCD amplitude encoding effects such as Peak Extension limiting are more audible undecoded.

Limited Dynamic Range Pop or Rock

The best method to record highly compressed, limited dynamic range material depends greatly on the results that are desired with undecoded playback.

Using Peak Extension allows very high average recording levels without "clipping" or generating "overs". This approach can be used to get the "hottest" possible sound (almost no dynamics) during undecoded playback for air play, with decoding restoring normal dynamics for home listening.

However, because Peak Extension limiting has an "easy over" curve that begins to affect the signal at -3 dBfs, it usually shouldn't be used with highly compressed source material that will almost always be in the limiting curve, unless a highly limited or distorted sound is desired during undecoded playback.

Typically, Peak Extension recordings do not have the "crunch" or "edge" produced by hard clipping that is sometimes desired for certain types of rock material.

To get a hard "crunch" without any "easy over" limiting, turn Peak Extend off and adjust DSPGAIN to a level just below full scale, usually -0.1 dB. The digital input signal level can then be adjusted using an external device such as a 24-bit editing workstation. This allows as much clipping as desired without generating any "overs". To eliminate the need for an external gain adjusting device, the Model Two can be put into a dual output mode with digital output 2 set to HDCD_24, and digital output 1 set to HDCD_16, and offset -0.1 dB relative to output 2 using OUT1OFS in the Levels Menu (see page 36). DSPGAIN can then be adjusted to provide "crunch" on digital output 1 without generating any "overs". Digital output 2 may then "over", but isn't used.

When a "dry" or "punchy" low level sound is desired with limited dynamic range material that has little ambient information, Low Level Extension can be turned off.
That gives a "peek behind the curtains" as to the job of the mastering engineer. Clearly they are looking to create a "sound", not necessarily a direct transfer of the microphone signals to the disc.
I read your link to CG's review of Howard Ferstler's book. Pretty funny stuff! And I have to admit that as a teenager in the '70s I also had a Watts "Dust-Bug"... :-)
I still own the Watts Dust Bug, but haven't used it in 40 years. I wonder if it is a collectable?
Around the days of the Watts, a few friends and I tried the original Discwasher, and found it to create static. So for many years I used a 3" Staticmaster; it had a horsehair brush and a strip of polonium. I still use the brush, but the polonium strip is long expired. I also use the AQ record brush.
You're probably right about Lucinda's CD actually using the HDCD settings. That album was recorded, scrapped, recorded again, and then she spent a lot of time adding and removing overdubs. I read that she was obsessed with getting it just right. All the effort paid off, as she won Grammys and lots of critical acclaim. I have found all of her albums to be well recorded. With the "Blessed" deluxe release, the included second CD contains the demos recorded in her kitchen on a portable Zoom recorder. And they are surprisingly listenable.
I've always admired your designs, but I live in Milwaukee, and I have never been aware of a dealer who sold them. So, I have never had an opportunity to hear any of your designs, particularly the amps would have been of interest. I was joking over at PS Audio that with the feedback on the latest firmware update, they should get the bronze busts ready for the Audio Hall Of Fame. I think there would be a place for you, if one existed.
I think everyone appreciates your contributions here.
Another winner -Charles should write a short book on the tech-history of digital, starting with custom filters in the late 1980s.
And HDCD was NOT the only format to fool people. So did Sony, making us think that DSD was a 1-bit system (in practice). But once you add the necessary signal processing and dither, it was effectively 5-8 bits from the start. And this means it decimated and oversampled, two things we were told it would not do.
DSD was just a low-bit PCM system.
And extreme sampling rates. Whoops! Mikes don't have sensitivities beyond 25kHz, at least those used by audiophile labels.
Sony never provided recorders sampling at 192kHz or beyond. (Maybe these exist now, but I think they are rare and expensive.)
Even though DSD is mostly gone, we're still suffering (if we trust the theories against low-bit recording). Aren't most ADCs used today lower-bit devices? Some labels never used these -they stood with 20-bit. But how many are there? How many, in my view, are doing it right?
Incredibly important and well written post. This is a must read for all here. Clarity.
"Trouble is nobody I am aware of has made available any before and after files. We are always stuck comparing apples and oranges."
2L Recordings offers free downloads of high rez tracks and their MQA versions.
"I personally believe that a true hi-rez file say 24/192 will be better than the MQA version."
I assume from the rest of your post that you have not made such a comparison yourself.
Kal, in MQA discussions I have asked numerous times if anybody was aware of any identical files, with one using MQA. This is the first I have heard of the 2L. It is not a label I normally visit. There is at least one 2L file that I have seen on Tidal, and the content was of no interest.
So, no, I haven't heard the comparison. I have heard enough of the partially unfolded files on Tidal. The 24/192 and DSD files I have heard, both DSD into my Benchmark and converted to 24/176 played through the Lindemann 825, are the best digital I have heard. Yes, it is an opinion [I personally believe], as I am using two different players. Tidal's own player I find to vary, better late at night, which I think might have to do with fewer people tapping their resources. That's not even an opinion, just speculation. As for the hi-rez files, those are being played on JRiver 21, and it could be that I am being influenced by JRiver, which I find to be the better player. So that comparison is also based on apples and oranges.
Are you sure that the 2Ls are the same other than one being MQA processed?
I'm not sure what you mean by the rest of my post? I stated an opinion and then went back to talking about HDCD filters.
Seeing as you are aware of these 2L tracks, I assume that you have done the comparisons. What is your opinion? I personally can't do a true comparison, as I don't own an MQA-equipped DAC.
...but Doug Schneider says he and other audio writers have asked Meridian for even comparisons and got no response.
Shady science (with "de-blurring"), no mastering equipment more than a year after it was promised, and now the head of Ayre says it's just a third party selling a custom filter.
No wonder the skepticism over MQA.
1. The 2L tracks have been available to the public for a long while.
2. I do not know when or who he asked but when I asked for multichannel files, original and MQA, they were sent.
3. Tirade ignored.
On board eh, Kal?
Is there anything in my post that is not factual?
> > Is there anything in my post that is not factual? < <
Is there anything in my post (linked below) that is not factual? Or is it simply a "tirade"?
I responded to J. Phelan's first sentence which was a follow-up to the prior posts.
The rest of his post was extraneous.
I parsed your post incorrectly. My mistake.
This thread was about MQA vs. a failed 'higher-rez' format. Any similarities (should be) allowed to come up.
Maybe I didn't clarify "shady science". If 'temporal blurring' exists, do we have any studies, in the past 35 years, that show it is a problem? If so, who, before Meridian, tried to address it? Seems strange they're the only ones, this late in the game.
> > If 'temporal blurring' exists, do we have any studies, in the past 35 years, that show it is a problem? If so, who, before Meridian, tried to address it? < <
To the best of my knowledge, the chronology of "temporal blurring" (MQA's marketing term for a phenomenon referred to by various names over the years) is as follows:
1) It was first identified in 1984 by engineers from Studer and Soundstream (world's first commercially available digital audio recorder with a 50kHz sampling rate - later morphed into a car audio company) in an AES paper, "Dispersive Models for A-to-D and D-to-A Conversion Systems".
2) Wadia was the first (late 1980s) commercial example of a product that traded a small rolloff in the top octave (ie, frequency domain) for dramatically reduced "ringing" (ie, time domain).
3) Pioneer popularized Wadia's approach in the early 1990s and called it Legato Link. Their production volume was so large that soon all Burr-Brown DAC chips (Pioneer's supplier) included a "slow rolloff" digital filter option.
4) Sakura System's "47 Lab" was the first digital product to eliminate the reconstruction filter in the D/A converter altogether. This was first announced in the Japanese DIY magazine "MJ" (roughly equivalent to "The Audio Amateur") in a series of articles in 1996 and 1997. This represented an extreme exploration in the audible effects of digital filtering, at least on the playback side.
5) Ayre was the first audio manufacturer (in 1999) to allow the user to select between the "standard" approach and Wadia's approach to digital filters. There was a rear-panel switch labeled "Listen/Measure"; the owner's manual recommended the time-optimized "Listen" position and all known reviews agreed.
6) In the late 1990s Sony and Philips developed the SACD format, specifically designed to replace the expiring patents of the CD format. Perhaps the greatest difference between the DSD format and conventional PCM is the elimination of anti-aliasing filters on the A/D side, due to the extremely high sample rate. The D/A side used a 3rd-order (18dB/octave) analog filter at 50kHz to reduce the levels of out-of-band noise that could disrupt or damage downstream equipment, including both electronics and transducers. (For comparison purposes, the 1st-generation Sony CD player used a 9th-order [54dB/octave] analog reconstruction filter.)
This was the basis for one of the prominent claims for SACD - its "superior" pulse response. In my opinion these claims were misleading, as while they accurately showed the superior (narrower) pulse response of the A/D converter, they apparently did not show the broadening of that pulse after it passed through the low-pass filter in the D/A converter. Nonetheless, my personal opinion is that the major sonic differences between DSD and *conventionally-implemented* PCM are due to the filtering used (brickwall anti-aliasing on the A/D side and brickwall reconstruction on the D/A side).
This was another signpost pointing out the important sonic impacts that digital filtering imparts. (As a side note, SACD forbade access to the unencrypted digital data stream - just as with MQA.)
7) In 2004 Peter Craven published an AES paper ("Antialias Filters and System Transient Response at High Sample Rates") on so-called "apodizing" filters, which was commissioned by Meridian. Craven noted that any "pre-ringing" created by a steep linear-phase filter in the A/D converter could be filtered out later in the chain. However, at the popular 44.1kHz sampling rate the apodizing filter exhibited roughly the same total ringing energy as the original anti-aliasing filter. The claimed advantage was that by using a minimum-phase filter, the energy in the "pre-ringing" would be shifted to the "post-ringing".
This would seem to be a much less objectionable effect, as all sounds in nature create "post echoes" due to the presence of any reflective surfaces near the sound source. Psychoacoustic testing has confirmed that the ear/brain is distinctly more sensitive to "pre-ringing" than "post-ringing".
8) Meridian introduced products that included apodizing filters in 2008/9, I believe. Around the same time, having read Craven's 2004 paper and conducting further independent research, Ayre released products allowing the user to select between either an apodizing (sharp minimum-phase) filter or a slow rolloff minimum-phase filter.
9) In 2012 Ayre developed the world's first PCM A/D converter that used a digital filter with no time-domain artifacts whatsoever for dual- and quad-sample rates. Single-sample rates had the minimal amount of time smear, achieved at the cost of a slight rolloff in the top octave. All recordings made with this converter have no "errors" for which MQA could possibly correct.
10) Meridian began working on MQA at least in 2013 and possibly earlier. The original focus appeared to be to reduce the file size of high-res files, largely for storage with portable players. An AES paper on the subject was published in 2014 ("A Hierarchical Approach to Archiving and Distribution").
11) Since the original announcement MQA's goals seemed to have expanded, first to "correct" for errors made by the A/D converter used to create the original files, and later to "compensate" for the signature of the particular digital filter/converter chip (apparently one needs to sign a non-disclosure agreement to know precisely what is involved) in a particular D/A converter. MQA's method of "de-blurring" A/D errors appears to closely resemble Ayre's slow rolloff minimum-phase digital filter from 2009.
The bottom line is that there has been a fairly broad awareness of time-domain issues involved with digital audio for over 30 years. It has been examined in some detail, both in academic journals and by equipment manufacturers. I hope this overview is helpful.
As always, strictly my personal opinions and not necessarily those of my employer or lap-dog.
The term "time-smear" could be said to encompass jitter, too. At one time one thought of jitter in terms of ugly anharmonic tones, but these days some people think of it (timing errors in digital data transmission) as having different, especially spatial effects--smearing images, making them seem less solid. The industry spent a lot of years obsessed by jitter, and even today, some designers (I'm thinking in particular of Ted Smith) talk about it a lot.
Thought of that way, the whole history of digital audio, more or less, could be considered the history of dealing with "time-smear."
> > the whole history of digital audio, more or less, could be considered the history of dealing with "time-smear." < <
Yes. And in an interview I noted that when I look back on my career, even beginning with loudspeaker design various forms of "time smear" have been the fundamental basis for the path I've blazed. Therefore I would take your thought to the next level and remove the qualifier "digital". This only makes sense as the ear/brain is far more sensitive to time-domain information than to amplitude-domain information.
Charles, thanks for this. Makes sense to me--and it's great to have such a perspective on your own career, which, from my perspective--and I say this sincerely, as an audio writer and long-time Ayre owner--is most impressive. I hope I can say that without it seeming weird or forced.
If we can just clarify what 'jitter' is -NOISE.
I've seen measurements show a 10-20dB spike in phase noise. Is this gone? And if so, how? Outboard clocks, according to Stereophile, actually made timing WORSE (as shown in a JA clock review about 10 years ago).
There seems to be a general lack of clarity as to the definition of terms that actually have precise technical meanings. I will attempt to clear these up in a straightforward way:
1) Jitter refers to the timing variations in what should be a *perfectly* steady reference timing signal. While all of us can understand the basic concept, it turns out that specifying the jitter level of a device, signal, clock, or whatever is not as useful as one would hope.
For starters, jitter specifications are given in units of time (eg, picoseconds). This in itself can be misleading, as the absolute amount of jitter (for example, 1 picosecond) can either be a very small percentage of a low-frequency clock (for example, 0.0000000001% of a 1 Hz clock) or a much higher percentage of a high-frequency clock (for example, 1% of a 10GHz clock).
A second problem is whether the jitter is specified as a peak value or an RMS value. Unfortunately there is no fixed relationship between these two, as it depends on the source of the timing variations. For example, if the jitter were due solely to power supply ripple affecting the jitter of a circuit, the peak jitter would be 1.414x the RMS jitter. At the other extreme, if random noise effects in the circuit are the source of the jitter, the peak jitter levels will *on average* be 12x the RMS jitter levels. But as noise is a random event, there will be random times that the peak jitter will be greater than 12x the RMS value and other random times that the peak jitter will be less than 12x the RMS value.
A third problem with jitter is that it tells us nothing of the spectral distribution of the timing errors. Comparing with analog speed (timing) variations: variations in speed below about 5Hz create a characteristic change in the sound that we describe as "wow". Faster speed variations from 5Hz to 20Hz give a different audible effect, and we describe that as "flutter". Even higher speed variations (typically found in analog tape) are called "scrape flutter" and have yet another audible characteristic. Turning back to digital, it is also known that the same levels of jitter concentrated at different frequency ranges will have different audible consequences, but a jitter specification only gives us a single number, with no information as to the spectral distribution of those timing variations.
A fourth problem with jitter is that it does not tell us whether the jitter is correlated with the data or not. For example, if one were to look at the spectral distribution when playing back full-scale tones of various frequencies, and the spectral distribution of the jitter also changed, it would be referred to as "correlated" jitter. If the jitter spectrum did not change, it would likely be due to random noise in the clocking circuits and would be referred to as "uncorrelated" jitter.
2) A much more useful way of describing timing errors in a digital system is to use a phase-noise plot (or graph). This shows the jitter levels at various frequency offsets from the main (carrier) frequency of an oscillator, and is typically used to characterize oscillators and (less commonly) phase-lock-loops. (NB: In the latter case the phase-noise plot of a PLL shows the amount of phase noise of the total system of a clock generator - typically some type of oscillator - and the PLL as a combined system.)
Unfortunately, at this point in time even phase-noise plots have limitations. Only recently has sufficiently sensitive test equipment been designed to measure phase noise with accuracy, and these instruments are too expensive (between $15,000 and $100,000) and too complex to operate to be widely used. There is not enough data to provide generally accepted correlations between the spectral distribution of jitter and its audible effects. Nor are oscillator circuits ever measured for phase noise when using anything but the very best laboratory power supplies - which may not reflect the performance of that same oscillator when used in an actual piece of digital audio equipment.
Another advantage of using a phase-noise plot is that by simply integrating the total amount of phase noise over a specified frequency range, one can calculate the amount of jitter that represents. However, the reverse is not true - one cannot determine the spectral distribution of a phase-noise plot from a jitter specification. This is similar to knowing the THD of an analog circuit compared to looking at an FFT of the analog waveform. The FFT will show how much of the distortion is 2nd harmonic, 3rd harmonic, and so on, and it is known that the ear is more sensitive to high-order harmonic distortions than low order ones. There is simply more information in the FFT plot compared to a single THD number.
3) It should be noted that these concepts are apparently too abstract or difficult for all to understand fully. For example in the linked video a digital designer made at least one inaccurate statement regarding jitter. Specifically that jitter generated by imperfect clocks in the A/D converter and embedded in the digital file could be attenuated (or even eliminated?) by using a low-pass digital filter that attenuates the high frequencies encoded in that file. I can't even begin to claim to understand what he was attempting to describe, except that it is not correct. The designer did note that he uses a master oscillator from Silicon Labs. These are interesting designs in that they do not use quartz crystals as their resonant element, but instead tiny strips of silicon etched into the semiconductor wafer itself (MEMS or Micro-ElectroMechanical Systems). This technology does not have the frequency stability required to reach low levels of phase noise at low frequency offsets from the oscillator (carrier) frequency. Again there is no consensus as to the audible effects of this, except to note that crystal-based oscillators have significantly better measured performance in this area.
As always, strictly my personal opinions and not necessarily those of my employer or you, the reader.
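The "integrate the phase-noise plot to get a jitter number" step described above can be sketched directly. The 10 MHz carrier and the flat -140 dBc/Hz noise floor are invented example values, not measurements of any real oscillator:

```python
# Convert a single-sideband phase-noise plot L(f), in dBc/Hz, into an RMS
# jitter figure by integrating over a chosen offset range. Carrier frequency
# and the flat -140 dBc/Hz floor are invented example values.
import numpy as np

f_carrier = 10e6                              # 10 MHz clock (example)
f_offset = np.logspace(2, 5, 500)             # 100 Hz .. 100 kHz offsets
L_dbc = np.full_like(f_offset, -140.0)        # flat phase-noise floor (example)

# Trapezoidal integration of the linear-power phase noise;
# the factor of 2 counts both sidebands.
lin = 10 ** (L_dbc / 10)
phase_power = np.sum(0.5 * (lin[1:] + lin[:-1]) * np.diff(f_offset))  # rad^2
phase_rms = np.sqrt(2 * phase_power)                                  # rad
jitter_rms = phase_rms / (2 * np.pi * f_carrier)                      # seconds

print(f"RMS jitter over 100 Hz-100 kHz: {jitter_rms * 1e12:.2f} ps")
```

Note that, as the post says, the reverse direction is impossible: the single jitter number computed here cannot be turned back into the spectral shape it came from.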
Another great piece.
It seems hard to believe that digital, so accurate with a fixed-clock (or re-clocking), has timing errors. I would understand it if we were transferring files, where a read-out will show errors.
My belief that 'jitter is noise', comes from the head of Bel Canto, who, in his interviews, said that "jitter is noise that shows up as timing errors".
And white papers from either TI or Analog Devices, which showed noise-spikes (unwanted voltages) that appeared to affect timing. And this correlates with your piece.
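For what it's worth, the "noise shows up as timing errors" idea can be checked with a quick simulation, assuming white Gaussian jitter on the sample clock. The 10 kHz tone and 1 ns RMS jitter are arbitrary example values:

```python
# Simulate how random clock jitter turns into noise on a sampled sine wave,
# and compare against the small-jitter theory SNR = -20*log10(2*pi*f*sigma_j).
# Tone frequency and jitter magnitude are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)
fs, f, sigma_j, n = 192_000, 10_000.0, 1e-9, 1_000_000

t_ideal = np.arange(n) / fs
t_jittered = t_ideal + rng.normal(0.0, sigma_j, n)  # each sample slightly off-time

clean = np.sin(2 * np.pi * f * t_ideal)
jittered = np.sin(2 * np.pi * f * t_jittered)
noise = jittered - clean

snr_sim = 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
snr_theory = -20 * np.log10(2 * np.pi * f * sigma_j)
print(f"simulated {snr_sim:.1f} dB vs theory {snr_theory:.1f} dB")
```

The two numbers agree closely, which is the sense in which noise on the clock line becomes noise in the audio: the error grows with both signal frequency and jitter magnitude.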
As a computer guy, I used to think much the same thing -- how hard is it to get the timing right?
When I was introduced to Ed Meitner, of EMM Labs and Meitner Audio fame, that changed, because when he explained it to me, I figured out it's not only about thinking of the data in terms of 1s and 0s, but, rather, how the hardware and software have to decode a 1 or a 0 from the stream. You see, there isn't a 1 or a 0 to flash a light on -- there's a representation of it in an electrical medium and the system has to interpret that. How accurately in terms of timing that interpretation happens affects bit-to-bit jitter.
It's for these reasons that Meitner is 100% against external clocks, as are others, which I can understand -- because where the clocking matters most is right where the bitstream enters the DAC, so that's where you want it situated.
But wouldn't an outboard clock separate phase noise from sensitive electronics?
Clock reviews, over the years, reported an improvement. Including one a year or so ago on Ultra Audio.
There are many issues with regards to clocks. And preferences among reviewers can vary.
With that in mind, I have not attached an external clock to any DAC I've had as I've never had an external clock that takes one. To-date, though, I'll say that the best DAC I've heard is the new DA2, from EMM Labs. Review coming in July.
Can noise affect timing? I don't know.
Interestingly, numerous recording engineers I know say they have never heard a SOTA DAC sound better with an external clock, only worse.
> > If we can just clarify what 'jitter' is -NOISE. < <
You should watch this video. Sorry, it's not short, and it doesn't answer your questions, but you would benefit from watching it.
The gist: jitter=noise is not current thinking.
Depends what you mean by "noise," I suppose.
"time smear" goes back to the first tape recorders. Wow and flutter. Nothing new.
Many of today's DACs have virtually unmeasurable jitter. There are numerous non-proprietary ways to deal with it, including synchronous upsampling like Bryston does.
> > Many of today's DACs have virtually unmeasurable jitter. There are numerous non-proprietary ways to deal with it, including synchronous upsampling < <
1) There are only two magazines (both print) of which I am aware that publish jitter tests - Stereophile in the US and Hi-Fi News in the UK.
2) Both have upgraded their measurement equipment at least once, making it impossible to compare earlier tests from their more recent tests. Specifically Stereophile was the first to measure jitter, but using a unit from Ed Meitner. The difficulty with this unit was that it required opening up the unit and connecting test leads to specific pins on the DAC chip itself. This wasn't so bad in the early '90s as all of the parts were large through-hole devices and most of them were R-2R "ladder" DACs that were sensitive to jitter on the word-clock pin.
By the end of the '90s there were new DACs, almost all surface-mount devices that are extremely difficult to probe without damaging the DUT. In addition, the ladder DAC chips were changed such that jitter was only important on the bit-clock pin, and new delta-sigma DAC chips were available that are sensitive to jitter on the master-clock input pin (that drives the modulators in the output stage). All of this led Stereophile to switch to a new analyzer, developed by Paul Miller (currently editor of Hi-Fi News). This machine obviated the need to open up the DUT, as it only measures the analog output signals. It is made from plug-in A/D cards made by National Instruments that fit into a desktop computer, along with audio-specific test routines developed by Miller. The problem with that early machine was that the A/D converters used were only 16-bit and they were inside an extremely noisy environment (a desktop computer with dozens of high-speed clocks and switching power supplies). Jitter tests made with this machine typically show artifacts of the test equipment in addition to actual jitter in the DUT.
Many years ago Stereophile switched to an Audio Precision which has a noise floor low enough to accurately measure audio equipment without the artifacts of the test equipment interfering. Hi-Fi News still uses Paul Miller's equipment, but at some point the National Instruments cards were upgraded to use 24-bit A/D converters and improved shielding. It's unclear if that setup performs equally to Stereophile's Audio Precision, but it likely is quite close.
3) In addition to the constantly improving measurement equipment, there has been a parallel trend in many of the D/A converters that are tested. Specifically they often incorporate Asynchronous Sample Rate Converters (ASRC), either separately or built into many modern DAC chips. When this ASRC is employed, almost any D/A converter will exhibit "textbook perfect" results on the J-TEST jitter test used by both Stereophile and Hi-Fi News. However my personal experience is that ASRC can dramatically improve the measured performance of a converter while at the same time significantly degrading its audible performance. YMMV.
4) Unlike ASRC, so-called "synchronous upsampling" does nothing to reduce jitter (measured or audible). It can affect the sound quality, as it is simply a specific type of digital filter. As such they will all affect the sound quality differently depending on the parameters of the filters used.
The bottom line is that I agree that when comparing jitter measurements from now to 10 (or even 20) years ago, it definitely appears that the equipment has improved. I am much less clear that most equipment has actually improved when it comes to audible jitter, especially when ASRC is employed.
As always, strictly my personal opinion and not necessarily that of my employer or pharmacist.
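As a side note, the core operation inside a sample rate converter can be sketched with SciPy's polyphase resampler. This is synchronous rational resampling; a true ASRC continuously estimates a drifting ratio between input and output clocks, but the filtering principle is the same:

```python
# Sketch of sample-rate conversion from 44.1 kHz to 48 kHz using a fixed
# 160/147 rational ratio. A real ASRC tracks a continuously drifting ratio;
# this only illustrates the core polyphase filtering operation.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44_100, 48_000
t = np.arange(fs_in) / fs_in              # one second of audio
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone

y = resample_poly(x, up=160, down=147)    # 44100 * 160/147 = 48000

# The tone should land on the same absolute frequency at the new rate.
peak_bin = np.argmax(np.abs(np.fft.rfft(y)))
print(len(y), peak_bin)                   # one second at 48 kHz; tone at bin 1000
```

The resampler is itself just another digital filter, which is why (as noted above) it can change the measured jitter picture and the sound at the same time.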
Very educational thank you.
I stand corrected on synchronous upsampling. Interestingly, on the Bryston DACs, engaging it makes the sound a bit softer, but very pleasing, and a bit more textured. This is across the board.
The upsampling is user-selectable. I think that is a nice idea.
> > Interestingly, on the Bryston DACs, engaging it makes the sound a bit softer, but very pleasing, and a bit more textured. < <
It appears that you are referring to the Bryston BDA-3. According to the owner's manual, "the internal sample rate converter upsamples incoming 44.1kHz and 88.2kHz digital audio to 176.4kHz. All 48kHz and 96kHz digital audio upsamples to 192kHz. The Upsample feature does not affect HDMI or USB."
That unit uses dual mono AKM AK4490 DAC chips, which have built-in 8x digital "oversampling" filters. (The correct technical term is interpolation filters.) When "upsampling" is engaged (again, the correct technical term is "interpolation"), a separate interpolating digital filter is inserted prior to the interpolating digital filter built into the DAC chip - nothing more and nothing less. There are three things to note in this situation:
1) Concatenating digital filters is done all the time, almost always to save money. Virtually all 8x interpolating digital filters are a concatenation of three 2x interpolating digital filters. Nothing new here.
2) The first digital filter in the chain has the greatest sonic impact. It is possible that the sonic differences are simply due to the different characteristics between the first stages of the external ("upsampling") and internal ("oversampling") interpolating digital filters.
3) The other factor that may come into play is the rate at which the modulators in the DAC chip are operating. Depending on the specific internal architecture of the DAC chip, the modulators may or may not be operating at different rates when presented with different input rates (eg, single-, dual-, or quad-rate signals). I've not seen any that are so affected (the modulators typically operate at the master clock rate set by the local crystal oscillator), but it is conceivable that there are exceptions with which I am unfamiliar.
The bottom line is that the "upsampling" feature inserts an extra digital filter into the signal chain. This is very much the situation with MQA, as well. The degree to which the sound is affected (for better or worse) in either situation is simply due to the effect of the extra digital filter.
Just out of curiosity, how would you describe the magnitude of the sonic difference created by the addition of the "upsampling" digital filter in the Bryston DAC?
As always, strictly my own personal opinions and not necessarily those of my employer or other digital engineers.
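The "an 8x filter is three concatenated 2x stages" point in item 1 can be sketched directly. Filter length and window are arbitrary illustrative choices, not any DAC chip's actual coefficients:

```python
# Sketch of 8x interpolation built from three concatenated 2x stages:
# each stage zero-stuffs by 2, then lowpass-filters at the old Nyquist
# frequency. The 63-tap filter is an arbitrary illustrative choice.
import numpy as np
from scipy.signal import firwin

def interpolate_2x(x):
    """One 2x interpolation stage: zero-stuff, then half-band lowpass."""
    up = np.zeros(2 * len(x))
    up[::2] = x
    h = 2 * firwin(63, 0.5)          # gain of 2 restores the original level
    return np.convolve(up, h, mode="same")

fs = 48_000
x = np.sin(2 * np.pi * 1000 * np.arange(4800) / fs)   # 0.1 s of a 1 kHz tone

y = x
for _ in range(3):                   # 2x * 2x * 2x = 8x interpolation
    y = interpolate_2x(y)

peak_bin = np.argmax(np.abs(np.fft.rfft(y)))
print(len(y), peak_bin)              # 8x the samples; tone still at 1 kHz (bin 100)
```

As the post notes, the first stage in such a cascade does the heavy lifting sonically, since it operates closest to the audio band.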
Ok, I have no problem admitting that some of the dac signal processing you lay out here is over my head!
But to answer your question, on the BDA-3 DAC, the difference in sound when the upsampling is engaged is pretty stark. It is almost like the signal is being passed through a lush tube buffer. Better? Definitely different.
I also wonder how hardware upsampling like the Bryston scheme differs from upsampling at the server stage, like with Roon, or even HQPlayer. Upsampling to DSD in HQPlayer was a big fad recently.
Now with Roon providing that capability, along with the previously discussed powerful DSP tools they added in the last update, to me MQA looks more obsolete with every week that goes by.
EDIT: I am also reminded of the Sony HAP players, which have a user-engageable "Remastering" process. Digital filtering, of course.
> > Upsampling to DSD in HQPlayer was a big fad recently. < <
There is a potential for this to improve the performance of some particular implementations of delta-sigma DAC chips. Depending on the internal architecture, it is possible to run the modulator on the output stage at higher rates, which will definitely change the sound - possibly for the better.
As always, strictly my personal opinion and not necessarily those of my employer or baby-sitter.
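A toy first-order delta-sigma modulator shows the principle behind running a 1-bit (or few-bit) output stage at a high rate. This is only a sketch of the concept, not any shipping DAC's modulator:

```python
# Toy first-order delta-sigma modulator: a slow sine is converted to a
# +/-1 bitstream, and averaging the bitstream recovers the input. Only a
# sketch of the principle; real DAC modulators are higher-order designs.
import numpy as np

def delta_sigma_1st(x):
    """First-order delta-sigma: integrate the error, quantize to +/-1."""
    acc, out = 0.0, np.empty(len(x))
    for i, u in enumerate(x):
        v = 1.0 if acc >= 0 else -1.0    # 1-bit quantizer
        acc += u - v                     # integrator accumulates the error
        out[i] = v
    return out

osr = 64                                 # oversampling ratio
n = 64 * osr
x = 0.5 * np.sin(2 * np.pi * np.arange(n) / (n / 4))   # 4 slow cycles
bits = delta_sigma_1st(x)

# Crude decimation filter: moving average over one oversampling period.
recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")
print(np.corrcoef(x, recovered)[0, 1])   # close to 1
```

Raising the modulator rate pushes the quantization noise further out of band, which is one plausible mechanism for the sonic change described above.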
My Master 7 DAC can be run NOS or with up to 8x oversampling. I prefer NOS, as it sounds more natural than oversampling.
> > I prefer NOS, as it sounds more natural than oversampling < <
I would suggest that it all boils down to the particular oversampling (interpolation) filter used. A decade ago there were many DACs introduced with NOS (filterless) designs. I was curious and when Ayre developed the ability to create custom digital filters, the first test we tried was NOS. This replaced a combination of an external 4x "upsampling" filter feeding the 8x "oversampling" filter built into the DAC chip.
There were significant improvements in many areas - the midrange in particular was very pure and natural, but the frequency extremes seemed to be not quite up to the performance level of the broad midrange band (~200Hz to ~5kHz). I then went down a rabbit hole of various interpolation rates (4x, 8x, and 16x), window functions (Kaiser, Taylor, Gaussian, etc.), multiple parameters affecting the shape of the rolloff curve, and finally various dithering algorithms.
In the end I felt that we had improved upon all of the sonic advantages of NOS without losing anything. But I would agree that in general NOS (filterless) is an overall improvement over the typical filters built into DAC chips. YMMV.
As always, strictly my personal opinions and not necessarily those of my employer or Pee-Wee Herman.
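The window-function rabbit hole mentioned above can be illustrated by comparing two Kaiser-windowed interpolation filters. The tap count and beta values here are arbitrary examples, not Ayre's actual parameters:

```python
# Sketch of how one filter-design parameter (Kaiser window beta) trades
# stopband rejection against transition width in an 8x interpolation
# lowpass. Tap count and beta values are arbitrary illustrative choices.
import numpy as np
from scipy.signal import firwin

def stopband_peak_db(h, stop_edge=0.25, nfft=8192):
    """Worst-case stopband magnitude in dB (frequency normalized to Nyquist=1)."""
    H = np.abs(np.fft.rfft(h, nfft))
    f = np.linspace(0, 1, len(H))
    return 20 * np.log10(H[f >= stop_edge].max())

cutoff = 1 / 8                                       # 8x interpolation lowpass
h_gentle = firwin(129, cutoff, window=("kaiser", 5.0))
h_steep = firwin(129, cutoff, window=("kaiser", 12.0))

print(f"beta  5: {stopband_peak_db(h_gentle):.1f} dB")
print(f"beta 12: {stopband_peak_db(h_steep):.1f} dB")  # deeper stopband, wider transition
```

Every such parameter choice moves energy around in both the frequency and time domains, which is why the tuning process described above involves so much listening.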
Ahh...a marketing term. And a problem that seems to be solved!
A great piece, Charles. I'm glad 35,000 people read the Asylum - or at least I think it's that many.
Ayre had the most to do with bringing zero-feedback amplifiers to the market. And now, the most to do with (using) minimum-phase filters, because more people can work around Ayre's price points (vs., say, an 808-series player).
However, there is no right or wrong, as you have noted. Look at the filter in Chord's latest DAC - a country-mile long.
It is all too obvious what this thread, as a whole, is about but I am not interested in engaging in that issue. My posts were in response to punctate factual matters and, as should be clear, not about the rest.
The only hard fact you gave was the 2L recordings. Helpful. But we need a few more than that! It's not nice of Meridian to leave us fending for ourselves. They could do more.
But this point is extraneous -you take things hard, esp. with new audio formats. In defense of audiophiles, didn't they embrace SACD? They did, with no hard words, for years after it came out.
Then title availability became an issue, then CD sound improved (putting the new format in question), then how many were true DSD sources, etc.
SACD titles offer some great music, no question. And it's not a failure -it's still around. We just don't want another SACD scenario, that's all...
I grant you that MQA (not Meridian) could and should do more but the rest of your post, again, has nothing to do with anything I have said.
Hi Charles. I'll take this, speaking for myself only. You wrote in that other post,
> > To the degree that MQA changes the sound of an existing digital file only depends on two things - the digital filters used and the dither algorithm used. Both of these have direct audible consequences on digital replay. < <
I think it's pretty obvious - I doubt you'd disagree - that, potentially, it depends on other things: specifically, everything done to the file at the sending end. I don't know if this is relevant to the McGrath recordings used in the demo, but in some cases (like the 2L Nielsen), MQA does something akin to remastering. Likely more relevant is that MQA claims to do things involving digital filters at the sending end as well. This too - not just the dither and DAC-side filters - could affect the sound. Make the send-side and receive-side filters complementary - something that's not possible in the wild - and it's plausible that you can do things you can't when you only control the receiving end. Exactly what, I'm not sure.
Edited for clarity.
Good description. These two were more 'talked about' than fully developed systems.
Other formats promised better sound and made it past the 'talk' phase:
- Open-reel tape, late 1950s. Better sounding than LP, but weak market demand meant not many issued recordings. Playback equipment was costly. Gone by the late 70s.
- DAT, late 1980s. Though timed just right with LP fading in mass retail, demand still wasn't there. And it wasn't much better sounding than LP, the standard at the time.
- DSD, as heard on SACD. It has lasted 17 years so far, but most issues are sourced from high-bit PCM files. Even in true DSD, sound quality was not an improvement over CD (although at first, with cheaper players, it seemed to be better).
Then, for more than a decade after its release, SACD layers couldn't be ripped to files. There was never a 'killer app', and with file sizes 30 times larger than 16/44, they would never be offered as downloads or streams.
- High-rez downloads. Not an actual format, but taken as such. Many of these were simply upsampled CD. But even the real ones struggled to sound better than CD. Some were clearly better, but this was likely due to a new mastering, especially if the CD version itself was poorly mastered.
...but it's the same thing. SACD could not be ripped to (hard-drive) servers. It can be now...
"......... with file sizes 30 times larger than 16/44, would never be offered as download or stream."
Well, they are available for download.
Also, reel-to-reel tape and machines were available long before the 1970s.
I just thought reel-to-reel was promoted more in the early 70s, as perfectionist-audio was growing (fast). But another (slight) edit.
Actually, my first impressive exposure to stereo was on reel-to-reel tape in the 1950s.
I wasn't born until the early 70s. I just thought tape was esoteric and rare until the hi-fi boom of the early 70s. I'm sorry it never took off - it was J. Gordon Holt's reference!
....seems to me you have no clue what you are talking about.
DAT better sound than vinyl? Try again.
LP was never a great source. Digital tape was a clear improvement over the analog cassette of the 70s, especially with Japanese-made tape decks.
Only my opinion, of course....
... MQA can encode any PCM rate, not just 24/192 or 24/176, because it's hierarchical. It can even support DXD rates of 24/352.8. In fact, 2L's MQA test files and "Masters" album on Tidal are 24/352.8 encodings.
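As a loose illustration of the "folding" idea discussed in this thread - high-rate detail tucked into otherwise-unused low bits of the container, then recovered on "unfold" - consider this toy round trip. This is emphatically not MQA's actual (proprietary) encoding; it only shows why a folded container stays playable as ordinary PCM.

```python
def fold(base16, detail8):
    """Pack 8 bits of 'high-band detail' under a 16-bit sample in a 24-bit word.

    Toy model only: real hierarchical codecs do far more than bit-packing.
    """
    assert 0 <= base16 < (1 << 16) and 0 <= detail8 < (1 << 8)
    return (base16 << 8) | detail8

def unfold(word24):
    """Recover the 16-bit baseband sample and the folded 8-bit detail."""
    return word24 >> 8, word24 & 0xFF
```

A legacy 24-bit DAC that never unfolds simply plays the top 16 bits and hears the detail bits as low-level noise, which is roughly why undecoded MQA files still play on ordinary gear.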
... PS Audio is not doing any MQA code work themselves at all. ALL MQA work is being done by Conversdigital, the South Korean OEM supplier they buy the Bridge2 board from. In fact, Conversdigital joined the MQA bandwagon and PS Audio merely hopped along for the ride after the fact!... Don't believe any of the hyperbole put out by Paul McGowan.
I am not sure what "hyperbole" you're referring to.
But you are correct. The core module around which the Bridge II is built is from a South Korean company, Conversdigital. That's no secret. Never has been. We place that module at the core of the Bridge and then populate the balance of the PCB with memory and a Digital Lens to output to the DAC through I2S.
Converse, with our help, has implemented a number of features including Tidal, MQA, etc. We work directly with MQA on "tuning" the specific firmware version we use in the Converse module found in Bridge II because MQA requires it. They have a DirectStream in Cambridge and use their tuning methods to match DirectStream's impulse response. Converse then programs that into the module.
If by hyperbole you're referring to my many posts about how our efforts to release Huron, the new operating system for DirectStream, were held up by MQA programming, that is true. Work is required on our side as well as Converse's. Remember, the Bridge has to communicate with the DAC and has to report its status to our front panel touch screen.
I am sorry you seem to feel there's some kind of BS or conspiracy to defraud our customers—which is simply not true. Hopefully this helps you understand what's going on and ferrets out the truth. I am always available via email for anyone with specific questions or needing help.
Thanks for the opportunity to respond.
For the record, I don't believe there is any BS or conspiracy.
What folks need to get through their heads, especially the MQA brigade, is that if you are going to support Tidal streaming, you are by default going to support MQA. It is a no-brainer.
One comment, if I may: I find the number of firmware updates you have instituted a little off-putting. It seems excessive to me and tells me that the final sound is a moving target. Just my two cents.
Your two cents are always welcome. The last firmware update we did, Torreys, was a year ago. Ted's been working hard for the last year making improvements and figuring out new ways to get better sound.
Without question you are correct. It is a moving target as are all DACs, which is why manufacturers that keep up with technology change models. The beauty of our system is that owners of our DAC get to upgrade for free.
But, it's clearly not for everyone. And, to be sure, not everyone upgrades either.