Audio Asylum Thread Printer: Get a view of an entire thread on one page
In Reply to: RE: The beans have been spilled! posted by Bibo01 on April 28, 2012 at 21:10:33
What I figured. After forming my opinion by listening, I did a simple test to see if the files were different: I compressed both with FLAC and noticed slightly different lengths in the compressed files. This piqued my curiosity, so I did a null test with Sound Forge and uncovered the differences. Interestingly, the last time I tried listening for differences in dither at the 24-bit level (-139 dB) I was unable to hear them, but I respected the reports, however implausible, by people like Charles Hansen and Romy the Cat that these differences were audible. Now, with new speakers and amplifiers, it was possible to hear differences at this level, but I doubt very much it would have been possible for me to pass an ABX test.
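The null test described above can be sketched in Python with NumPy (my own illustration; the FLAC step is omitted, and synthetic arrays stand in for the decoded files): two otherwise identical tracks that differ only in their 24-bit dither null to a residual near -138 dB full scale.

```python
import numpy as np

def null_test(a, b):
    """Subtract two equal-length sample arrays and report the peak
    residual in dB relative to full scale (1.0)."""
    diff = a - b
    peak = np.max(np.abs(diff))
    return -np.inf if peak == 0 else 20 * np.log10(peak)

# Simulate two "identical" tracks that differ only in their 24-bit dither:
rng = np.random.default_rng(0)
music = 0.5 * np.sin(2 * np.pi * 1000 * np.arange(48000) / 48000)
lsb24 = 2.0 ** -23                             # one 24-bit LSB at full scale
dither_a = (rng.random(48000) - 0.5) * lsb24   # rectangular dither, version A
dither_b = (rng.random(48000) - 0.5) * lsb24   # rectangular dither, version B

print(null_test(music + dither_a, music + dither_b))  # ~ -138 to -139 dB
```

An identical pair nulls to silence; the dithered pair leaves a residual bounded by one 24-bit LSB, matching the -138.47 dB figure discussed in the thread.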
I preferred A over B. After playing A (and recognizing the recording as familiar) I played B and experienced the "entire soundstage collapsing," the "recording being completely trashed," etc. Unfortunately, after switching back and forth a few times (as would have been necessary to obtain statistical significance in a blind test) everything started sounding the same, although perhaps the cymbal crash was still different. I have found this effect every time I do repeated A/B comparisons. I suspect it's a case of acoustic memory filling in the differences. (A few years ago one of the dCS people discussed this memory effect and its impact on blind tests at an AES seminar in London; the talk was on the web for a while.)
I am not convinced that I and others are hearing differences in sound at the -139 dB level. IMO it is more likely that one is hearing differences in artifacts created by one's DAC, which may create greater differences in the analog output. If one wanted to prove this speculation, it would be possible (but difficult) to do so by using two DACs and mixing their analog outputs, feeding one with the original music for both A and B while feeding the other with zeros or the dither signal as appropriate. Both types of DAC (ladder and sigma-delta) have artifacts. Ladder DACs magnify small differences if their resistor values are not exactly correct. Sigma-delta DACs are affected by the chaotic operation of their modulators, which exhibits sensitive dependence on the input. It may be that the average differences are, indeed, at -139 dB, but it does not follow that there won't be short-term variations at much higher levels. If this test proved that people could hear differences, a further test would be required using completely separate DACs, amplifiers and speakers for the music and dither signals before one could reasonably conclude that people were hearing these differences rather than system artifacts.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Edits: 04/29/12
The above is an NTD (Null Test Difference) plot of the DAC analog output, made by my friend Tom Gefrusti. The graph is zoomed in heavily in order to show the difference.
According to him, the harmonic distribution at the DAC outputs changes continuously, even more than the change between A and B.
In practice, there is a randomization of the harmonic content, so much so that the difference can be greater from A to A again than from A to B.
http://uptiki.altervista.org/viewer.php?file=yjrw04w7q9nmymricrjy.jpg
Odd...I can still see the image in the above message...
Why does a difference in the digital files that is bounded at -138.47 dB below 0 dBFS manage to affect a graph around -100 dB?
Perhaps I misread or misunderstood the graph, but I believe this is the nub of the issue. I doubt seriously that people can hear changes in noise that is -139 dB below peak levels, but I know from personal experience that I can just barely hear sound that's at -97 dB below 0 dBFS, and that's using monitor gain settings that I actually use for one or two recordings in my library (both Mahler symphonies). For solo harpsichord recordings the monitor gain would be turned down another 15 dB and this noise would be inaudible to me in my room.
I suspect some artifact of the DAC is magnifying the small difference and turning it into slightly different noise. (Technically, this is called noise modulation.) If my suspicion is correct, this is a serious indictment of the DACs in question, including mine (as I heard a difference). This calls for a new test of DAC performance.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
This depends on the NTD measurement.
If you are thinking in terms of the audibility of what's happening, you should not look at the Y scale.
If you look at the frequency of the segment analyzed (around 18 kHz), the track has a signal level close to -93 dB at those frequencies. That's why, together with other external factors, some small DAC distortions appear. As you know, although it's 24-bit, at -93 dB the waveform is not perfectly symmetrical. So, all considered, the measurement went pretty well...
The interesting difference is the difference between A and B. This is what the listener might hear. If the DAC had complete linearity, this difference would be at -138 dB and would not easily be detected by the measurement setup that was used, which was only good to about -100 dB. The measurement setup is not adequate for measuring the actual differences in the two signals, as can be seen from how little difference there is between the curves. (Also, without knowing the details of the FFT, e.g. the window size and shape, it is not possible to correctly interpret levels in frequency spectra. This is easily seen by running a spectrum plot on a given waveform using different FFT sizes and windows.)
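The FFT point can be demonstrated numerically. This sketch (my own illustration, not the measurement setup under discussion) shows that the apparent per-bin level of a fixed noise floor drops about 3 dB per doubling of FFT size when the spectrum is normalized for sine amplitude:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.standard_normal(1 << 16) * 1e-5    # white noise at a fixed RMS level

def mean_bin_db(x, n):
    """Average FFT-bin magnitude in dB, Hann window of size n,
    normalized so a full-scale sine would read about 0 dB."""
    w = np.hanning(n)
    segs = x[: (len(x) // n) * n].reshape(-1, n) * w
    mag = np.abs(np.fft.rfft(segs, axis=1)) / (w.sum() / 2)
    return 20 * np.log10(mag.mean())

for n in (1024, 4096, 16384):
    print(n, round(mean_bin_db(noise, n), 1))  # same noise, shifting "floor"
```

The noise level never changes; only the analysis parameters do, which is why a spectrum plot's Y axis cannot be read as an absolute noise level without knowing the FFT details.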
If one wants to see the magnitude of the difference in the analog output of the DAC with the two waveforms, it will be difficult to measure differences that are well below the noise floor of the system, e.g. 38 dB below -100 dB. While this is difficult, it is not impossible: one can take multiple measurements and average over them, but this requires careful design and calibration of the test equipment, as the averaging has to be synchronous. Null test measurements tend to be unstable, because they are a difference of large quantities. (This is easily seen when doing null-test differences of sample rate converters, where sub-sample offset errors often prevent deep nulls.)
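The synchronous-averaging idea can be sketched as follows (my illustration, with arbitrary levels): averaging N time-aligned captures leaves the repeating signal intact while the uncorrelated noise drops by 10*log10(N) dB.

```python
import numpy as np

rng = np.random.default_rng(2)
n, captures = 4096, 256
signal = 1e-4 * np.sin(2 * np.pi * 17 * np.arange(n) / n)  # tiny repeating signal

# Each capture: the same signal, plus independent noise 40 dB above it.
acc = np.zeros(n)
for _ in range(captures):
    acc += signal + 1e-2 * rng.standard_normal(n)
avg = acc / captures

noise_rms_single = 1e-2
noise_rms_avg = np.sqrt(np.mean((avg - signal) ** 2))
gain = 20 * np.log10(noise_rms_single / noise_rms_avg)
print(gain)  # ~24 dB = 10*log10(256)
```

The catch, as noted above, is the word "time-aligned": the averaging only works if every capture is sample-synchronous with the others.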
In general, the test equipment needs to be ten times better than the equipment under test, otherwise evaluation of experimental results is difficult. This is a problem when evaluating "state of the art," cost-no-object equipment, because the device under test is going to be as good as the measurement equipment. For measuring high-quality equipment, specialized test procedures are required. Not only is it difficult to do these measurements, we don't understand what the differences are that ought to be measured. We are looking for a needle in a haystack and all we have is a dim light.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Just to give you an example, for this optimized NTD a series of double acquisitions was made in the analog domain with signals at:
1) -90 dB
2) -95 dB
3) -100 dB
4) -110 dB
5) -115 dB
6) -120 dB
7) optional... in the digital domain (the bottom one).
Each one received a small modification in the spectrum, precisely at 1,000 Hz and 10,000 Hz.
Even at this low level the NTD clearly shows the modified segment at 1,000 Hz and 10,000 Hz. From the picture we can evaluate differences at least down to -117 dB... in analog.
Sorry, I'm not able to figure out from the description what the graphs mean and how they were obtained. Without this understanding, I'm afraid I can't make an intelligent comment.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
What I posted had no direct reference to the test of the two files.
It was to show the ADC quality of the NTD system and its capability to "select" signals.
Link to image was busted...
Your friend is right. With most, if not all, DACs the differences from moment to moment are likely to be much greater than the difference between A and B. However, over a period of time these differences average out, and it would certainly be possible for analog measurement apparatus to detect the differences by using sophisticated averaging techniques. What's rather more amazing is that some audiophiles can do this by ear.
If you do null tests in the analog domain, you will get different results each time, because the digitization of the analog output happens with a separate clock. The relative phase of the two clocks constantly changes, invalidating the null tests. It may be possible to slave the two clocks, but the jitter noise will still be different.
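The clock-offset problem can be illustrated with a sketch (hypothetical numbers): even a hundredth of a sample of timing error between the two captures limits the null on a 1 kHz tone to roughly -58 dB.

```python
import numpy as np

fs, f = 48000, 1000.0
t = np.arange(4800) / fs                       # 100 ms of a 1 kHz tone
ref = np.sin(2 * np.pi * f * t)

def null_depth_db(offset_samples):
    """Peak residual of ref minus a copy delayed by a fraction of a sample."""
    shifted = np.sin(2 * np.pi * f * (t - offset_samples / fs))
    return 20 * np.log10(np.max(np.abs(ref - shifted)))

for off in (1e-3, 1e-2, 1e-1):                 # fractions of one sample period
    print(off, round(null_depth_db(off), 1))   # each 10x offset costs ~20 dB
```

This is why analog-domain nulls are so much shallower than digital-domain ones: the residual is dominated by the timing error between the clocks, not by the actual difference between the signals.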
With ladder DACs the waveform output by the DAC will not vary from play to play, except for noise, voltage and temperature changes. With a delta-sigma DAC there is a new source of randomness: the state of the delta-sigma modulator, which acts as a pseudo-random number generator. If the modulator is given the same set of input bits on two separate occasions, it will not produce the same set of output bits to convert to analog. That's because the modulator is a digital feedback circuit which retains a history of its previous activity. This is easily shown to be the case if one experiments with these modulators in software. One gets a completely different pattern of output noise when feeding them two constant input signals that differ by one in the least significant bit. The ability to hear these differences (especially in the DC case) must be taken as a fault in the DAC, not a "resolving" feature.
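A minimal first-order sigma-delta sketch shows the effect (real DAC modulators are higher-order, and the 1-LSB step here is a 16-bit LSB rather than 24-bit just to keep the run short): two constant inputs differing by one LSB produce averages that agree, but bit patterns that eventually diverge.

```python
import numpy as np

def sigma_delta(x):
    """First-order sigma-delta modulator: integrate the error between
    the input and the fed-back 1-bit output."""
    v, out = 0.0, []
    for s in x:
        y = 1.0 if v >= 0 else -1.0
        out.append(y)
        v += s - y
    return np.array(out)

lsb = 2.0 ** -15                       # one 16-bit LSB (for a short demo run)
n = 20000
a = sigma_delta(np.full(n, 0.3))
b = sigma_delta(np.full(n, 0.3 + lsb))

print(np.mean(a), np.mean(b))          # both ~0.3: the averages agree
print(int(np.sum(a != b)))             # nonzero: the bit patterns diverge
```

The feedback integrator accumulates the 1-LSB offset until a quantizer decision flips, after which the two output streams follow different trajectories even though they encode essentially the same value.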
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Edits: 04/29/12
Hello, everybody.
First, thanks to the OP for this exercise. It is very important to do such things in order to understand where we are on the hi-fi scale, as far as the general quality of reproduction is concerned.
Next, when I was performing this A/B, I was using a cue I had created with just these two files, playing over and over. I started in the early morning with very good energy and a fresh head. In the middle of the test the monitor (an old CRT display) I used for cmp went dead, and though I have a working machine close by, I was too lazy to re-plug another monitor into the cmp machine, so the test went almost blind. I marked the files in my mind as Good and Bad, went to make coffee, take a shower, all sorts of things, so at times I could not hear and follow exactly which file was playing. I was using very cheap 6" two-way powered monitors, if it matters, but every time I came back after a playback cycle had completed, I heard the difference, and pretty soon could tell: yeah, this one is the good fellow. When I finally switched on another monitor, I WAS RIGHT.
So this only means that after some training our mind LEARNS to discriminate between very subtle things. Also, it shows that our cmp (a tweaked one) is resolving enough to unveil such differences easily.
Serge.
I agree that the memory thing can be misleading and tends to fog things, but not to the degree that you can't discern the obvious, when you try to be really attentive and place sonic "hooks" in your mind on what to pay attention to next time.
Also, as somebody who has spent a LOT of time with graphics, I know about different dithering algorithms and how discernible they can get, especially in grayscale-to-monochrome conversion, so technically there is nothing wrong with this test, especially as I think our ears are LESS forgiving than our eyes.
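The audio analogue of that graphics experience can be sketched as follows (my own illustration, not part of the test files): quantizing a low-level tone to 16 bits without dither leaves correlated distortion lines, while TPDF dither trades them for a benign noise floor.

```python
import numpy as np

fs, n = 48000, 48000
t = np.arange(n) / fs
x = 1e-3 * np.sin(2 * np.pi * 1000 * t)       # low-level 1 kHz tone (-60 dBFS)

q = 2.0 ** -15                                # one 16-bit quantization step
rng = np.random.default_rng(3)
tpdf = (rng.random(n) - rng.random(n)) * q    # triangular-PDF dither, +/-1 LSB

plain = np.round(x / q) * q                   # quantized without dither
dithered = np.round((x + tpdf) / q) * q       # quantized with TPDF dither

def spur_db(y):
    """Largest non-fundamental spectral line, dB re full scale."""
    spec = np.abs(np.fft.rfft(y * np.hanning(n))) / (n / 4)
    spec[980:1021] = 0                        # notch out the 1 kHz fundamental
    return 20 * np.log10(spec.max())

print(spur_db(plain), spur_db(dithered))      # distortion lines vs. noise floor
```

The undithered version leaves discrete spurs correlated with the music; the dithered version replaces them with a lower, uncorrelated floor, which is exactly the trade the thread's dither test is probing by ear.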
Perhaps the difference was only half-obvious to me because I was using only half of the secret sauce, i.e. cPlay alone. :-)
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar