Audio Asylum Thread Printer
In Reply to: RE: Measuring digital audio qualities of bit-perfect playback with Diffmaker's correlation depth posted by Windows X on July 10, 2016 at 09:52:00
2. Prepare aligned master files with silence added. For basic demonstration, I'll make 5 samples of aligned/before/after wav files with Audacity at 24/96 format (10ms latency).
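The silence-padding step above can also be scripted rather than done by hand in Audacity. Below is a minimal stdlib-only sketch that prepends a given amount of digital silence to an uncompressed PCM WAV file; the file names and the silence length are hypothetical examples, not the poster's actual files.

```python
# Prepend digital silence to a PCM WAV file (stdlib only).
# Works for any sample width (16/24-bit) since a signed PCM zero
# sample is all-zero bytes.
import wave

def add_leading_silence(src_path, dst_path, silence_ms):
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    # One frame = sampwidth bytes per channel.
    n_silent = int(params.framerate * silence_ms / 1000)
    silence = b"\x00" * (n_silent * params.sampwidth * params.nchannels)
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(silence + frames)

# e.g. add_leading_silence("master.wav", "aligned_3500ms.wav", 3500)
```

Scripting this makes it easy to generate the five differently padded copies reproducibly instead of nudging clips in an editor.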
Without more explanation, I don't fully understand what you are doing in this step. Why do you need to add silence to the test signals for this kind of test? Are you taking the same recording and just adding 5 different amounts of silence?
Aligned master
parameters: -3.5sec, 0.000dB (L), 0.000dB (R)..Corr Depth: 175.6 dB (L), 174.0 dB (R)
parameters: -4.5sec, 0.000dB (L), 0.000dB (R)..Corr Depth: 168.5 dB (L), 168.6 dB (R)
parameters: -5.5sec, 0.000dB (L), 0.000dB (R)..Corr Depth: 167.4 dB (L), 167.5 dB (R)
parameters: -6.5sec, 0.000dB (L), 0.000dB (R)..Corr Depth: 166.3 dB (L), 167.0 dB (R)
parameters: -7.5sec, 0.000dB (L), 0.000dB (R)..Corr Depth: 172.5 dB (L), 176.1 dB (R)
Average: 0.000dB (0.000-0.000)..Corr Depth: 170.35 dB (166.3-176.1)
Median: 0.000dB..Corr Depth: 168.55 dB
The depth dropped to roughly half of the perfect-data figure but stayed above 150 dB. With a 9.8 dB swing between minimum and maximum, it seems safe to assume a threshold of about 5% for evaluation.
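For readers unfamiliar with the metric: the "correlation depth" figures above are, roughly, the level of the residual left after subtracting the two files, expressed in dB below the reference. The formula below is my reading of how DiffMaker defines it, not its actual source code:

```python
# Sketch of a null/correlation depth calculation: how far below the
# reference the difference (null) signal sits, in dB. This is an
# assumption about DiffMaker's definition, for illustration only.
import math

def null_depth_db(reference, test):
    """reference/test: equal-length sequences of float samples."""
    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))
    diff = [r - t for r, t in zip(reference, test)]
    residual = rms(diff)
    if residual == 0.0:
        return float("inf")  # a perfect null has unbounded depth
    return 20 * math.log10(rms(reference) / residual)
```

On this definition, a pair of files differing only by a 0.0001x amplitude error nulls to 80 dB, which gives a feel for how small the residuals behind the ~90 dB figures below really are.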
I guess you're comparing the original master file with the "aligned master" file, which is identical to the original master but with silence added? Do you have time alignment and/or gain alignment enabled in Audio DiffMaker? Looking at the parameters output, I'm guessing time alignment is enabled but gain alignment is not. The strange thing is that Audio DiffMaker's time alignment usually doesn't produce nice round numbers like that.
Before Fidelizer
parameters: -1.581sec, 0.001dB (L), 0.001dB (R)..Corr Depth: 90.6 dB (L), 91.5 dB (R)
parameters: -1.184sec, 0.001dB (L), 0.001dB (R)..Corr Depth: 87.2 dB (L), 87.3 dB (R)
parameters: -1.018sec, 0.001dB (L), 0.001dB (R)..Corr Depth: 88.1 dB (L), 88.1 dB (R)
parameters: -946.4msec, 0.001dB (L), 0.001dB (R)..Corr Depth: 88.3 dB (L), 86.3 dB (R)
parameters: -686.3msec, 0.001dB (L), 0.001dB (R)..Corr Depth: 90.2 dB (L), 87.6 dB (R)
Average: 0.001dB (0.001-0.001)..Corr Depth: 88.52 dB (86.3-91.5)
Median: 0.001dB..Corr Depth: 88.1 dB
The real-world results came in with a fairly narrow range: only 5.2 dB between the minimum and maximum correlation depth. At least it's more consistent than the aligned-master result.
After Fidelizer
parameters: -563.4msec, 0.001dB (L), 0.001dB (R)..Corr Depth: 104.0 dB (L), 95.9 dB (R)
parameters: -1.025sec, 0.001dB (L), 0.001dB (R)..Corr Depth: 93.5 dB (L), 94.0 dB (R)
parameters: -1.286sec, 0.001dB (L), 0.001dB (R)..Corr Depth: 87.2 dB (L), 87.3 dB (R)
parameters: -1.025sec, 0.001dB (L), 0.001dB (R)..Corr Depth: 88.1 dB (L), 88.2 dB (R)
parameters: -856.4msec, 0.001dB (L), 0.001dB (R)..Corr Depth: 90.4 dB (L), 87.6 dB (R)
Average: 0.001dB (0.001-0.001)..Corr Depth: 91.62 dB (87.2-104.0)
Median: 0.001dB..Corr Depth: 89.3 dB
It started well at over 100 dB, but the later trials seem to degrade a bit, because I also opened Chrome to chat on Facebook during the experiment as a daily-usage test. Overly strict tests aimed at the highest-quality results may invite data faking from people who can't run the test properly.
With Fidelizer's optimizations, we measured a 3.1 dB increase in average and a 12.5 dB increase in maximum correlation depth, with general improvements in the other metrics too. I conclude that there is a measurable improvement with bit-perfect playback in digital audio.
If your test configuration really is a software loopback, and both the playback and recording sides are bit perfect, then the only thing that can vary from trial to trial is the amount of digital silence at the beginning and end of the recordings. And any effect that Fidelizer has on the quality of your computer's digital output couldn't be tested in this way.
What I find interesting is that the correlation depth went from >165dB to <100dB with the inclusion of the recording & playback loop. I can think of two possible reasons for this:
a. Either the playback or recording chain is not bit perfect. You could spot check this by comparing some number of actual sample values in the original and recorded wav files.
b. Audio DiffMaker's time alignment algorithm is fairly sensitive to the amount of zero padding at the beginning and/or end of files. You could check this by manually trimming the files. Assuming this is the case, your test is really just characterizing the accuracy of the time alignment algorithm.
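Both of those checks can be scripted. The sketch below (stdlib only; all file names hypothetical, and 16-bit PCM assumed for the trimming helper even though the thread's files are 24-bit) compares raw frames to spot-check (a), and strips leading/trailing digital silence for (b):

```python
# Two diagnostic helpers for the checks suggested above (illustrative).
import struct
import wave

def first_mismatch(path_a, path_b, skip_frames_b=0, n_frames=96000):
    """Check (a): return the index of the first frame where the two
    files differ (after skipping path_b's leading padding), or None
    if the compared stretch is bit identical."""
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        frame_size = a.getsampwidth() * a.getnchannels()
        b.readframes(skip_frames_b)  # discard the recording's lead-in
        fa, fb = a.readframes(n_frames), b.readframes(n_frames)
    for i in range(min(len(fa), len(fb)) // frame_size):
        lo, hi = i * frame_size, (i + 1) * frame_size
        if fa[lo:hi] != fb[lo:hi]:
            return i
    return None

def trim_silence(src_path, dst_path):
    """Check (b): rewrite the file with leading/trailing all-zero
    frames removed (16-bit PCM assumed)."""
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        raw = src.readframes(src.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)
    step = params.nchannels
    nonzero = [i for i in range(0, len(samples), step)
               if any(samples[i:i + step])]
    if nonzero:
        start, stop = nonzero[0], nonzero[-1] + step
        raw = struct.pack("<%dh" % (stop - start), *samples[start:stop])
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(raw)
```

If `first_mismatch` returns None over the whole overlap, the loopback really is bit perfect and explanation (b) becomes the likely one; feeding DiffMaker the trimmed files would then show how much of the ~90 dB figure is alignment error rather than signal difference.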
Follow Ups:
Dave_K is right. Obtaining a correlation depth of ~90 dB is typically what one sees with DiffMaker's time alignment algorithm, even when I use my standard test signal through a decent DAC/ADC with bit-perfect playback.
For example, testing WAVE/FLAC/AIFF: "Do Lossless Compressed Audio Sound The Same?"
I am frankly a bit surprised that with a software loopback the results are so low! But a Corr. Depth difference of 3 dB is well within the ~4 dB standard deviation I found, again even with an actual DAC/ADC measurement: "The DiffMaker Audio Composite"
BTW: What piece of music did you use for this test?
-------
Archimago's Musings: A 'more objective' audiophile blog.
Edits: 07/17/16
''I'm not Diffmaker's developer so I can't answer all those questions, so I replied only to what I 'really' know and 'tried'''
This seems to sum up his post on the merits of his 'optimisations'.
Pure software 'war' games on audio quality, without identifying the real transfer functions across hardware components, just lead to 'soft' conclusions that can be argued from dusk till dawn.
All hardware runs on software, so a person with enough knowledge can understand the importance of a pure-software environment test. Well, I don't mind if you want to try Diffmaker with actual hardware and share your results. The most wasted effort in science is raising arguments and assumptions without adequate knowledge or action to back them up. Ah, I don't count being a liar and impostor as part of science. ;)
Regards,
Keetakawee
Edits: 07/12/16
this simple question from TL
''1. How many times did you repeat each experimental condition, and if you did so, how consistent were the null depths?''
How valid are your assertions?
Sorry to be pedantic but, if you read Tony's question carefully, it's meaningless. One can guess his meaning but guessing is not a recommended approach (outside of climate science).
I say this because the question gave me the impression that it was less about advancing the discussion, more about putting the OP in his place. Hopefully, you can reassure me I'm wrong despite all my years in this ward of the asylum. Perhaps I'm becoming institutionalised.
has explained adequately below.
The original post appears to be a smokescreen for promoting a product - it was not comprehensible as a rigorous test of audio performance. If it was, then the software basically changed the OS's audio stack behaviour IN the DIGITAL domain.
Do the rules allow manufacturers to start threads?
promoter
1) The O.P. wasn't promoting his product.
2) It's getting tiresome watching the mess you leave behind in this forum with your unwarranted attempts to moderate what others post. We have a real moderator for that role.
so soon after the warning!
Anyone who cannot understand the reason behind my question is categorically unqualified to reach any conclusion from using DiffMaker or any other numeric tool to conduct physically based experiments. I learned as much in high school physics labs. Repeat the same experiment and you will get different results each time. These are caused by random noise or other things that are not understood. If the differences between test conditions are not hugely larger than the differences between repeated runs of the same condition, then noise or other defects in the experimental setup will make it impossible to reach any valid conclusions.
In general, even if you do dozens of tests with condition A and dozens with condition B and there is nothing close to an overlap of data points between the two, you still do not know for sure that this is because the two conditions are really different, unless you have blinded your test apparatus to the two test conditions. So, for example, if you run all the tests on condition A first and then all the tests on condition B, even if all the results are -90 dB +-1 dB for condition A and all the results are -100 dB +- dB for condition B, you still cannot reach a valid conclusion that the choice of A or B affected the result, because something else in your system or environment could have changed between the testing of condition A and the testing of condition B. (For example, a neighbor might have switched off a noisy electrical appliance.)
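To make that concrete with the numbers from this thread: pooling the left/right depths gives ten data points per condition, and a crude separation criterion (my own illustration here, not a formal significance test) is whether the gap between the condition means exceeds the trial-to-trial spread:

```python
# Per-trial correlation depths (dB) taken from the thread's tables,
# left and right channels pooled. The 2-sigma separation rule is an
# illustrative criterion, not a proper statistical test.
import statistics

before = [90.6, 91.5, 87.2, 87.3, 88.1, 88.1, 88.3, 86.3, 90.2, 87.6]
after  = [104.0, 95.9, 93.5, 94.0, 87.2, 87.3, 88.1, 88.2, 90.4, 87.6]

def separated(a, b, k=2.0):
    """True if the means differ by more than k times the larger
    within-condition standard deviation."""
    gap = abs(statistics.mean(a) - statistics.mean(b))
    spread = k * max(statistics.stdev(a), statistics.stdev(b))
    return gap > spread

print(separated(before, after))  # False: the 3.1 dB gap sits inside the spread
```

The 104.0 dB outlier alone inflates the "after" spread to over 5 dB, so the 3.1 dB difference in means is nowhere near clearing even this generous bar, which is exactly the point about repeatability.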
I am not going to waste any more time explaining this. It is either crystal clear, or people need to seriously rethink their understanding of scientific experiments. A final point: anyone who uses any experimental apparatus without a complete understanding of how it works and what its limitations are is a fool. This was another lesson from my high school physics lab work. We used an electric stopwatch to time a pendulum so as to measure the acceleration of gravity. My lab partner and I got inconsistent results (and incorrect results) because it turned out that we were actually using the pendulum to measure the power line frequency. We went with the instructor to the local power company and looked at their charts of power frequency, and saw that it varied by several percent around the nominal 60 Hz frequency, mostly as a result of varying load over different times of day. We were able to correlate our raw experimental data with the power company data, and ended up with a grade of A+ on the final result.
Years later, my cynicism came to the fore, when I realized that many of the other students could not possibly have gotten the correct results, and therefore they were probably cheating. With the benefit of hindsight, there would probably have been lab data to prove that these kids had been cheaters and back then (late 1950's and early 1960's) they might even have been thrown out of school. Now, I have little confidence in any "scientific data" since most "scientists" today are as biased by getting good funding grants as school kids were to get good grades. This covers "climate science" and big pharma's "science".
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Edits: 07/12/16
Thanks, Mr. Lauck.
Wish there were more like that!
Without more explanation, I don't fully understand what you are doing in this step. Why do you need to add silence to the test signals for this kind of test? Are you taking the same recording and just adding 5 different amounts of silence?
: To see the results of added silence with Diffmaker's default configuration so I can see its variations.
I guess you're comparing the original master file with the "aligned master" file, which is identical to the original master but with silence added? Do you have time alignment and/or gain alignment enabled in Audio DiffMaker? Looking at the parameters output, I'm guessing time alignment is enabled but gain alignment is not. The strange thing is that Audio DiffMaker's time alignment usually doesn't produce nice round numbers like that.
: As I'd like to make things simpler for other people to test themselves, I'm using Diffmaker's default configuration as I stated before. Default configuration has both Time and Gain alignment.
If your test configuration really is a software loopback, and both the playback and recording side is bit perfect, then the only thing that can vary from trial to trial is the amount of digital silence at the beginning and end of the recordings. And any effect that Fidelizer has on the quality of your computer's digital output couldn't be tested in this way.
: I'm not really sure. Why don't you give it a try? You can try improving this test by running with different settings and share your result here. :)
Regards,
Keetakawee