Audio Asylum Thread Printer
In Reply to: Re: You might be interested in this ... posted by Christine Tham on March 2, 2007 at 16:58:49:
"Generate a 16-bit test waveform (mine was two sine waves sine-modulated in frequency and amplitude) in 44.1, upsample to 96, then downsample to 44.1."If you did so in 16 bit then indeed it is no wonder you got bad results. Now please try again in 32 bit.
"I am happy to be proved wrong, but I suspect it's not as feasible as you think."
At least one such has been in existence for more than four years now. I think this proves feasibility.
As for building one myself, I'd love to. Who provides the funding?
But I agree we've had a disconnect re speed/real-time/batch processing. I don't care a jot about real-time processing, especially not SRC. This indeed deviates somewhat from the thread's beginning, but then, the distinction real-time <> not-real-time has nothing to do with the core of the problem: how to do SRC properly.
Follow Ups:
*** If you did so in 16 bit then indeed it is no wonder you got bad results. Now please try again in 32 bit. ***

Again, please refer to the original context of this post - using ASRC to reduce jitter in a DAC. The signals usually being processed *are* 16-bit.
*** At least one such has been in existence for more than four years now. ***
Details? And is it actually used as an ASRC in a DAC? Remember - the question of feasibility is in the context of reducing jitter in a DAC, mere existence is not sufficient proof.
*** the core of the problem: how to do SRC properly. ***
No - if you refer to the original context of the discussion - the problem is not "how to do SRC properly". This has never been the topic of discussion (and I thought we were all agreeing that theoretically there should be no issues with SRC done right) - the issue was ASRC as a method of reducing jitter. What is being traded off here is accuracy, as the ASRC needs to happen in real time, and in a way that doesn't inject too much noise back into the circuit. Not an easy problem to solve.
Some people believe that the penalty that is paid in terms of loss of accuracy through ASRC is too high to pay for the jitter reduction benefit. The examples on src.infinitewave.ca give an idea of the magnitude of potential inaccuracies.
Personally I can see both sides of the argument. Although I would prefer a "bit perfect" DAC - in reality, examples like the Benchmark DAC and the Lavry DAC show that the use of ASRC can result in good sound (hearsay - I haven't actually personally heard either of these).
(I wrote up something lengthier, and then Firefox froze ... oh well, here's the short version.)

"The signals usually being processed *are* 16-bit."

You force Audition to work with 16b internal precision. Craps out. Read the file, convert to 32b internal, then do all SRC, then back to 16b. Artefacts gone. Try it.
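In script form, that workflow looks roughly like this (a sketch only, using the soundfile and scipy packages; the file names are placeholders):

```python
# Sketch of the workflow: read a 16-bit file into floats, do all SRC at
# floating-point precision, and only go back to 16 bit at the very end.
# "in.wav" / "out.wav" are placeholder names.
import soundfile as sf
from scipy.signal import resample_poly

x, fs = sf.read("in.wav", dtype="float64")   # 16-bit samples -> float64
assert fs == 44100

up = resample_poly(x, 320, 147, axis=0)      # 44.1 kHz -> 96 kHz, still float
down = resample_poly(up, 147, 320, axis=0)   # 96 kHz -> 44.1 kHz, still float

# only now truncate back to 16 bit (ideally with dither, omitted here)
sf.write("out.wav", down, 44100, subtype="PCM_16")
```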
"Details? And is it actually used as an ASRC in a DAC? Remember - the question of feasibility is in the context of reducing jitter in a DAC, mere existence is not sufficient proof."
If you read closer you'll know by now that *I* never discussed here anything in the context of jitter or jitter reduction (not one of my pet obsessions, have too many others ;-). Another disconnect? Maybe.
We talked a lot about SRC and concluded that many implementations, even those for non-real-time apps, are utterly faulty. This is worrying. It is also good, because it indicates fields for future improvements.

I referred to the existing FPGA as it proves the feasibility of 40b+ accuracy (in fact it's 64b internally) audio FIR filtering with 1000+ taps. I never claimed otherwise. Yes, it is only used in a DAC as an SSRC. I feel confident that using the same technology (trust me, I know more than a thing or two about ASIC and FPGA design, and image processing for that matter) can yield an ASRC that runs rings around something like the AD1896. Luckily, as it would cost a couple of orders of magnitude more.
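Just to give a sense of scale (this says nothing about the actual FPGA design, which I obviously can't post here): a 1000+ tap FIR with double-precision accumulation is easy enough to express in software, e.g.:

```python
# Illustration only: a long (1000+ tap) windowed-sinc lowpass FIR applied
# with float64 (double) accumulation. Nothing here reflects the FPGA
# implementation itself; it just shows the scale of such a filter.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 96000
taps = firwin(2047, cutoff=20000, fs=fs, window=("kaiser", 12.0))

rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, fs)        # one second of test noise

y = lfilter(taps, 1.0, x)             # convolution, accumulated in float64
print(len(taps), "taps; output RMS:", np.sqrt(np.mean(y**2)))
```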
"the problem is not "how to do SRC properly". This has never been the topic of discussion "
Well, since you pointed to src.infinitewave.ca and since that site indicates that SRC is almost never done right, methinks it became part of the discussion, not?
*** You force Audition to work with 16b internal precision. Craps out. Read the file, convert to 32b internal, then do all SRC, then back to 16b. Artefacts gone. Try it. ***

I'm sorry, but this is irrelevant, and also incorrect. I didn't "force" Audition to work with 16-bit internal precision (I can't anyway, because Audition always works internally at 32-bit fp precision. This has been true since Cool Edit days).
Remember: the original context was you claimed you have never seen SRC artefacts in Audition. I gave an example in which you can experience SRC artefacts in Audition. The example is also valid in the context of the topic of discussion.
Rather than exhorting me to "try it", why don't *you* try it, and show us the results. That way, *you* can make sure you are not "forcing" Audition to do anything.
*** If you read closer you'll know by now that *I* never discussed here anything in the context of jitter or jitter reduction (not one of my pet obsessions, have too many others) ***
I'm sorry, but what you are obsessed about or not is irrelevant. You joined a topic about ASRC in DACs, and usually it's polite to try and stay within topic. If you want to talk about something else, start a new topic.
*** I referred to the existing FPGA ***
You still haven't provided any details. Which existing FPGA?
*** We talked a lot about SRC and concluded that many implementations, even those for non-real-time apps, are utterly faulty. ***
No, you concluded that they were faulty. I drew a different conclusion.
*** Well, since you pointed to src.infinitewave.ca and since that site indicates that SRC is almost never done right, methinks it became part of the discussion, not? ***
Can you show me exactly where on that site it states that "SRC is almost never done right"?
That was an extrapolation and speculation made by you. All the site did was provide graphs. My view is that the graphs are consistent with the hypothesis that SRC implementations are a trade off between accuracy and speed. This trade off is important in the context of the topic of discussion (ASRC in DACs). If you want to discuss a different topic, such as "faulty" SRC implementations and the competency of the implementers, start a new thread :-) I may even join you there!