Home Computer Audio Asylum

Music servers and other computer based digital audio technologies.

RE: Food for thought

JE,

Would it be fair to say that recording in DSD, or at the highest possible sample rate, would produce a digital master that is orders of magnitude more accurate than the alternative, and that (given the correct equipment) D/A conversion of such recordings will produce a much more accurate analogue output?

Presuming you concur...

Any analogue signal is continuous, but the digital signal from which it was made is not, leaving gaps between samples which should be observable on a spectrograph with sufficient resolution. At any sample rate below the speed of the analogue signal, these gaps will be transferred to the analogue output as silences (or as noise of some kind).

Because the analogue signal travels at something approaching the speed of light, and because of the limits of the listener's hearing and of his equipment, none of these gaps will be heard by the listener unless they are transferred as noise and left unfiltered. (If transferred as noise, one would presume they would be of a constant nature, and therefore identifiable and susceptible to filtering, leaving the gaps as silence.)

What this means is that the speaker is playing a broken signal. Where there is a silence, the speaker merely waits an imperceptible amount of time (0.000021 secs for 48kHz)[1] for the next bit of playable signal to arrive.
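
To put a number on that for a few common rates, here is a minimal Python sketch (my own illustration, not anything from the thread) that prints the interval between successive samples:

    # Interval between successive samples (the "gap" discussed above)
    # at some common sample rates.
    SAMPLE_RATES_HZ = [44_100, 48_000, 96_000, 192_000]

    for rate in SAMPLE_RATES_HZ:
        period_s = 1.0 / rate          # time between samples, in seconds
        print(f"{rate:>7} Hz -> {period_s:.9f} secs between samples")

At 48kHz this gives 0.0000208 secs, matching the 0.000021 figure above, and each doubling of the sample rate halves the interval.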

However, where there is a more accurate analogue signal, the gap will be much shorter and will be followed by signal that would not have been reproduced at a lower sample rate.

Similarly, a Red Book CD pressed from a high-res master will contain different information to a Red Book CD made from a lower-res master of the same track.

There will be audible and measurable differences between higher and lower-res music, even if you aren't actually playing it back at a higher resolution.

Note 1:

Whether this is actually an imperceptible silence should be easy to test with the correct software, by introducing the same gap into a constant tone (a sketch of such a test follows below). My calculations a few posts up suggest that 0.00005 secs is the shortest perceivable time for a 20kHz signal, and 0.001 secs for a 1kHz signal; the inverse should also hold, with the equation 1/x Hz giving the minimum audible time for a signal of x Hz. i.e. 0.000021 secs of silence is less than half of what is needed to produce an audible change at 20kHz, and is nearly fifty times shorter than can be perceived at 1kHz.
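
For anyone who wants to run that test, here is a minimal Python sketch of it, assuming a 1kHz tone at 48kHz with a single-sample gap punched into the middle; the file name gap_test.wav and all the parameters are my own choices, not anything specified in the thread:

    # Sketch of the listening test described above: a steady 1kHz tone
    # with a single 0.000021 sec gap of silence inserted at the midpoint.
    import math
    import struct
    import wave

    SAMPLE_RATE = 48_000     # Hz (assumed)
    TONE_FREQ = 1_000        # Hz (assumed)
    DURATION_S = 2.0         # total length of the test tone
    GAP_S = 0.000021         # the gap under test (one sample at 48kHz)

    total_samples = int(DURATION_S * SAMPLE_RATE)
    gap_samples = max(1, round(GAP_S * SAMPLE_RATE))   # = 1 sample here
    gap_start = total_samples // 2

    frames = bytearray()
    for n in range(total_samples):
        if gap_start <= n < gap_start + gap_samples:
            amplitude = 0.0                            # the silence
        else:
            amplitude = 0.5 * math.sin(2 * math.pi * TONE_FREQ * n / SAMPLE_RATE)
        frames += struct.pack('<h', int(amplitude * 32767))   # 16-bit PCM

    with wave.open('gap_test.wav', 'wb') as wav:
        wav.setnchannels(1)             # mono
        wav.setsampwidth(2)             # 16-bit
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

Play gap_test.wav back and listen for a dropout at the one-second mark; lengthening GAP_S should show roughly where a gap becomes audible.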


