Original Message
BITS ARE BITS! - look elsewhere
Posted by petehh on June 7, 2008 at 12:37:19:
There's no arguing with the fact that bits are bits and that digital is self-timing - you just spit the bits out at a uniform rate and you're done. The high-end people are at fault for grasping at bogus explanations for what is wrong with CD. There are lots of reasons why CD isn't perfect, even though bits are bits.
For example:
a) imperfect microphones and microphone placement
b) A/D converters
c) D/A converters
d) too few bits (e.g. the brick wall effect)
e) misreading bits - I'm told this happens, though it's hard to believe. My PC never misrepresents a dollar figure, which it would do every so often if it were misreading bits. Here's what I don't understand: why doesn't someone put out a player that shows how many times error correction is used during playback? That would be extremely enlightening.
f) inadequate sampling rate
g) lack of a smooth gradation between levels - This is the interesting one. I'm not an engineer, so maybe someone who is can help me out. Consider this: because loudness is a function of the amplitude of the wave, in a sense analog permits an infinite number of gradations within its dynamic range, whereas digital limits you to 2 to the 16th or whatnot. Possibly the ear can detect the step-function quality of digital. I wonder if there's a way of testing this, e.g. by making an analog recording that emulates the step-function effect of digital.
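The step-function idea in that last point can actually be put to numbers. Here's a minimal Python sketch (the 997 Hz test tone and one-second length are my own choices for illustration, not anything from the post) that quantizes a full-scale sine wave to the 2^16 levels of CD audio and measures how loud the resulting "staircase" error is relative to the signal:

```python
import math

BITS = 16
HALF = 2 ** (BITS - 1) - 1   # 32767 positive steps, as in 16-bit PCM
RATE = 48000                  # samples per second (one second simulated)
FREQ = 997                    # test-tone frequency in Hz

signal_power = 0.0
noise_power = 0.0
for n in range(RATE):
    x = math.sin(2 * math.pi * FREQ * n / RATE)   # "analog" value in [-1, 1]
    q = round(x * HALF) / HALF                    # nearest 16-bit step
    signal_power += x * x
    noise_power += (x - q) ** 2

snr_db = 10 * math.log10(signal_power / noise_power)
print(f"quantization SNR: {snr_db:.1f} dB")
```

The result lands near the textbook figure of 6.02*B + 1.76, about 98 dB for 16 bits - so the steps are there, but they sit roughly 98 dB below a full-scale signal, which is one way to frame the question of whether the ear could ever detect them.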
Thanks for feedback.
- PH