Home Computer Audio Asylum

Music servers and other computer based digital audio technologies.

RAM playback and Latency settings

Why are we very strange? (my Japanese is non-existent)

To be a RAM player, wouldn't you want more samples in RAM as opposed to fewer?

All samples are in RAM, as per the wav (or flac or other) content file. For a CD wav, that would be ~600MB loaded into RAM.
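The ~600MB figure is easy to sanity-check: CD audio is 16-bit (2 bytes) stereo at 44.1kHz. A minimal sketch of that arithmetic (the function name here is mine, not anything from foobar):

```python
# Rough RAM footprint of a CD-quality wav loaded whole:
# 44100 samples/s * 2 channels * 2 bytes per sample.
BYTES_PER_SEC = 44100 * 2 * 2

def wav_megabytes(minutes):
    """Approximate size in MB of an uncompressed 16/44.1 stereo wav."""
    return BYTES_PER_SEC * minutes * 60 / 1024**2

print(round(wav_megabytes(60)))  # roughly 600MB for a 60-minute album
```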

Foobar does playback via sound buffers (or chunks) which are prepared in RAM. This allows for DSP processing like SRC upsampling. The buffer size is determined in foobar's settings (preferences > output > buffer length). Buffer size can be specified in time, samples or bytes (foobar uses time).

Samples are streamed to the soundcard from this sound buffer - for audio, 2 or more are used. While one buffer is being played, another is being readied. Sound buffers are prepared (during playback) from wav (or other content) file samples and processed by SRC upsampler to the new sample rate (in my case 24/192). This processing is not time critical and with enough CPU capability, sound buffers are prepared ahead of time.
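The ping-pong scheme described above (play one buffer while the next is being readied) can be sketched as follows. This is purely illustrative pseudologic in Python, not foobar's or any driver's actual API:

```python
# Toy sketch of double buffering: two sound buffers alternate roles.
# While buffer A is handed to the soundcard, buffer B is prepared
# (this is where SRC upsampling and other DSP would run).

def double_buffered_playback(chunks):
    """Return the interleaved prepare/play schedule for a list of chunks."""
    schedule = []
    buffers = [None, None]            # the two ping-pong buffers
    buffers[0] = chunks[0]            # prime the first buffer before playback
    schedule.append(("prepare", 0))
    for i in range(len(chunks)):
        nxt = (i + 1) % 2
        if i + 1 < len(chunks):
            buffers[nxt] = chunks[i + 1]      # ready the next buffer ahead of time
            schedule.append(("prepare", i + 1))
        schedule.append(("play", i))          # current buffer streams to soundcard
    return schedule

print(double_buffered_playback(["a", "b", "c"]))
```

Note that every "prepare" lands before the corresponding "play": with enough CPU headroom, preparation is never on the critical path.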

For example, a 100ms foobar buffer setting at 24/192 will require 19200 samples (each sample contains both L & R signal amplitudes). Alternatively, that would be 300KB (19200 x 2 x 8) - foobar processes in double precision (that's 8+8 bytes per sample). It's very important when using SRC and no other DSP that you set foobar's buffer to the minimum of 100ms, as more would cause heavier CPU demand and potentially prevent buffers from being prepared ahead of time. A wav file at 16/44.1 would provide 4410 samples for each 100ms buffer.
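The buffer arithmetic above, written out (function names are mine, for illustration only):

```python
# Buffer-size arithmetic: frames in a buffer of a given length, and the
# RAM it occupies when foobar processes in double precision
# (8 bytes per channel per sample frame).

def buffer_frames(rate_hz, length_ms):
    """Sample frames held by a buffer of length_ms at rate_hz."""
    return rate_hz * length_ms // 1000

def buffer_bytes(rate_hz, length_ms, channels=2, bytes_per_sample=8):
    """Bytes of RAM for that buffer in double-precision processing."""
    return buffer_frames(rate_hz, length_ms) * channels * bytes_per_sample

print(buffer_frames(192000, 100))   # 19200 samples at 24/192
print(buffer_bytes(192000, 100))    # 307200 bytes, i.e. 300KB
print(buffer_frames(44100, 100))    # 4410 samples at 16/44.1
```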

I know in the recording industry they want very low or no latency on music coming in... but for music going out, wouldn't we want to use RAM to buffer the music by increasing the samples? Yes/no? Maybe so?

48 samples would require less time than 64 samples: 0.25ms is less time than 0.33ms...


This is very important: both Recording and Playback are realtime events.

All playback occurs from RAM. Consider a 100ms foobar sound buffer being played. Samples converted to 24 bits (or 32 bits, depending on the soundcard driver) from this buffer are streamed in real time to the soundcard. This is where latencies come in. At 24/192, 48 samples give 0.25ms (48/192000) latency and 64 give 0.33ms (64/192000), which is longer. You must see latencies as the amount of sound data being transferred. So why is less better? For me, it's two-fold:
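The latency figures above come straight from latency = samples / sample rate. A quick check (illustrative code, names mine):

```python
# Hardware buffer latency: number of samples divided by the sample rate.

def latency_ms(samples, rate_hz):
    """Latency in milliseconds for a given buffer depth and sample rate."""
    return samples / rate_hz * 1000

print(round(latency_ms(48, 192000), 2))  # 0.25 ms
print(round(latency_ms(64, 192000), 2))  # 0.33 ms
```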

  1. PCI: See P18 of AOB Computer Transports v0.3. PCI prefers high volumes of much smaller payloads (data transfers). (Note: newer mobos run USB and Ethernet outside of the PCI bus, but they still share traffic with the North/South chipset, so traffic sharing here must be avoided.)
  2. ASIO: Also prefers lots of small payloads.


Hence, there's excellent synergy between soundcards and ASIO. The most optimal setting is the smallest stable latency, as this plays to the respective strengths of both PCI and ASIO, i.e. lots of small data transfers. This lays the foundation for real-time bit-perfect delivery and reduces jitter. See Bit Perfect Measurement & Analysis for cMP at 24/96 output using SPDIF coax. How many transports have you seen capable of 24 bits resolution (technically 23.5+ bits due to rounding error in measuring)?

In the measurement performed, the source is set to 32 samples latency (RME 9652) and the target to 48 samples (Juli@). Both machines are configured to full cMP specifications, with the target running Steinberg's Cubase LE recording software.

