Audio Asylum Thread Printer: get a view of an entire thread on one page
I have my system set up with a Squeezebox Touch, which I really like, but I'm having continual dropout problems. I've been troubleshooting, but cannot get rid of them. I'm convinced the Touch is not the problem but the "system", so instead of attempting marginal improvements, I'm interested in identifying "the best" setup.
Currently I have LMS running on my daughter's Win 7 PC, connected thru ethernet to the wireless router, wirelessly to a wi-fi bridge which connects thru ethernet to the Touch, using the analog outs into my pre. I have Soundcheck Toolbox 3 installed, so WAV and FLAC files are not compressed when sent over the network to the Touch, demanding more from the network – a likely contributor to the dropouts. My daughter uses that computer for many things while I'm listening (streaming video from the internet), another likely contributor to dropouts, but even when that computer is doing nothing and there are no other users on the wi-fi network, I still get dropouts. While I have a laptop only I use, it's really my work laptop and I cannot install non-work-related software.
Here are some options I’m considering, and willing to consider other options you propose:
1) Hardwire an Ethernet network between the SB Touch and the LMS server. I cannot do this from my daughter's PC given location/cable routing, but taking potential wifi issues out of the equation seems sensible.
2) Use my wife's laptop as LMS server. It's a new Sony Vaio with 64-bit Win 7. Music library on an external USB hard drive. Connect it to the ethernet in 1). The Touch would get internet access through the laptop accessing the internet through wifi. Using my wife's laptop will have a cost, if you know what I mean, so from that point of view I'd rather go with 3).
3) Use an old Sony Vaio PC as LMS server: 2 GHz Pentium 4 with only 480 MB of RAM, running Win XP. Music library on an external USB hard drive. While it's a very old PC, I can strip all non-essential software/functions and set it up to do only the server task. Connect it to the ethernet in 1). Or even connect the PC directly to the Touch through a CAT5 cable, without a router in between? Internet access would be as in 2).
4) I’ve seen comments about using a Vortexbox, but I’m hesitant to buy yet another box only to try out. Maybe instead of buying a Vortexbox I would be better off just getting a new laptop for me…but I start down a slippery slope: with a laptop, do I need a SB Touch? Anyway, don’t need to go there now.
Your input very welcomed!
Follow Ups:
Yes, you should be able to use that old computer as a dedicated server. The memory is a little low, but as long as you don't have other programs starting at boot time you should be OK. You want to make sure nothing else is running and stealing memory.
Yes you can run with just a wire between the LMS and the Touch, it will work. You will most likely have some weird messages showing up because it cannot connect over the internet to mysqueezebox.com. It tries at boot time and every so often to "phone home", as long as the firmware on the Touch matches the server version you won't need to connect. A simple cable is all that is required.
You will need to set static IP addresses on the computer and the Touch. It's best to do this on the computer first. It's a good idea to use the same subnet as your existing network but stay outside the DHCP range of your router. Then when you plug in the Touch and turn it on, it will eventually bring up a window to set a static IP address. For example, if your existing network uses 192.168.1.100-200 as its DHCP range, you could use say 192.168.1.75 for the server and 192.168.1.80 for the Touch.
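To make the address-picking advice concrete, here is a small Python sketch (the helper name is mine, and the addresses are just the examples from the post) that checks a candidate static IP sits inside the subnet but outside the router's DHCP pool:

```python
import ipaddress

def outside_dhcp_range(addr, subnet, dhcp_start, dhcp_end):
    """Check that a candidate static address sits inside the subnet
    but outside the router's DHCP pool."""
    ip = ipaddress.ip_address(addr)
    net = ipaddress.ip_network(subnet)
    start = ipaddress.ip_address(dhcp_start)
    end = ipaddress.ip_address(dhcp_end)
    return ip in net and not (start <= ip <= end)

# DHCP pool .100-.200, as in the example above
print(outside_dhcp_range("192.168.1.75", "192.168.1.0/24",
                         "192.168.1.100", "192.168.1.200"))   # True: safe
print(outside_dhcp_range("192.168.1.150", "192.168.1.0/24",
                         "192.168.1.100", "192.168.1.200"))   # False: collides
```

Any address that passes this check will not collide with leases the router hands out.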
Another option which will get around these issues is to combine the two setups, hardwire the server to the Touch, but also connect it to your wireless bridge so you are still connected to your existing network for DHCP and internet connectivity. The data still goes over the wire between the server and the Touch, but you can still get to mysqueezebox.com when needed.
In this scenario both the dedicated server and the Touch use wires to plug into the wireless bridge, which acts as a switch between them and also connects them to the existing network. Both the server and the Touch get internet and DHCP over the wireless. Everything is all connected together, but you don't need a wire between the server and the rest of the network. (I presume that is the big issue: that the listening location is a long way away from the rest of the wired network.)
It should give you the best of both worlds.
John S.
As a rule, it's better to have a dedicated server than a non-dedicated one, if that's what you're asking. But it won't make your wireless network any more reliable, and that's the most likely problem. A "65%" signal isn't very good.
There's very little you can do in software to make the wireless network perform better, but I would recommend disabling the TT software modifications by doing a factory reset on the device. Hold down the reset button for about 15 seconds or so until you see a message on screen about restoring factory settings. Then report back about how well it works. You could also go back to streaming FLAC instead of uncompressed data.
After that, either make your wireless network more reliable or switch to a wired network connection. Doing the latter will almost certainly fix the problem. To make the wireless signal stronger you might try repositioning the wireless router in your home or else replacing it with a different router. You want a stronger signal, which will be both faster (less susceptible to problems because the buffer can refill faster) and less susceptible to interference from microwave ovens and other outside sources.
Thanks for the answers. I value the help - I really do. However, I said it in my OP and then at least once more: I'm going to put down a wired ethernet, so no need to keep convincing me wireless is an issue. I'm convinced already! :-) That was 1) in my original post.
I still have to try changing the alsa buffer size, which I'll do as soon as I start hitting dropouts again, just to make sure.
So I guess now it boils down to 2) or 3): new laptop non-dedicated as server, old PC dedicated as server, or other?
Also: can I connect a CAT5 cable directly from LMS server to SB Touch, without router in between? I believe there is a different RJ45 configuration to do this.
Thanks!
OK, you're going to switch to a wired ethernet connection for the Touch, which we're both convinced should fix the dropout problems.
> So I guess now it boils down to 2) or 3): new laptop non-dedicated as
> server, old PC dedicated as server, or other?
What "it" are you talking about? Apparently not the dropouts, as you've already solved that problem.
Like I said, a dedicated server is always best. You stated that you didn't want to spend any more on the system. In that regard, the old PC would be the cheapest route. Best may be "other" - buy or build a fairly inexpensive server. Easily doable for $200-300 plus the cost of hard drives.
> Also: can I connect a CAT5 cable directly from LMS server to SB Touch,
> without router in between? I believe there is a different RJ45
> configuration to do this.
Yes, you can do that. The wiring is no different, only the networking configuration on the computer and Touch.
But why would you want to? First, it means that the server would be in the listening room. Unless you buy, build or already own a _silent_ computer, that would be a bad idea. Secondly, it suggests that only those two devices would be on the network, which means that you would not have network access to the PC for doing things like putting new music on its hard drives or for listening to internet radio or music services.
Well, in the original post I also said I couldn't hard-wire an ethernet from the existing server. It's just not practical. So with the wired setup also comes a different server. That is "it".
The hard drive I already have: 1 TB iomega self-powered, usb.
The "new" server would sit in the room next door, so no noise issues to worry about (from the PC or HD). That room I can easily wire.
The server would connect to the Touch through wired ethernet and to internet through the bridge> wireless> router.
What do you propose as server?
I think I have a similar set-up to yours and problems that may be similar to or different from yours (from what I can gather).
Connection as follows: Win7 (LMS) > (cat5)> Cisco Wifi Router > (Wireless)> TP Link Client Router > Cat 5> SBT.
I also have an extension cable for the TP-Link's antenna so that it can be placed in a good location. (The SBT used to be cramped in there before this hybrid wireless set-up, where it showed 100% signal strength.) I wanted to do the hybrid setup for a few reasons: 1. put the SBT in a better, more isolated location (it was amidst a bunch of power cords and stuff), 2. hopefully take the wireless interference and load out of the box, and 3. the wireless N router would be more stable and have better performance than the SBT's built-in wireless N.
The new set-up does sound a bit better (I also have Soundcheck's TT3.0 installed, now with WLAN defeated). On network test, I can turn it up to 10000kbps and still get a green bar all the way (which should mean that the signal is mighty strong and stable). Playing music (decoded to WAV at server side) is flawless most of the time.
However, every now and then I get a dropout problem that seems quite different from the dropout problems I had before with the SBT. Instead of a few seconds of rebuffering and then coming back on, playing a while and rebuffering again (as with an iffy wifi signal), it would go to rebuffering but I could hardly see the % go up, as if there were no connection. Sometimes it would even stop playing. I couldn't even run a network test.
I don't know whether this syndrome just passed on its own or whether my attempts to power-cycle the routers and/or SBT solved the problem; either way, it would suddenly be fine and flawless again.
Perhaps it is a sudden burst of interference somewhere or whatever. I have not done enough testing to nail the problem yet.
Sorry for not helping at all, but I just want to share my experience to see if others can shed some light on my problem, which I am not sure is exactly like yours.
Cheers.
Eduardoo,
Your setup is almost identical to mine:
Yours: Win7 (LMS) > (cat5)> Cisco Wifi Router > (Wireless)> TP Link Client Router > Cat 5> SBT
Mine: Win7 (LMS) > (cat5)> Cisco Wifi Router > (Wireless)> Cisco wireless bridge> Cat 5> SBT.
I too have the TT 3.0 installed with WLAN disabled, WAV files decoded at the server, and the bridge is placed 4-5 feet away from the Touch.
Actually my dropout problems sound similar to yours: music plays for a while, maybe even an hour or more, then stops, the screen shows various rebuffering messages, and the song shows up as paused. Upon network check it's sometimes bad and sometimes ok, but even when ok, the server diagnostics show some lines that are not ok. Next time it happens I will take detailed notes of what the screen displays. Unfortunately for me this issue is erratic: yesterday it played almost throughout the day with just one short dropout. Saturday I had to give up listening at night because I just couldn't make it work - which triggered my posting of this thread!
I wonder why nobody has commented further on the notion of a dedicated server with the PC I described, connected through ethernet to the Touch. Bad idea? Waste of time? Because the machine is just way too old?
I keep coming back to this alternative because I could place it in the next room (no fan noise issues), connect it through LAN, and be done with wireless issues...
Thank you all for the input!
In general, if Wi-Fi signal strength is anything less than the maximum, there will be the occasional lost packet. What happens then is anyone's guess, as the computers will probably execute code paths that have been inadequately tested, assuming the basic protocol design is sound. A fully correct protocol for this application is impossible (for the reason described below). The conclusion is simple: if your wi-fi network is dropping packets, it must be replaced by a network that does not have this problem. If the endpoints are far enough apart that running a wire is impractical, wireless reception is not likely to be reliable either. Go with a wired network. Wireless networks are appropriate only for portable devices where there is no alternative.
Music playback is real-time. It can be proven by means of logic that it is impossible for any protocol to take an unreliable communications medium that loses packets and convert it into a real-time medium that is completely reliable. If one wants complete reliability one must occasionally accept arbitrarily long waits, something not possible in a real-time system. Therefore, the best that a real-time protocol can do is eliminate the effect of some lost packets. It can never give a result equivalent to a loss-free network.
Tony Lauck
"Diversity is the law of nature; no two entities in this universe are uniform." - P.R. Sarkar
Your wifi signal from the router to the bridge is probably the root cause of your dropouts. Cordless phones, microwaves, neighbor's wifi broadcasting on the same channel are just some of the causes for your signal to get weak and drop out.
If your house's electrical system is in good shape, try a powerline networking kit. This technology has matured in the last few years, and if both circuits are on the same breaker box you should get plenty of bandwidth. The 200mbps kits have come way down in price lately and are more than enough to support audio (and video) streaming. You can get a mainstream brand kit for under $100.
Also, if you have the opportunity to setup a dedicated Squeezebox server for the Touch this is preferable. You don't need a lot of processing power or memory (mine runs on a Linux-based NAS with 800MHz CPU and 128MB RAM), but make sure it's hard wired to the network and all power management, screen savers, unnecessary processes are disabled. Taking these steps will ensure a stable audio streaming environment.
Nick,
Thanks for the idea of using powerline. While the electrical system is in very good shape, I would rather go a different route. If I need to install something to act as ethernet, even if it means buying simple plug-in devices, I might as well just put an actual ethernet in place and be done.
My #3 is indeed the concept of a dedicated LMS server. I see yours has even less power than the computer I was thinking of using. Yet yours is on Linux and I was thinking Win XP. Do you think my computer would still be suitable?
Thanks!
If it was that easy to run Ethernet, why didn't you do that in the first place? People use wireless bridges and powerline networking because it is too costly or not possible to run Ethernet in the walls.
I guess you didn't read the thread. It got kind of long, and you probably missed the couple of times I explained this. Never mind: from the existing PC used as server I cannot run ethernet, hence the bridge. In light of the difficulties I've been having, I decided to bite the bullet and put down an ethernet, but using a different computer as server so I can place it in a more convenient spot (an old PC used as a dedicated server, or a new laptop used for other stuff while also streaming music to the Touch).
BTW, I increased the Alsa buffer some (to 5000). I did not hear a change in sound quality and can't really say if things improved: it played flawlessly for 2 days, and an hour ago it decided it would not work anymore today... It has to be something other than the buffer. Interestingly, it fails at night. It's past 11 PM now, so it's not microwaves as someone suggested. Could internet traffic outside my house jam the Touch somehow?
I'm convinced all this will go away with the ethernet, but getting to that point will take some time. I need to buy a new computer and free up the old PC from the little, but crucial, service it's performing now. Upon further thought I decided my wife's laptop is not a good option because of the ethernet wiring it would entail.
So my frustration continues, but I have a better map of where I want to go.
On the Win7 server, right click the icon in the tray and select "Open Control Panel". Choose the "Information" tab, scroll down and look for "Wireless Signal Strength". Waddaya get?
My Touch is located in a garage closet about sixty feet away from my office where the server and main access point live. I have a second access point about halfway that serves as a repeater. With signal strength of only 25%-30%, I get no dropouts. I also use a laptop and iPhone there for various wireless duties, including of course the iPeng remote control app.
When my Touch drops, it means the main access point is down. I usually reset the access points about once a day. It is their fault, not the Touch.
Signal strength on the Win 7 server is a non-issue: it's connected to the router through ethernet. Signal at the SB Touch used to be around 65% when I was running it wirelessly, that is before I added the bridge. I don't think signal strength is the issue.
Signal at the SB Touch used to be around 65% when I was running it wirelessly, that is before I added the bridge.
So, adding the bridge created the problem? If I understand your topology, you are still dependent upon wireless signal strength and consistency.
"So, adding the bridge created the problem? If I understand your topology, you are still dependent upon wireless signal strength and consistency".
No, the problem was there before and the bridge was an attempt to fix it.
Yes, I still have a wireless leg from router to bridge. I'm kind of trying to put wifi behind me by installing an ethernet, though.
Given what you just said, I still think you were mistaken when you earlier wrote:
"I don't think signal strength is the issue."
I think it continues to be the issue. Wired is better, but wireless is doable - perhaps with some changes.
The problem is the wireless. I too was using a wireless bridge and then hardwire to the Touch, but it didn't matter. The deficiency of wireless still comes into play. I didn't have dropouts THAT often, but it always seemed to coincide with dinner time (when either my wife or neighbors were using microwave ovens). I believe microwave ovens also use the 2.4 GHz band. Anyway, when I switched to all hardwired, that completely solved all the dropouts. In fact, it has not suffered a dropout once since the switch. And it sounds better too!
BTW, I also was using Vortexbox software, but switched to Windows 7. On the same PC (hardware), Windows 7 is significantly better sounding than Vortexbox (after properly tweaking Win 7).
Also, it has nothing to do with Soundcheck's buffer size mod (as Phofman suggests). I have that set to 3200.
Edits: 04/02/12
not an answer to op, but answering what is the largest buffer size ?
I have been working on large buffer size settings based on TT3.0 to reduce the distinctive TT3.0 harsh sound. Using ethernet and playing wav files, I use an alsa buffer size of 99999999, which requires a small tweak to the buffer size range in the TT file. I have made loads of other tweaks to kernel and tcp settings and the sound is now very good.
============
not an answer to op, but answering what is the largest buffer size ?
I have been working on a large buffer size settings based on TT3.0 to reduce the distinctive TT3.0 harsh sound. Using ethernet and playing wav files, I use an alsa buffer size of 99999999
============
Every piece of audio hardware supports some range of buffer sizes (the size of the DMA region in RAM) and period sizes (how much of the region is played before the audio hardware raises an IRQ). These values are hard-coded in the driver, which in alsa reports this information to user space via the hw_params fields.
Any player with correctly written alsa support will align the requested buffer size to a buffer size actually supported by the device by calling the alsa-lib function snd_pcm_hw_params_set_buffer_size_near http://www.alsa-project.org/alsa-doc/alsa-lib/group_h_w___params.html#g7e68162163fb155262b021d48a93bdc1 or its equivalents. As a result you can put any number into your config, but the nearest supported value will be used instead.
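That "nearest supported value" behaviour can be sketched in a few lines of Python (the list of supported sizes is invented for illustration; real hardware reports its own limits through hw_params):

```python
def set_buffer_size_near(requested, supported_sizes):
    """Mimic what snd_pcm_hw_params_set_buffer_size_near does:
    whatever size you ask for, the nearest size the hardware
    actually supports is what you get."""
    return min(supported_sizes, key=lambda s: abs(s - requested))

# Hypothetical hardware supporting power-of-two buffers from 64 to 32768 frames
supported = [2 ** n for n in range(6, 16)]
print(set_buffer_size_near(99999999, supported))  # 32768: clamped to the max
print(set_buffer_size_near(5000, supported))      # 4096: nearest supported
```

So an absurdly large config value like 99999999 simply lands on the hardware's upper limit.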
The actual buffer and period sizes are listed, with playback running, in /proc/asound/YOURSOUNDCARD/pcmXp/subY/hw_params:
pavel@nahore:~$ cat /proc/asound/Quartet/pcm0p/sub0/hw_params
access: MMAP_INTERLEAVED
format: S32_LE
subformat: STD
channels: 2
rate: 192000 (192000/1)
period_size: 8192
buffer_size: 32768
The values of period_size and buffer_size are in frames, i.e. 2 x 32 bits = 8 bytes per frame in this specific case.
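As a quick sanity check of those figures, the frames-to-bytes and frames-to-milliseconds arithmetic looks like this in plain Python (numbers taken from the proc output above):

```python
# hw_params figures from the proc output above
rate = 192000          # frames per second
buffer_frames = 32768
period_frames = 8192
bytes_per_frame = 2 * 32 // 8   # stereo, S32_LE: 8 bytes per frame

buffer_bytes = buffer_frames * bytes_per_frame
buffer_ms = 1000 * buffer_frames / rate   # how much audio the buffer holds
period_ms = 1000 * period_frames / rate   # time between hardware IRQs
print(buffer_bytes, round(buffer_ms, 1), round(period_ms, 1))
# 262144 170.7 42.7
```

So this card's 32768-frame buffer holds about 171 ms of audio at 192 kHz, with an IRQ roughly every 43 ms.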
thanks, I was not aware of that. So max buffer is 50000 which equates to 2048
/proc/asound$ cat /proc/asound/card0/pcm0p/sub0/hw_params
access: MMAP_INTERLEAVED
format: S24_LE
subformat: STD
channels: 2
rate: 96000 (96000/1)
period_size: 1024
buffer_size: 2048
=========
thanks, I was not aware of that. So max buffer is 50000 which equates to 2048
=========
Hmm, that is some interesting transformation. For S24_LE there are 6 bytes per stereo frame, or 48 bits. Nothing transforms 50000 into 2048. A buffer size of 2048 is rather small, only some 20ms at 96kHz. Plus it yields an IRQ every 10ms, quite a load on the CPU. I would be surprised if the hardware could not accept a larger buffer. Perhaps that number 50000 configures something different. Do you see the buffer size in proc changing when you change that config number?
I see, the number is microseconds, i.e. buffer_time. If that figure were 20000, and the SBT used newer alsa with S24_3LE for 3 bytes and S24_LE for 3 significant bytes enclosed in 4 bytes, then 20000us, with the help of snd_pcm_hw_params_set_buffer_time_near as in http://forums.slimdevices.com/showpost.php?p=584047&postcount=7 , could translate to 2048 samples.
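Assuming the config number really is a buffer time in microseconds, the conversion to frames (before alsa snaps the result to a hardware-supported size) would work out like this (a sketch, not the actual alsa code):

```python
def buffer_time_to_frames(buffer_time_us, rate):
    """Convert a buffer *time* in microseconds to a frame count at a
    given sample rate; alsa then snaps this to a supported buffer size."""
    return round(buffer_time_us * rate / 1_000_000)

print(buffer_time_to_frames(50000, 44100))  # 2205 frames requested
print(buffer_time_to_frames(50000, 96000))  # 4800 frames requested
print(buffer_time_to_frames(20000, 96000))  # 1920 -> hardware snaps near 2048
```

The requested frame count grows with sample rate, which is why a hardware ceiling shows up as the same buffer_size figure at every rate.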
I wonder what changing the buffer time to some large value, and the period count in a buffer to e.g. 4, would do. I have never seen such experimentation; perhaps I have missed some posts. IMO for maximum playback reliability the buffer size should be as large as possible (hundreds of milliseconds, if possible), with a period count perhaps a bit above the current 2 (fixed ratio in asio): 4 (I think mplayer) or 8 (newer versions of audacious)?
Set the alsa playbackperiodcount to 4 and managed to get a 4096 buffer size (up from 2048), so thanks for that tip. When set to 8 the period size goes down to 512, so a buffer size of 4096 must be the max. I think it sounds better too.
/etc/squeezeplay/userpath/settings$ cat /proc/asound/card0/pcm0p/sub0/hw_params
access: MMAP_INTERLEAVED
format: S24_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 1024
buffer_size: 4096
/etc/squeezeplay/userpath/settings$ cat /proc/asound/card0/pcm0p/sub0/hw_params
access: MMAP_INTERLEAVED
format: S24_LE
subformat: STD
channels: 2
rate: 96000 (96000/1)
period_size: 1024
buffer_size: 4096
Sorry to be off topic, but to answer phofman
buffer setting    period size / buffer size (frames)
3200              71/142 @ 44.1       154/308 @ 96
20000             441/882 @ 44.1      960/1920 @ 96
50000             1024/2048 @ 44.1 and @ 96
I just found out that setting the buffer above 50000 does have an effect, only on 16/44.1 though. At 50000 it sets the period to 551 and the buffer size to 2204; at 99999999 it sets the period to 1024 and the buffer size to 4096. So large alsa buffer sizes it is. I knew it was making a difference, and the difference is to 16/44.1 music.
SBGK, thanks for the info. 2048 samples really seems to be the upper hardware limit, since it does not grow with rising samplerate. For 44.1kHz that figure translates to 46ms, i.e. those 50000 microseconds adjusted by snd_pcm_..._near. For 96kHz it is a 21ms buffer time, 10ms period time (IRQ rate).
Thanks, just to close this off - I am trying to find the optimum max_user_freq for the 50000 buffer setting. Is it correct to deduce that if the period time is 10ms then the best max_user_freq would be 100? Does setting it higher mean higher resolution or lead to more noise, or doesn't it matter? If I set it to 1024, 2048 or 3072 there is a perceived increase in resolution as it gets higher, but I'm not sure if this is due to irq-induced noise.
after a bit of trial and error, 4095 is the maximum and sounds fantastic.
=====
Thanks, just to close this off - I am trying to find the optimum max_user_freq for the 50000 buffer setting. Is it correct to deduce that if the period time is 10ms then the best max_user_freq would be 100? Does setting it higher mean higher resolution or lead to more noise, or doesn't it matter? If I set it to 1024, 2048 or 3072 there is a perceived increase in resolution as it gets higher, but I'm not sure if this is due to irq-induced noise.
after a bit of trial and error, 4095 is the maximum and sounds fantastic.
=====
That setting just changes the maximum frequency a user space application can ask the RTC to provide. I have no idea how it could affect the audio playback chain. Did you do a simple blind listening test to confirm the sound change?
Yes, I have tried it at 1 (very smooth playback, lower detail), 64 (Touch default), 2048 (TT3.0 default), 3072 and 4095 - the higher the figure, the more perceived detail, but it does sound digital and is quite wearing after a while. That's why I wanted the smooth sound of 1 but with more detail. Am trying 100 at the moment and it sounds ok. I don't do blind tests, just a-b. Others have noticed the difference also, not least the people who have tried TT3.0 and found it too forward and thin sounding.
Edits: 04/04/12
So you were able to correctly distinguish between the default setup of the linux kernel (64) and your changed value, without actually knowing which was the current case? How did you perform your test?
Ah, I see what you're saying there. I didn't say I noticed a difference between 64 and 100. I said that increasing it to 2048, 3072 and 4095 produced a more forward and detailed, though digital, sound (jitter?) than my previous setting of 1. Obviously I have listened to 64 in the past, before applying the TT3.0 settings. I picked 100 because it was the reciprocal of .01s; so far it sounds better than the higher settings, but I have not compared it to 1 for detail yet. I suspect the best setting will be higher than 100, but was interested to know if there was a theoretical best setting.
=========
suspect the best setting will be higher than 100, but was interested to know if there was a theoretical best setting.
========
I think the theoretical best setting should be based on source code analysis. I downloaded the SqueezeOS 7.7 sources ( http://wiki.slimdevices.com/index.php/SqueezeOS_Build_Instructions ) as specified at http://wiki.slimdevices.com/index.php/Hardware_comparison . After a few hours of fixing their build process, I finally managed to get, hopefully, all the sources the firmware uses:
pavel@sara:~/tmp$ ls -1
alsa-lib-1.0.18
arm-2010q1
autoconf-2.61
automake-1.10.2
config
curl-7.18.0
desktop-file-utils-0.15
expat-2.0.0
file-4.18
flac-1.2.1
freetype-2.4.2
gettext-0.14.1
git
git-1.5.2.3
glib-2.18.1
jpeg-8b
libmad-0.15.1b
libpng-1.2.43
libtool-2.2.6
lua-5.1.1
lzo-2.02
module-init-tools-3.2.2
m4-1.4.12
ncurses-5.4
openssl-0.9.8g
pax-utils-0.1.19
pkg-config-0.23
quilt-0.47
SDL_gfx-2.0.15
SDL_image-1.2.5
SDL_ttf-2.0.8
SDL-1.2.13
squeezeplay
s3c2412
tolua++-1.0.92
Tremor
zlib-1.2.3
Actually quite large - 1.3GB of source code:
pavel@sara:~/tmp$ du -sh
1,3G .
Now let's analyze:
pavel@sara:~/tmp$ rgrep -n max_user_freq * | grep -v .svn
s3c2412/linux-2.6.22/include/linux/rtc.h:158: int max_user_freq;
s3c2412/linux-2.6.22/Documentation/rtc.txt:183: structure. Also make sure you set the max_user_freq member in your
s3c2412/linux-2.6.22/drivers/rtc/rtc-dev.c:231: if (arg > rtc->max_user_freq && !capable(CAP_SYS_RESOURCE))
s3c2412/linux-2.6.22/drivers/rtc/class.c:147: rtc->max_user_freq = 64;
s3c2412/linux-2.6.22/drivers/rtc/rtc-s3c.c:539: rtc->max_user_freq = 128;
s3c2412/linux-2.6.22/drivers/rtc/rtc-bfin.c:398: rtc->rtc_dev->max_user_freq = (2 << 16); /* stopwatch is an unsigned 16 bit reg */
s3c2412/linux-2.6.22/drivers/char/rtc.c:194:static unsigned long rtc_max_user_freq = 64; /* > this, need CAP_SYS_RESOURCE */
s3c2412/linux-2.6.22/drivers/char/rtc.c:287: .data = &rtc_max_user_freq,
s3c2412/linux-2.6.22/drivers/char/rtc.c:449: if (!kernel && (rtc_freq > rtc_max_user_freq) &&
s3c2412/linux-2.6.22/drivers/char/rtc.c:652: if (!kernel && (arg > rtc_max_user_freq) && (!capable(CAP_SYS_RESOURCE)))
OK, as expected the max_user_freq string is only in the linux kernel sources. Out of those occurrences, the value is actually used only in rtc-dev.c and rtc.c:
static int rtc_dev_ioctl(struct inode *inode, struct file *file,
unsigned int cmd, unsigned long arg)
...
case RTC_IRQP_SET: /* Set periodic IRQ rate. */
{
/*
* We don't really want Joe User generating more
* than 64Hz of interrupts on a multi-user machine.
*/
if (!kernel && (arg > rtc_max_user_freq) && (!capable(CAP_SYS_RESOURCE)))
return -EACCES;
...
rtc_freq = arg;
Clearly the max_user_freq setting is an upper limit used as a check when setting the realtime clock (RTC) device frequency via ioctl. This device is used by userspace applications which require specific timing/wakeups.
The ioctl call is RTC_IRQP_SET. Does it actually get used in linux applications? A simple google search http://www.google.com/search?client=ubuntu&channel=fs&q=ioctl+RTC_IRQP_SET&ie=utf-8&oe=utf-8 reveals it does, e.g. in mplayer http://repo.or.cz/w/mplayer.git/blob/HEAD:/mplayer.c#l4128 :
unsigned long irqp = 1024; /* 512 seemed OK. 128 is jerky. */
if (ioctl(rtc_fd, RTC_IRQP_SET, irqp) < 0) {
mp_tmsg(MSGT_CPLAYER, MSGL_WARN, "Linux RTC init error in "
"ioctl (rtc_irqp_set %lu): %s\n",
irqp, strerror(errno));
mp_tmsg(MSGT_CPLAYER, MSGL_HINT, "Try adding \"echo %lu > /proc/sys/dev/rtc/max-user-freq\" to your system startup scripts.\n", irqp);
close(rtc_fd);
rtc_fd = -1;
} else if (ioctl(rtc_fd, RTC_PIE_ON, 0) < 0) {
...
This code allows mplayer to use a hardware timer for its wakeups every 1ms (1024Hz). For most people it will throw an error and mplayer will use a software timer instead - for details see the source code.
So where is the ioctl RTC_IRQP_SET used in the SqueezeOS source code:
pavel@sara:~/tmp$ rgrep -n RTC_IRQP_SET * | grep -v .svn
arm-2010q1/arm-none-linux-gnueabi/libc/usr/include/linux/rtc.h:84:#define RTC_IRQP_SET _IOW('p', 0x0c, unsigned long) /* Set IRQ rate */
s3c2412/linux-2.6.22/include/linux/rtc.h:84:#define RTC_IRQP_SET _IOW('p', 0x0c, unsigned long) /* Set IRQ rate */
s3c2412/linux-2.6.22/sound/core/rtctimer.c:96: rtc_control(rtc, RTC_IRQP_SET, rtctimer_freq);
s3c2412/linux-2.6.22/fs/compat_ioctl.c:2357:#define RTC_IRQP_SET32 _IOW('p', 0x0c, compat_ulong_t)
s3c2412/linux-2.6.22/fs/compat_ioctl.c:2380: case RTC_IRQP_SET32:
s3c2412/linux-2.6.22/fs/compat_ioctl.c:2381: return sys_ioctl(fd, RTC_IRQP_SET, arg);
s3c2412/linux-2.6.22/fs/compat_ioctl.c:3461:HANDLE_IOCTL(RTC_IRQP_SET32, rtc_ioctl)
s3c2412/linux-2.6.22/Documentation/rtc.txt:161: * RTC_PIE_ON, RTC_PIE_OFF, RTC_IRQP_SET, RTC_IRQP_READ ... another
s3c2412/linux-2.6.22/Documentation/rtc.txt:180: * RTC_IRQP_SET, RTC_IRQP_READ: the irq_set_freq function will be called
s3c2412/linux-2.6.22/Documentation/rtc.txt:401: retval = ioctl(fd, RTC_IRQP_SET, tmp);
s3c2412/linux-2.6.22/Documentation/rtc.txt:409: perror("RTC_IRQP_SET ioctl");
s3c2412/linux-2.6.22/drivers/input/misc/hp_sdc_rtc.c:568: case RTC_IRQP_SET: /* Set periodic IRQ rate. */
s3c2412/linux-2.6.22/drivers/rtc/rtc-dev.c:230: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/rtc/rtc-dev.c:242: if (cmd == RTC_PIE_ON || cmd == RTC_PIE_OFF || cmd == RTC_IRQP_SET) {
s3c2412/linux-2.6.22/drivers/rtc/rtc-dev.c:346: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/rtc/rtc-s3c.c:323: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/rtc/rtc-vr41xx.c:262: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/rtc/rtc-at91rm9200.c:216: case RTC_IRQP_SET: /* set periodic alarm frequency */
s3c2412/linux-2.6.22/drivers/rtc/rtc-sa1100.c:238: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/char/rtc.c:412: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/char/rtc.c:637: case RTC_IRQP_SET: /* Set periodic IRQ rate. */
s3c2412/linux-2.6.22/drivers/char/rtc.c:878: if (cmd != RTC_PIE_ON && cmd != RTC_PIE_OFF && cmd != RTC_IRQP_SET)
s3c2412/linux-2.6.22/drivers/char/efirtc.c:171: case RTC_IRQP_SET:
s3c2412/linux-2.6.22/drivers/sbus/char/rtc.c:58:#define RTC_IRQP_SET _IOW('p', 0x0c, unsigned long) /* Set IRQ rate */
s3c2412/linux-2.6.22/drivers/sbus/char/rtc.c:171: case RTC_IRQP_SET:
Oops: only the ARM base system and the Linux kernel tree we talked about above. No other occurrence of this string. IMO that makes sense: the SB Touch does not need a hardware timer, since the sound stream is timed by the clocks of the sound card.
I do not think this parameter is used anywhere in the SBT at all. The recommendation to change it was copied from other places on the internet, which were themselves copied from others, mostly without knowing what it actually does. Yet it is an important Linux audio setup parameter, e.g. for MIDI synthesis, as in the Arch Linux pro-audio recommendations https://wiki.archlinux.org/index.php/Pro_Audio#System_Configuration . It makes perfect sense there. Unlike in the SBT.
I may be wrong; perhaps not all source code was actually downloaded, as there are a few pre-compiled native libraries in the ARM package. But I did my best. That is why I asked about the results of a blind test.
Wow, thanks for your efforts in answering the question. In the end I took the empirical view that since nothing breaks when max_user_freq is set to 1, it probably isn't critical to music playback and was just adding jitter which was being perceived as more detail. As latency is not critical to the Touch, I am trying to reduce anything which produces jitter.
I wonder if I can ask a supplementary question. The final thing I am trying to understand is what the best buffer time would be for JIVE_ALSA as used in the Squeezebox Touch. Above a threshold the values of cat /proc/asound/card0/pcm0p/sub0/hw_params don't change, and yet I think the sound does change, say if I use a buffer time of 5000000. From what I have read this should not be possible. Do you have an understanding of what happens when a large buffer time is used, and what could be causing the change in sound? (The change is that the sound gets more analogue-like.) Others have tried the large buffer sizes and found their music stops playing after a while if set too high (but they like the effect), so it sounds like some resource is being allocated, but I don't know what it is or how to monitor it.
Would appreciate any thoughts you have on this and thanks again for answering the max_user_freq question.
/usr/bin$ cat /proc/asound/card0/pcm0p/sub0/hw_params
access: MMAP_INTERLEAVED
format: S24_LE
subformat: STD
channels: 2
rate: 44100 (44100/1)
period_size: 1021
buffer_size: 4084
========
In the end I took the empirical view that since nothing breaks when max_user_freq is set to 1, it probably isn't critical to music playback and was just adding jitter which was being perceived as more detail.
========
Well, this is why I am asking for the blind tests. If it does not get used, it cannot add jitter either.
=======
From what I have read this should not be possible. Do you have an understanding of what happens when a large buffer time is used, and what could be causing the change in sound? (The change is that the sound gets more analogue-like.) Others have tried the large buffer sizes and found their music stops playing after a while if set too high (but they like the effect), so it sounds like some resource is being allocated, but I don't know what it is or how to monitor it.
========
I already answered that.
Look at http://svn.slimdevices.com/repos/jive/7.7/trunk/squeezeplay/src/squeezeplay/src/audio/decode/decode_alsa.c
The lua options you are changing are read here:
lua_getfield(L, 2, "alsaPlaybackBufferTime");
buffer_time = luaL_optinteger(L, -1, ALSA_DEFAULT_BUFFER_TIME);
lua_getfield(L, 2, "alsaPlaybackPeriodCount");
period_count = luaL_optinteger(L, -1, ALSA_DEFAULT_PERIOD_COUNT);
lua_pop(L, 2);
In the method decode_alsa_fork they are passed unchanged to the jive_alsa binary as the parameters -b and -p.
Now let's move to the jive_alsa code - http://svn.slimdevices.com/repos/jive/7.7/trunk/squeezeplay/src/squeezeplay/src/audio/decode/decode_alsa_backend.c
In the main method the parameters are stored in state.buffer_time and state.period_count. If you search that file for another use of state.buffer_time, you will find the only place is where ALSA is asked to use the supported value nearest to buffer_time:
val = state->buffer_time;
dir = 1;
if ((err = snd_pcm_hw_params_set_buffer_time_near(*pcmp, hw_params, &val, &dir)) < 0) {
LOG_ERROR("Unable to set buffer time %s", snd_strerror(err));
return err;
}
Therefore, however large a number you configure, in the end ALSA will always use the same largest possible value. Your huge value does not get used at all, and it does not allocate any resource either. You cannot hear any difference between 1000000 and 9999999; both numbers configure the largest buffer time/size allowed by the soundcard driver (somewhere above 4000 frames). If you hear the difference you talk about, it is psychoacoustics. While I was not 100% sure about max_user_freq (only about 99.9% :) ), here I am absolutely certain a blind listening test would reveal no difference, since your /proc/asound/card0/pcm0p/sub0/hw_params will be identical for the two large values. In fact I think a blind test would reveal no sound difference even for really different buffer sizes, but I will not go into this here, as nobody will provide any blind-listening-test evidence supporting the perceived difference anyway.
Thanks for that, I'll have a read, hope your time wasn't wasted, I have no more questions.
Fortunately, my aim is to reduce jitter induced hf noise, so it generally only takes a few minutes to discover whether I can live with a tweak or not.
If 50% of my tweaks are psychoacoustics then that is still a 50% success rate, and I have learned a lot by asking these questions. The max_user_freq setting is definitely a repeatable effect, so I am not concerned about your comments on that one. If professional musicians can't identify a Stradivarius, then I don't think I have much chance in blind tests, especially when humans are better adapted to A-B tests.
http://www.npr.org/blogs/deceptivecadence/2012/01/02/144482863/double-blind-violin-test-can-you-pick-the-strad
Do you experience the dropouts with the buffer size mod disabled? That hack reduces the amount of audio data your SB caches, raising the chance of dropouts. In fact I would try increasing it (50000 max?), not reducing it.
The TT buffer mod only affects the ALSA buffer, not the squeezeplay buffer which stays very large (it can be a couple minutes at 16/44.1) so it has relatively little to do with dropouts etc.
You can try increasing the TT buffer to say 5000 or 10000 and see if that affects the dropouts but my guess is that it will not.
John S.
=======
The TT buffer mod only affects the ALSA buffer, not the squeezeplay buffer which stays very large (it can be a couple minutes at 16/44.1) so it has relatively little to do with dropouts etc.
=========
Well, if the default input buffer is that large (tens of seconds), it should filter out any irregularities in the data feed (seconds at most), provided the communication protocol is correctly designed to keep the input buffer as full as possible. If that is the case, I would search for problems further downstream, i.e. the ALSA buffer and timely delivery of audio data to the DMA region. Just my guesses; I have never touched the Touch :)
> Well, if the default input buffer is that large (tens of seconds), it
> should filter out any irregularities in data feed (seconds at most), if
> the communication protocol is correctly designed to maintain the input
> buffer as full as possible.
It certainly helps, and with a fairly healthy wireless connection you should get no dropouts, but if the connection is slow enough, or if there is enough interference, then you can get buffer underruns no matter how large the buffer. One thing that may contribute a little is that playback begins before the buffer is filled completely. It usually takes only a moment to fill the buffer to a point where playback can safely begin, but on a bad network the buffer may never fill to the point where it can sustain a several-second lapse in the network stream.
OK, I'll try this out as soon as I start experiencing dropouts. It's set at 3400, per the toolbox. If this is the cause it would be great news: fixing it is free and no hassle!
Hi,
Did it work before you installed TT 3.0?
Have you tried it with smaller files, e.g. MP3s?
You might want to try commenting out the TCP section settings in the TT file (the lines between tcp () { and } ) so that the default Touch ones are used.
or
You might want to try enabling window scaling and timestamps in the TT tcp section: without them the receive window is limited to about 64 KB, while with window scaling it can grow to 12 MB, which might help as there will be far fewer acknowledgements flying about.
echo "1" > /proc/sys/net/ipv4/tcp_window_scaling
echo "1" > /proc/sys/net/ipv4/tcp_timestamps