
cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up



Unraid 6 with Xen, barely any plugins (just apcupsd and swap), min 1GB ram, max 2GB

 

I am getting the following when I try to run cache_dirs

 

ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
/boot/cache_dirs: xrealloc: ./parse.y:3246: cannot allocate 512 bytes (98304 bytes allocated)

 

I even set up a 4GB swap file that is pretty much unused.

 

Any ideas?

 

Thanks


Unraid 6 with Xen, barely any plugins (just apcupsd and swap), min 1GB ram, max 2GB [...] Any ideas?

 

I modified the script. Changed ulimit to 30000 now it works
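For anyone else landing here: the fix amounts to raising the limit the script sets on itself. A minimal sketch, assuming cache_dirs uses bash's `ulimit -v` (value in kilobytes) as older versions did; run it in a subshell so the change doesn't affect your login shell:

```shell
# 30000 KB is the value reported to work on 64-bit unRAID 6;
# the subshell keeps the limit from leaking into the calling shell.
(
    ulimit -v 30000
    ulimit -v        # report the limit now in effect
)
```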

  • 2 weeks later...

Unraid 6 with Xen, barely any plugins (just apcupsd and swap), min 1GB ram, max 2GB [...] Any ideas?

I modified the script. Changed ulimit to 30000 now it works

 

Joe, I can confirm this behaviour with v6b5a.

 

cache_dirs -F -w -i "movies.*"
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory
./cache_dirs: xrealloc: ./parse.y:3246: cannot allocate 512 bytes (98304 bytes allocated)

 

I would suggest this can be considered a bug.


Joe, looks good.

 

Out of interest, does the 30k come from anywhere, other than it just being bigger?

It is what others have said has worked for them.

 

You can also set the ulimit to whatever you like via a new "-U NNNNNNN" option.  If you do not specify a -U value, the program will now detect whether you are on the 64-bit OS and use the 30k value instead of the lower value used on the 32-bit OS.
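The updated script isn't pasted here, but the detection presumably amounts to something like this sketch (the variable name is made up; the real internals of cache_dirs may differ):

```shell
# Pick a default ulimit based on kernel word size; a -U value on the
# command line would override this. uname -m reports x86_64 on the
# 64-bit OS.
if [ "$(uname -m)" = "x86_64" ]; then
    DEFAULT_ULIMIT=30000   # larger cap needed by 64-bit bash
else
    DEFAULT_ULIMIT=5000    # smaller 32-bit value (assumed)
fi
echo "$DEFAULT_ULIMIT"
```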

 

Joe L.


Do we have any thoughts on how we can derive the number on a system-by-system basis? For instance, one of my machines has 16GB of RAM now, so I don't want to hobble the system artificially.

 

On a happy note, it is nice to have cache_dirs back. My move to v6 had been seamless, but the lack of cache_dirs made a tremendous difference to the real-world "responsiveness" of the v6 system. Now that I have cache_dirs again, subjectively it is even better than v5.

 

Everyone should run this.


I can't get it to start in Unraid 6; there's nothing in the logs about it from what I can tell. I tried the old version and the new version just posted. I have 3 servers I attempted to get it to work on, and it was working fine before the upgrade.

 

I put cache_dirs on the flash drive(s), and I set this line in the go script:

/boot/cache_dirs -w

 

EDIT:

If I manually telnet into my servers and invoke the process manually, it works. Something is not quite right; it doesn't seem to load correctly from the go file.

 

My exact "go" file:

# Start the Management Utility
/usr/local/sbin/emhttp &
echo nameserver 192.168.0.1 >/etc/resolv.conf
echo 192.168.0.102 UNRAID2 >>/etc/hosts
sleep 30; blockdev --setra 2048 /dev/md*
/boot/cache_dirs -w

syslog.txt


I can't get it to start in Unraid 6 [...] If I manually telnet into my servers and invoke the process manually, it works. Something not quite right, doesn't seem to load correctly from the go file.

 

I am experiencing exactly the same. What appears in the ps -eaf list is:

root      2679  1003  0 09:57 ?        00:00:00 /bin/bash /var/tmp/go
root      2680  2679  0 09:57 ?        00:00:00 /usr/local/sbin/emhttp
root      2681  2679 97 09:57 ?        01:17:28 /bin/bash -c /boot/packages/cache_dirs

 

Note that there are no command parameters appended to the cache_dirs command, and the go process doesn't terminate.

 

 

EDIT:

 

I note that the go script is not run directly from the /boot drive, but from a copy in /var/tmp.

 

I'm not quite sure why, but the plain

/boot/packages/cache_dirs -w -B -m 15 -M 30 -d 6 -i "Movies" -i "Music" -i "Series" -i "Videos" -i "xbmc" -i "Torrents" -i "ReadyTorrents" -i "Downloaded" -i "Photos" -i "Maildir"

command is actually invoked as:

/bin/bash -c /boot/packages/cache_dirs

 

The -c option on the /bin/bash command says 'read commands from the following string'. The problem is that only that string is executed; anything after it becomes positional parameters of the new shell rather than arguments to the script, so the flags are silently dropped. I'm not quite sure why the -c is getting added to the command line. Some experimentation is required - perhaps we can work around this in some way ... by including the /bin/bash in the line in the go script, or setting up the command in a variable. This needs to wait for a reboot ...
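The effect is easy to demonstrate with a throwaway script (/tmp/argdemo.sh is a made-up path): with -c, anything after the command string becomes the new shell's $0, $1, ... rather than arguments to the script named in the string.

```shell
# Build a tiny script that just reports its arguments.
cat > /tmp/argdemo.sh <<'EOF'
#!/bin/bash
echo "args=[$@]"
EOF
chmod +x /tmp/argdemo.sh
bash -c /tmp/argdemo.sh -w -B   # prints args=[]   (flags silently dropped)
bash /tmp/argdemo.sh -w -B      # prints args=[-w -B]
```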


Thank you Joe, it seems to be working.

What makes me wonder is that it is constantly consuming 100% CPU... that may be the way it is designed to operate; to be honest, I can't remember the CPU usage in unRAID 5.x times.

I have attached a screenshot of "top"

 

Edit: I invoke cache_dirs via the go script as /boot/cache_dirs -w -i "movies"

cache_dirs.JPG


Thank you Joe, it seems to be working. What makes me wonder is that it is constantly consuming 100% CPU [...] I invoke cache_dirs via the go script as /boot/cache_dirs -w -i "movies"

For it to be using 100% of a CPU it would need to be in a tight loop.    That is not expected.

You might want to run it in the foreground to see what it is doing on your server.

 

Kill the currently running process, then run it with "-F -v" from the command line.

 

Joe L.


Okay, I started cache_dirs via the console while the server was running. Before that, I killed the old 100-percent-CPU cache_dirs process.

The CPU usage of the newly started process is at 1 percent when it is running, and otherwise I can't see it via top, so it has to be sleeping, I guess. That's good news :)

 

The output is:

 

root@conchulio:/boot# cache_dirs -w -i "movies" -i "tv" -F -v
Executing find /mnt/disk1/movies -noleaf
Executing find /mnt/disk2/movies -noleaf
Executing find /mnt/disk3/movies -noleaf
Executing find /mnt/disk4/movies -noleaf
Executing find /mnt/disk5/movies -noleaf
Executing find /mnt/disk6/movies -noleaf
Executing find /mnt/cache/movies -noleaf
Executing find /mnt/disk1/tv -noleaf
Executing find /mnt/disk2/tv -noleaf
Executing find /mnt/disk3/tv -noleaf
Executing find /mnt/disk4/tv -noleaf
Executing find /mnt/disk5/tv -noleaf
Executing find /mnt/disk6/tv -noleaf
Executing find /mnt/cache/tv -noleaf
Executed find in 35.216451 seconds, weighted avg=35.216451 seconds, now sleeping 5 seconds
[the same fourteen "Executing find ..." lines repeat before each of the following passes]
Executed find in 0.177511 seconds, weighted avg=11.857158 seconds, now sleeping 6 seconds
Executed find in 0.178924 seconds, weighted avg=6.018041 seconds, now sleeping 7 seconds
Executed find in 0.172183 seconds, weighted avg=3.679698 seconds, now sleeping 8 seconds
Executed find in 0.242335 seconds, weighted avg=2.533910 seconds, now sleeping 9 seconds
Executed find in 0.174541 seconds, weighted avg=1.859805 seconds, now sleeping 10 seconds
Executed find in 0.196911 seconds, weighted avg=1.444081 seconds, now sleeping 10 seconds
Executed find in 0.155507 seconds, weighted avg=1.157731 seconds, now sleeping 10 seconds
Executed find in 0.184505 seconds, weighted avg=0.963086 seconds, now sleeping 10 seconds
Executed find in 0.178713 seconds, weighted avg=0.820473 seconds, now sleeping 10 seconds
Executed find in 0.182364 seconds, weighted avg=0.714121 seconds, now sleeping 10 seconds
Executed find in 0.195693 seconds, weighted avg=0.634363 seconds, now sleeping 10 seconds

 

 

But after a reboot, with cache_dirs invoked via the go script, cache_dirs is again up at 100 percent CPU.

I added the -F -v options, but there is no mention of cache_dirs or find in the syslog.

Strange.

 

Syslog attached

syslog.txt


I too had the exact same issue as peterb (post #635), with all flags being ignored when started via the go script, so there is definitely something odd going on when started via the go script.

 

Sent from my Nexus 7 using Tapatalk

then don't do that ...  8)

 

 

It almost sounds as if, the first time it goes looking for directories to scan, they are not yet there, so it subsequently scans nothing (but does so very quickly).

 

I'd add a few lines like this AFTER invoking emhttp, but before doing anything else in the config/go file.

 

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

# Wait for the md devices to come online.
while [[ ${LOOP:=30} -gt 1 && ! -b /dev/md1 ]]
do
        (( LOOP=LOOP-1 ))
        echo "Waiting for /dev/md1 to come online ($LOOP)"
        sleep 1
done
sleep 5
/boot/cache_dirs -w

  • 2 weeks later...

I can confirm that when I start it from the go file, all flags are ignored and it is using 99% CPU (unRAID 6 beta 5a).

 

I instead start it from the command line manually. It is fine because I rarely reboot the server.

 

 

I do have a general question, though. I realize most people use this script so the drives aren't woken up unless a specific file is accessed.

 

But if one is not putting any of the drives to sleep, in other words if they are always spinning, is there still a benefit to this script?
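There can be, even with the drives always spinning: the point of the repeated find loop is to keep directory entries pinned in the kernel's dentry/inode cache so traversals are answered from RAM instead of the disk. A rough illustration with a throwaway tree (the /tmp path and file names are made up):

```shell
# Build a small tree, then walk it twice. The first walk may touch the
# disk; the repeat is served from the dentry cache, which is why the
# -F -v passes earlier in the thread drop from ~35s to ~0.18s.
mkdir -p /tmp/dentry_demo/movies /tmp/dentry_demo/tv
touch /tmp/dentry_demo/movies/a.mkv /tmp/dentry_demo/tv/b.mkv
find /tmp/dentry_demo -noleaf > /dev/null   # cold pass
find /tmp/dentry_demo -noleaf               # warm pass, from RAM
```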

 

I noticed that, with or without it, it still takes Windows Explorer 5-10 seconds to show the full listing of my movies folder through Samba (a little over 1000 sub-folders). It starts with a list of 125 and adds another 125 every half second to a second or so.

 

Thanks

 

 

EDIT: Holy crap, I just discovered something. In Windows Explorer, if I go to \\tower\Movies it takes 10 seconds to display, but if I go to \\192.168.x.x\Movies it is instant. Is that a Windows Explorer glitch?


EDIT: Holy crap, I just discovered something. In windows explorer, if I go to \\tower\Movies it takes 10 seconds to display, but if I go to \\192.168.x.x\Movies it is instant. Is that a windows explorer glitch?

Sounds like Windows is scanning the folder for thumbnails and such, but with the IP address it isn't. Of course, if you used the IP immediately after using the name, the contents were cached on your Windows box, and that is why it was instant. As confirmation, wait a while or browse lots of other directories so that the cached info is discarded, then use the IP first. Is it still instant? If so, then Windows is set up not to scan with an IP address but is with a name. Interested to know, as I will have to try it myself when I get home.

Surely the slow browsing is just due to NetBIOS lookup? Add tower to your Windows hosts file if you still want to use tower without the performance hit, or, better still, set up your own name server.
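For example (an illustrative entry; substitute the server's real address, written as 192.168.x.x elsewhere in the thread):

```
# C:\Windows\System32\drivers\etc\hosts  (edit with an elevated editor)
192.168.x.x   tower
```

With a static entry, name resolution is answered locally instead of going out over NetBIOS broadcast.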

 

Sent from my Nexus 7 using Tapatalk

 


Surely the slow browsing is just due to NetBIOS lookup? Add tower to your windows hosts file if you still want to use tower without the performance hit, or better still setup your own name server.

 


 

How does the NetBIOS lookup work, exactly? I thought it did an initial lookup to find the IP address and then used the IP for the rest of the transactions with the same machine.

 

I first browse to \\tower and everything shows up instantly (not too many shares). I would assume the NetBIOS lookup is done at this step.

Then I click on the share called "Movies", and it takes a while. Is it doing NetBIOS lookups for each folder listed under the share? That would be kinda crazy.

 

I retried the experiment, this time with the IP version first and then the NetBIOS-name version, and the results are the same. I don't think it's a caching issue.


The issue with cache_dirs not working properly when invoked from the go file on unRAID v6 is due to the first few lines in the cache_dirs script:

#!/bin/bash
if test "$SHELL" = "/bin/sh" && test -x /bin/bash; then
    exec /bin/bash -c "$0" "$@"
fi

The SHELL environment variable starts out as "/bin/sh" and does not change when executing /bin/bash, so the script keeps re-executing itself in a loop.

 

When invoked from a standard shell, the SHELL environment variable is "/bin/bash" and it works just fine.

 

The test can be modified like this:

if test "$BASH"x = "x" && test -x /bin/bash; then

This works on unRAID 6.0-beta5a booted with Xen.  I was thinking of other required test cases, but I'm not sure what this chunk of code is supposed to be doing.  It appears to ensure the script is invoked with bash, but doesn't the first line (#!/bin/bash) do that on its own?  I would think the other three lines could just be removed.
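A sketch of the fixed guard in action, written to a throwaway script (/tmp/guard_demo.sh is a made-up path). Note that the re-exec passes "$0" as the command name rather than as a -c string, so the original arguments survive:

```shell
cat > /tmp/guard_demo.sh <<'EOF'
# Re-exec under bash only if we are not already running under it;
# $BASH is set by bash and unset in plain sh.
if test "$BASH"x = "x" && test -x /bin/bash; then
    exec /bin/bash "$0" "$@"
fi
echo "bash with args: $@"
EOF
chmod +x /tmp/guard_demo.sh
sh /tmp/guard_demo.sh -w -q     # re-execs itself under bash
bash /tmp/guard_demo.sh -w -q   # guard is skipped
```

Both invocations end up printing the flags, whereas the -c form loses them.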

