cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up



I noticed my Windows File History backup has over 400k files and lots of folders, so I added -a '-noleaf \( -name Data -prune -o -name data -prune \) -o -print'

The folder is /mnt/user/backup/username/folder/Data. Since I did this I can see it's running, but any time I browse a cached directory it still wakes up all the drives and takes time, so I think it's not working correctly. Am I using the command in a wrong way? Is there a way to see the status of what it has cached or what it is doing?

 

Jun 12 11:59:59 Tower cache_dirs: ==============================================

Jun 12 11:59:59 Tower cache_dirs: command-args=-w -B -a -noleaf ( -name Data -prune -o -name data -prune ) -o -print

Jun 12 11:59:59 Tower cache_dirs: vfs_cache_pressure=10

Jun 12 11:59:59 Tower cache_dirs: max_seconds=10, min_seconds=1

Jun 12 11:59:59 Tower cache_dirs: max_depth=9999

Jun 12 11:59:59 Tower cache_dirs: command=find -noleaf ( -name Data -prune -o -name data -prune ) -o -print

Jun 12 11:59:59 Tower cache_dirs: version=1.6.7
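To check whether the prune expression itself behaves as intended, you can exercise the same find expression on a throwaway tree; this sketch uses stand-in paths, not the poster's actual share:

```shell
# Build a stand-in tree with a "Data" folder that should be excluded.
mkdir -p /tmp/prune_demo/keep /tmp/prune_demo/Data/sub
touch /tmp/prune_demo/keep/file1 /tmp/prune_demo/Data/sub/file2
cd /tmp/prune_demo

# Same expression shape as in the log above: prune Data/data, print the rest.
# Prints ".", "./keep" and "./keep/file1" (order may vary); nothing under
# ./Data appears, and find never descends into it.
find . -noleaf \( -name Data -prune -o -name data -prune \) -o -print
```

If the demo prints the expected set, the expression itself is fine; drives spinning up would then point at cache eviction (memory pressure) rather than at the find arguments.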


All I can say is that, as far as I can perceive, it works fine for me.

 

You have it running under 6b6?

If so, what did you have to do to get it working?

I would love to know this too please.

 

I have tried launching it from telnet after the server has booted, as suggested by others; however, it says:

ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

./cache_dirs: xmalloc: make_cmd.c:100: cannot allocate 365 bytes (94208 bytes allocated)

 

My go script is:

#!/bin/bash

# Start the Management Utility

/usr/local/sbin/emhttp &

/boot/unmenu/uu

 

cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c

 

/boot/cache_dirs  -d  5  -m  3  -M  5  -w

 

What should our go scripts say now on 6.0-beta6 please?

 

Or if it won't work from go what should we type at the telnet prompt?


I would comment out the line that tries to load from /boot/packages.    I would guess that you are loading a package that is not 64-bit compatible and is messing up the system.

The /boot/packages folder contains unmenu items.

 

Can't remember the specific reason it's in there.

 

I've just noticed there's a newer unMENU 1.6. I updated it, but I doubt that's related to cache_dirs.

 

What are people typing that have it working on 6.0-beta6 please?

 

EDIT: Removed the boot/packages line and rebooted the server and unmenu wouldn't load. Re-added it and unmenu loaded again.


The error message you are getting suggests something has overwritten the libc library.  That is why I am suggesting you should try with no plugins loaded.

Removed both unmenu lines from the go file and rebooted; however, trying to manually start cache_dirs from telnet gives the same result:

 

root@Tower:/boot# cache_dirs

ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

./cache_dirs: xrealloc: ./parse.y:3246: cannot allocate 512 bytes (94208 bytes allocated)


The error message you are getting suggests something has overwritten the libc library.  That is why I am suggesting you should try with no plugins loaded.

Removed both unmenu lines from the go file and rebooted; however, trying to manually start cache_dirs from telnet gives the same result:

 

root@Tower:/boot# cache_dirs

ps: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

sed: error while loading shared libraries: libc.so.6: failed to map segment from shared object: Cannot allocate memory

./cache_dirs: xrealloc: ./parse.y:3246: cannot allocate 512 bytes (94208 bytes allocated)

 

Try this...

 

In the cache_dirs script, find "ulimit -v 5000" and change it to "ulimit -v 15360".  I just downloaded the latest script, tried running it, and got the same messages as you.  But I had saved my old version and ran an sdiff, and the only differences were the shell change mentioned above and the ulimit value.  Changing that allowed it to run for me.
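The edit is a one-line substitution; here it is applied to a stand-in copy of the script, since the real one lives on the flash drive (e.g. /boot/cache_dirs, path assumed):

```shell
# Stand-in for the real script -- only the relevant line is reproduced here.
printf '%s\n' '#!/bin/bash' 'ulimit -v 5000' > /tmp/cache_dirs_demo

# The fix: raise the virtual-memory cap from 5000 KB to 15360 KB.
sed -i 's/ulimit -v 5000/ulimit -v 15360/' /tmp/cache_dirs_demo

grep 'ulimit' /tmp/cache_dirs_demo   # prints: ulimit -v 15360
```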

 

 

EDIT

Well, cache_dirs is running, BUT while still connected to my server via ssh I started getting:

 

awk: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory

awk: error while loading shared libraries: libdl.so.2: failed to map segment from shared object: Cannot allocate memory

 

 

So I'm guessing my fix isn't a complete fix.

 

 

 

  • 2 weeks later...

Hello, everyone!

 

I have the same problem with cache_dirs on beta5 of unRAID v6. Modifying the ulimit to 15360 doesn't seem to help much: 'Cannot allocate memory' appears once dom0's free memory gets low (about 25-30 MB), but I don't know what is actually eating so much memory.

 

P.S. Btw, my dom0_memory is 2048M.

 

Updated:

 

I just commented out the ulimit line and limited the depth of the scan using the "-d" flag. Now everything works fine.
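That working setup, expressed as a go-script line (the depth value is illustrative, not the poster's, and the script path is assumed to be /boot/cache_dirs):

```shell
# ulimit lines inside the script commented out, depth capped instead;
# "-d 4" is an example value -- tune it to your directory tree and RAM.
/boot/cache_dirs -w -d 4
```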

 

 


Is it normal for cache_dirs to show up as two PIDs in the "top" listing? (see pic)

 

When I start it from telnet, I see 2 of them show up.

If I kill the one I started (using "-q"), there is still another one running.

I can run the -q option again and it states no cache_dirs is running.

 

If I stop the array the other one that was running is then cleared and no longer shows up.

 

Once I restart the array it is also still not running.

 

However once I start it in telnet 2 of them show up with different PID's.

 

I am using v1.6.8 Unraid 6.0b6


  • 2 weeks later...

Is it normal for cache_dirs to show up as two PIDs in the "top" listing? (see pic)

 

When I start it from telnet, I see 2 of them show up.

If I kill the one I started (using "-q"), there is still another one running.

I can run the -q option again and it states no cache_dirs is running.

 

If I stop the array the other one that was running is then cleared and no longer shows up.

 

Once I restart the array it is also still not running.

 

However once I start it in telnet 2 of them show up with different PID's.

 

I am using v1.6.8 Unraid 6.0b6

I'm confused...

 

You do not "start" cache_dirs with the "-q" option.  That is how you "stop" a cache_dirs process that is already running.

(The -q option looks for a file that the running process created containing its process ID; the "-q" invocation then kills that original process via the ID it found.)
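The pidfile handshake described here is the usual shell pattern; a minimal stand-alone illustration (the pidfile name is an assumption -- cache_dirs keeps its own file):

```shell
# A stand-in long-running process records its PID at startup...
sleep 60 &
echo $! > /tmp/cache_dirs_demo.pid

# ...and a "-q"-style stop later reads the PID back and signals the process.
kill "$(cat /tmp/cache_dirs_demo.pid)"
```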

 

Cache_dirs in many cases DOES create one child process per mounted disk share.  (unless you supply an argument to cache_dirs to NOT perform that function)

These child processes purposely cause the mounted devices to be detected as "busy" when you attempt to stop the array.

This allows cache_dirs to detect those "busy" file-system messages and suspend itself while you are attempting to stop the array, so the array can be stopped cleanly.

 

The major issue people have had in the 64 bit world is the ulimit is set too low for many of them to fit their directory structure in memory.

You can fix that issue by simply commenting out the lines in the script that set "ulimit". (or deleting them entirely)

 

One person reported that the newer version of the shell no longer sets $SHELL to /bin/sh when invoked as /bin/bash,

so cache_dirs could not be started from the "go" script.  To fix that, delete the if/exec/fi lines at the top of the script (the ones originally highlighted in red):

 

#!/bin/bash

if test "$SHELL" = "/bin/sh" && test -x /bin/bash; then

    exec /bin/bash -c "$0" "$@"

fi

 

 

Joe L.

 

 


Is it normal for cache_dirs to show up as two PIDs in the "top" listing? (see pic)

 

When I start it from telnet, I see 2 of them show up.

If I kill the one I started (using "-q"), there is still another one running.

I can run the -q option again and it states no cache_dirs is running.

 

If I stop the array the other one that was running is then cleared and no longer shows up.

 

Once I restart the array it is also still not running.

 

However once I start it in telnet 2 of them show up with different PID's.

 

I am using v1.6.8 Unraid 6.0b6

I'm confused...

 

You do not "start" cache_dirs with the "-q" option.  That is how you "stop" a cache_dirs process that is already running.

(The -q option looks for a file that the running process created containing its process ID; the "-q" invocation then kills that original process via the ID it found.)

 

Cache_dirs in many cases DOES create one child process per mounted disk share.  (unless you supply an argument to cache_dirs to NOT perform that function)

These child processes purposely cause the mounted devices to be detected as "busy" when you attempt to stop the array.

This allows cache_dirs to detect those "busy" file-system messages and suspend itself while you are attempting to stop the array, so the array can be stopped cleanly.

 

The major issue people have had in the 64 bit world is the ulimit is set too low for many of them to fit their directory structure in memory.

You can fix that issue by simply commenting out the lines in the script that set "ulimit". (or deleting them entirely)

 

One person reported that the newer version of the shell no longer sets $SHELL to /bin/sh when invoked as /bin/bash,

so cache_dirs could not be started from the "go" script.  To fix that, delete the if/exec/fi lines at the top of the script (the ones originally highlighted in red):

 

#!/bin/bash

if test "$SHELL" = "/bin/sh" && test -x /bin/bash; then

    exec /bin/bash -c "$0" "$@"

fi

 

 

Joe L.

I was not starting it with "-q"; I was just saying that is how I was stopping it, yet still seeing another one running.

I was starting it just by running "cache_dirs".

 

Thanks for the input, and the fix for using it with the Go script.

I will add that in to my setup soon.

 


Glad you cleared up my confusion.  I've attached a new, updated version in the first post in this thread.  Hopefully it will work for you.

You can probably invoke it without any special options, but if it still causes issues by setting the ulimit on your system (it now sets it to 50000 on a 64-bit OS)

you can set it to anything you like with

-U NNNNNNN

Setting it to zero is special: in that case no ulimit command is executed at all, and you'll inherit the system's default setting.  ( "cache_dirs -w -U 0" )

 

The new version is 64-bit aware and also has the three lines near the top of the file (attempting to exec /bin/bash if $SHELL is not set) removed.

( They were needed long ago in an older version of unRAID. )

 

Joe L.


I could not live without this script; it makes a fundamental difference to unRAID's real-world usability.

 

And why the hell isn't this thread sticky... done :)

 

Agreed, this is a must-have in my humble opinion!

Now, one quick question, how do you get the new version of the cache_dirs to work with the plugin?

(or is there an updated version of the cache_dirs plugin?)


Agreed, this is a must-have in my humble opinion!

Now, one quick question, how do you get the new version of the cache_dirs to work with the plugin?

(or is there an updated version of the cache_dirs plugin?)

There is no supported cache_dirs plugin available at the moment.  The one that was available at one point was withdrawn at the request of Joe L. (the author of cache_dirs).


Agreed, this is a must-have in my humble opinion!

Now, one quick question, how do you get the new version of the cache_dirs to work with the plugin?

(or is there an updated version of the cache_dirs plugin?)

There is no supported cache_dirs plugin available at the moment.  The one that was available at one point was withdrawn at the request of Joe L. (the author of cache_dirs).

 

I actually just re-read that in the back posts... didn't mean to stir anything up...


I could not live without this script; it makes a fundamental difference to unRAID's real-world usability.

 

And why the hell isn't this thread sticky... done :)

 

But for a lot of people, all this does is make unRAID crash from running out of memory...


I could not live without this script; it makes a fundamental difference to unRAID's real-world usability.

 

And why the hell isn't this thread sticky... done :)

 

But for a lot of people, all this does is make unRAID crash from running out of memory...

Then the solution is simple... either have it cache less (by restricting the depth of the scan, or restricting the shares it caches),

or add more memory, or, if running 32-bit unRAID, upgrade to 64-bit unRAID, where low memory is not a limitation.

 

It has options for just about any situation.  On my older server it runs in 512 MB of RAM, but I keep it out of the "data" directory and just run it on the "movies" and "music" shares.  All the media I have is cached within the memory available.
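For reference, limiting the scan to particular shares is done with the script's include option; a sketch of the setup Joe describes (the -i flag, per the script's usage text, and the share names are assumptions):

```shell
# Cache only the media shares rather than every user share;
# the share names here are the examples from the post above.
/boot/cache_dirs -w -i movies -i music
```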

 

Joe L.

