cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up



A couple questions ...

 

(a)  How many files do you have?    [ballpark]

 

(b)  How can I tell what the current cache pressure setting is?  I read somewhere that UnRAID defaults this to 60 ... is that no longer true?

 

 

 

I had the following distribution of files on a system with 8GB of RAM:


root@unRAID:/mnt/disk1/cache/flocate# wc -l disk*.filelist 
   146485 disk1.filelist
     4979 disk10.filelist
     4270 disk11.filelist
     3013 disk12.filelist
     2377 disk13.filelist
     2530 disk14.filelist
    63326 disk15.filelist
  4761906 disk2.filelist
     7141 disk3.filelist
  2704215 disk4.filelist
   169797 disk5.filelist
        0 disk6.filelist
      610 disk7.filelist
     2647 disk8.filelist
     1601 disk9.filelist
  7874897 total
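(For anyone who doesn't keep filelists like these around, a rough equivalent, just an illustrative one-liner, is:)

# Ballpark file count for one data disk (adjust the mount point to taste):
find /mnt/disk1 -type f 2>/dev/null | wc -l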

 

 

I don't remember the default for vfs_cache_pressure.

I thought cache_dirs changed it to 100 to keep dentries around.
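For what it's worth, you can check (and change) the current value directly; these are the standard Linux procfs/sysctl knobs, not anything cache_dirs-specific:

# Show the current setting:
cat /proc/sys/vm/vfs_cache_pressure

# Same thing via sysctl:
sysctl vm.vfs_cache_pressure

# Lower values make the kernel more reluctant to reclaim dentries/inodes,
# so e.g. this favors keeping directory entries cached:
sysctl -w vm.vfs_cache_pressure=10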

Link to comment

Wow -- 7 million+ files is definitely a bunch  :)

 

One of my servers has 270,128 files (w/8GB RAM); the other 134,063 (w/4GB RAM).

 

The first is the one I'm testing with Cache_Dirs right now ... the other one is my older v4.7 server that's got all my media files on it => I plan on testing it overnight tonight if the first one shows no notable impact from Cache_Dirs.

 

Link to comment

Okay, the parity check took 7:43:59 => almost exactly 3 minutes longer than without Cache_Dirs.  [My last two checks took 7:41:01 and 7:41:00 ... so it's VERY consistent]

 

I'm impressed that there was so little difference ... although 3 minutes is 180,000 ms, so that's probably 2-3 thousand "thrash and cache" cycles.    Nevertheless, it's less of a difference than I anticipated -- clearly it's not a big deal whether or not Cache_Dirs suspends itself during parity checks with the file mix I have (those with more files would probably get a lot more benefit from this modification).

 

 

Link to comment
that's probably 2-3 thousand "thrash and cache" cycles.

 

 

"Thrash and cache" is an untrue statement now that two people have proven that there is really no thrashing going on.

In fact, it's doing exactly what it was designed to do. Walk through every single dentry in the kernel's tables to keep the entry from being expired.

Link to comment

 

"Thrash and cache" is an untrue statement now that two people have proven that there is really no thrashing going on.

In fact, it's doing exactly what it was designed to do. Walk through every single dentry in the kernel's tables to keep the entry from being expired.

 

Well, it's doing something that occupies an extra 180,000 ms  :)

I'd think anything done entirely in memory would not be adding time (hence RobJ's results) ... so the extra time would seem to be from actual disk accesses by the find commands.

 

I agree, however, that 3 extra minutes is much less impact than I expected (my guess was that it'd probably take 15-30 minutes more).    I'm far less concerned about having Cache_Dirs suspend itself now ... I still think it'd be, on balance, a good thing; but clearly it's not as much of an impact as I had assumed.

 

Based on RobJ's results and mine, I'd guess he has somewhat fewer files than the 270,000+ my system has; and that up to some point there's virtually no impact.  [Clearly where this point is depends on more than just the # of files ... i.e. amount of memory, size of other assigned buffers, etc.]

 

In any event, I'm very pleased with the results.

 

Link to comment

I estimate the files I have cached number just under 6000; a lot to me, but obviously trivial compared with others here.

 

My results:

 

* 25247 seconds - parity check with CacheDirs, network disconnected

* 25462 seconds - parity check without CacheDirs, network disconnected

* 25532 seconds - parity check without CacheDirs, network connected, 2 CPU stalls

* 25335 seconds - parity check with CacheDirs, network connected

* 25740 seconds - parity check without CacheDirs, network connected, 3 CPU stalls, backup to server ran

 

Clearly, almost insignificant differences, and I cannot see any thrashing impact from CacheDirs running.  A little thrashing may be evident in the final result, when a 4am backup to my unRAID server ran.  When the network is connected, there were 3 to 5 refreshes of the unRAID web page, plus the constant (5-minute) polling of server folders by SageTV, which may explain the slightly longer times for the network-connected results.

 

An additional note, after the second test and after all drives had spun down, with no CacheDirs running, I reconnected the network cable, partly expecting to see the drives spin up once SageTV checked all of its server folders.  None of them did.  So I forced SageTV to do extra checking for any changes in video folders, and again not one drive spun up.  That tells me that the entire parity check process had not caused the loss of a single dentry, even without CacheDirs to protect them.  I had to tell SageTV to view a video (just to make sure it was working!), and one drive spun up.

 

Another note, rather odd: I have never seen a single CPU stall error with call trace before.  Several happened on both runs (all on CPU #1, all within the first 20 minutes) with the network connected (obviously normal!) but no CacheDirs (not normal for me).  I always have CacheDirs running.  It is tempting to think that CacheDirs not only protects my dentries, but also protects me from CPU stalls!  I have no idea what to conclude from that.

Link to comment

It is tempting to think that CacheDirs not only protects my dentries, but also protects me from CPU stalls!  I have no idea what to conclude from that.

 

Definitely strange ... perhaps the every-10-seconds CPU activity to do its buffer checks is enough "extra" activity to prevent whatever is causing those stalls.

 

With only 6,000 files cached, I'm much less surprised at your results.

 

Link to comment

What is being described here is not necessarily thrashing, at least not until RobJ's backup starts to interfere with what cache_dirs is doing.

 

In computer science, thrashing occurs when a computer's virtual memory subsystem is in a constant state of paging, rapidly exchanging data in memory for data on disk, to the exclusion of most application-level processing.[1] This causes the performance of the computer to degrade or collapse. The situation may continue indefinitely until the underlying cause is addressed.

 

In my system, cache_dirs created an environment that was wildly out of control and useless.

Now that's thrashing.

 

What may serve everyone with various needs is to have cache_dirs check for the existence of a flag file.

If this flag file exists, the find is skipped until the next interval.

 

If this flag file's size is greater than 0, read it and use its contents as the argument to sleep.

This way we can set an estimate of how long it should pause.

 

i.e. here are some example scriptlets:

 

#!/bin/bash

[ ${DEBUG:=0} -gt 0 ] && set -x -v


FLAGFILE=/tmp/cache_dirs.sleepflag
NORMAL_INTERVAL=${NORMAL_INTERVAL:=60}

if [ -s ${FLAGFILE} ]
   then sleep $(<${FLAGFILE})
   else if [ -e ${FLAGFILE} ]
           then sleep ${NORMAL_INTERVAL}
        fi
fi

 

If the flag file exists with no size:

root@rgclws:/home/rcotrone/sh # DEBUG=3 NORMAL_INTERVAL=10 ./sleepflag.sh       


FLAGFILE=/tmp/cache_dirs.sleepflag
+ FLAGFILE=/tmp/cache_dirs.sleepflag
NORMAL_INTERVAL=${NORMAL_INTERVAL:=60}
+ NORMAL_INTERVAL=10

if [ -s ${FLAGFILE} ]
   then sleep $(<${FLAGFILE})
   else if [ -e ${FLAGFILE} ]
           then sleep ${NORMAL_INTERVAL}
        fi
fi
+ '[' -s /tmp/cache_dirs.sleepflag ']'
+ '[' -e /tmp/cache_dirs.sleepflag ']'
+ sleep 10

 

If the flag file exists with a size:

root@rgclws:/home/rcotrone/sh # echo 15 > /tmp/cache_dirs.sleepflag 
root@rgclws:/home/rcotrone/sh # DEBUG=3 NORMAL_INTERVAL=10 ./sleepflag.sh 


FLAGFILE=/tmp/cache_dirs.sleepflag
+ FLAGFILE=/tmp/cache_dirs.sleepflag
NORMAL_INTERVAL=${NORMAL_INTERVAL:=60}
+ NORMAL_INTERVAL=10

if [ -s ${FLAGFILE} ]
   then sleep $(<${FLAGFILE})
   else if [ -e ${FLAGFILE} ]
           then sleep ${NORMAL_INTERVAL}
        fi
fi
+ '[' -s /tmp/cache_dirs.sleepflag ']'
<${FLAGFILE}
+ sleep 15

 

So now, rather than forcing cache_dirs to calculate the wait time itself, "for now" we can have the presence of a flag file do that.

 

Later on, we can have something else that monitors parity status drop this flag somewhere and/or possibly fill it in with a value.

 

It would be good for backups to drop a known duration into this file also.
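As a rough sketch of that idea (my illustration, not working code for any particular release): assuming your unRAID build reports the parity position as an mdResyncPos=N line in /proc/mdstat, which you should verify on your own system, a watcher could keep the flag populated for the duration of a parity check:

#!/bin/bash
# Hypothetical watcher: while a parity check is running, keep the
# sleepflag populated so cache_dirs pauses; remove it otherwise.
# ASSUMPTION: this build exposes "mdResyncPos=N" in /proc/mdstat.
FLAGFILE=/tmp/cache_dirs.sleepflag
PAUSE_SECONDS=${PAUSE_SECONDS:=300}   # how long cache_dirs should sleep

while true; do
   pos=$(grep -o 'mdResyncPos=[0-9]*' /proc/mdstat 2>/dev/null | head -n1 | cut -d= -f2)
   if [ "${pos:-0}" -gt 0 ]
      then echo ${PAUSE_SECONDS} > ${FLAGFILE}   # parity check in progress
      else rm -f ${FLAGFILE}                     # back to the normal cadence
   fi
   sleep 60
done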

Link to comment

I'm well aware of what thrashing is ... it can be caused by a number of factors, not just paging in the virtual memory system (thrashing can be an issue even on systems that don't use virtual memory).

 

Perhaps it's a bit of an overstatement to call every disk access that Cache_Dirs makes during a parity check "thrashing" ... but technically that's what it is => any access that causes excess head movement is thrashing the disk, even if only a little bit.

 

NOT complaining, by the way ... if Cache_Dirs only causes 2-3 thousand "thrashes" during a 7 hour parity check, that is VERY GOOD.  No complaints at all.

 

Link to comment

Well, it's doing something that occupies an extra 180,000 ms  :)

I'd think anything done entirely in memory would not be adding time (hence RobJ's results) ... so the extra time would seem to be from actual disk accesses by the find commands.

Not entirely true.  You are using CPU cycles, the processes are all sharing the same CPU on a time-slice basis, and the I/O requests are still being queued through the disk buffer cache, which has to be read to return the buffer even when nothing is read from a physical disk...

In any event, I'm very pleased with the results.

Good.
Link to comment
  • 3 weeks later...

Playing with cache_dirs and finding something strange.

 

I start it with:

cache_dirs -p 200 -i Specials -i UnArchived -i FanArt

 

It runs fine for a while, and I can see the processes:

root@Tower:/root -> ps -ef | grep cache
root     14369     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14370     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14372     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14373     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14375     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14376     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14377     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14378     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14379     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14380     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14381     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14382     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14383     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14384     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14385     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14386     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14387     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14388     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14389     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14390     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
root     14472     1  0 10:49 pts/1    00:00:00 /bin/bash /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt

 

After a while, though, if I check again, it appears that cache_dirs has died.

There's nothing in the syslog to tell me what happened.

 

If I do a cache_dirs -q, it tells me that #### is not running.

 

unRAID version: 5.0-beta11

VM

4GB RAM

 

Any thoughts?

 

 

 

 

Link to comment

A little more info: this did not show up in the syslog, but it did on the screen.

cache_dirs had been running approximately 3.5 hours at this failure:

 

/boot/scripts/cache_dirs: xmalloc: execute_cmd.c:3599: cannot allocate 72 bytes (901120 bytes allocated)
/boot/scripts/cache_dirs: line 449: [: : integer expression expected
/boot/scripts/cache_dirs: xmalloc: execute_cmd.c:578: cannot allocate 305 bytes (901120 bytes allocated)

 

Line 449 is the if statement:

  num_dirs=`find /mnt/disk[1-9]* /mnt/cache -type d -maxdepth 0 -print 2>/dev/null|wc -l`
  if [ "$num_dirs" -eq 0 ]
  then
    # array is not started, sleep and look again in 10 seconds.
    sleep 10
    continue
  fi
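That matches the symptom: when bash can no longer allocate memory, the command substitution fails and $num_dirs comes back empty, so the numeric test at line 449 chokes on an empty string. A defensive tweak (my suggestion, not part of the stock script) would be to coerce anything non-numeric to 0 before testing it:

num_dirs=`find /mnt/disk[1-9]* /mnt/cache -type d -maxdepth 0 -print 2>/dev/null|wc -l`
# If the substitution failed (e.g. out of memory), num_dirs may be empty;
# treat that the same as "array not started" instead of erroring out.
case "$num_dirs" in
  ''|*[!0-9]*) num_dirs=0 ;;
esac
if [ "$num_dirs" -eq 0 ]
then
  # array is not started, sleep and look again in 10 seconds.
  sleep 10
  continue
fi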

Link to comment

I'm running the following script in the background so that I can get a better feel for what is happening and how frequently it happens.

 

#!/bin/bash

# Every 10 minutes, check whether cache_dirs is still running;
# if it has died, log the memory state and restart it.
while true; do
   RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
   if [ ${RUNNING} -eq 0 ] ; then
      # Record free/low/high memory at the moment of the restart.
      free -l >> /var/log/syslog
      /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
   fi
   sleep 600
done

 

At least this way, when cache_dirs dies, I start up another one (within 10 minutes) and log the output of free -l to the syslog.

 

 

Link to comment

My cache_dirs seems to be ignoring exclude rules :(  My drives haven't spun down in days, and I just realized what the problem was.  This is what I have in my go script:

/boot/cache_dirs -e "z_*" -e "personal" -e "plugins" -w

 

But as you can see, it's scanning a share beginning with z_ :(  Those are massive Windows and MacBook backups with millions of files that I don't want cached.

 

root@Tower:~# lsof | grep mnt
cache_dir 3156 root cwd DIR 0,1 0 5676 /mnt/disk1
find 16192 root cwd DIR 9,1 360 3441818 /mnt/disk1/z_old_macbook/Time To Pretend/.AppleDouble
sleep 16655 root cwd DIR 0,1 0 5676 /mnt/disk1 

 

Is there something wrong with my setup?  Please help!!  Thank you!

Link to comment
  • 4 weeks later...

I have been using the Cache_Dirs plugin, but in order to install Tom's new WEB-GUI, he requested we delete all plugins from /boot/plugins. That left me without the cache_dirs plugin.

What does this have to do with Joe L's cache_dirs script?

 

I thought I'd go ahead and use his script directly and run it manually. So I downloaded it from the link in the OP, and after extracting it and FTPing it over to my unRAID box, I had trouble running it; it failed with the error "/bin/bash^M: bad interpreter: No such file or directory."

 

I looked at the file using MCEdit and, sure enough, there were a bunch of ^M's at the end of each line. That means that somewhere along the way a Windows text editor modified the file and added the ^M's as carriage returns (CR) at the end of each line.

 

I tried a number of ways to correct it, but what finally worked was a tip from Joe L. (octal '\15' is the carriage return; '\32' is the DOS end-of-file character):

tr -d '\15\32' < cache_dirs > cache_dirs-1

A link to the article:

http://lime-technology.com/forum/index.php?topic=911.0;nowap Reply #7

 

That fixed it and I could run the script. BTW, I run Ubuntu Linux on my workstation and didn't edit the file in any way prior to copying it to my unRAID server.

 

So, is there a problem with the format of the file at the download link?

 

Thanks!!

 

Link to comment

Not that I'm aware of...  But which download link are you referring to?

 

Thanks, Joe L., for writing!  The link is at the bottom of your original post in this thread, shown as cache_dirs.zip:

 

http://lime-technology.com/forum/index.php?action=dlattach;topic=4500.0;attach=14315

 

I appreciate the help!

Nothing at all wrong with the zip file. It has no ms-dos carriage returns in it. (I just downloaded it and un-zipped it on my server)
Link to comment

Nothing at all wrong with the zip file. It has no ms-dos carriage returns in it. (I just downloaded it and un-zipped it on my server)

 

Hmmm, I wonder why it's doing it on my system? Thanks for checking it out! At least it gave me the opportunity to learn about tr (truncate)!

tr = "translate"
Link to comment

I have noticed on my system that cache_dirs seems to die about every 4-6 hours.

 

How do I know this? I have a script that checks if cache_dirs is running, and if not, starts it again.

#!/bin/bash

while [  true ]; do
   RUNNING=`ps -ef | grep cache_dirs | grep -v grep | grep -v check_cache_dirs.sh | wc -l`
   if [ ${RUNNING} -eq 0 ] ; then
      free -l >> /var/log/syslog
      /boot/scripts/cache_dirs -p 200 -i Specials -i UnArchived -i FanArt
   fi
   sleep 600
done

 

 

 

A snippet from the syslog:

Wed Aug 7 16:45:01 EDT 2013
             total       used       free     shared    buffers     cached
Mem:       4145916    4016680     129236          0     141856    3277628
Low:        865076     744028     121048
High:      3280840    3272652       8188
-/+ buffers/cache:     597196    3548720
Swap:            0          0          0
Aug  7 16:57:40 Tower cache_dirs: ==============================================
Aug  7 16:57:40 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
Aug  7 16:57:40 Tower cache_dirs: vfs_cache_pressure=200
Aug  7 16:57:40 Tower cache_dirs: max_seconds=10, min_seconds=1
Aug  7 16:57:40 Tower cache_dirs: max_depth=9999
Aug  7 16:57:40 Tower cache_dirs: command=find -noleaf
Aug  7 16:57:40 Tower cache_dirs: version=1.6.5
Aug  7 16:57:40 Tower cache_dirs: ---------- caching directories ---------------
Aug  7 16:57:40 Tower cache_dirs: FanArt
Aug  7 16:57:40 Tower cache_dirs: Specials
Aug  7 16:57:40 Tower cache_dirs: UnArchived
Aug  7 16:57:40 Tower cache_dirs: ----------------------------------------------
Aug  7 16:57:40 Tower cache_dirs: cache_dirs process ID 4071 started, To terminate it, type: cache_dirs -q
Wed Aug 7 17:45:01 EDT 2013
Wed Aug 7 18:45:01 EDT 2013
Wed Aug 7 19:45:01 EDT 2013
Wed Aug 7 20:45:01 EDT 2013
Wed Aug 7 21:45:01 EDT 2013
             total       used       free     shared    buffers     cached
Mem:       4145916    4024564     121352          0     161140    3287004
Low:        865076     751516     113560
High:      3280840    3273048       7792
-/+ buffers/cache:     576420    3569496
Swap:            0          0          0
Aug  7 21:45:13 Tower cache_dirs: ==============================================
Aug  7 21:45:13 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
Aug  7 21:45:13 Tower cache_dirs: vfs_cache_pressure=200
Aug  7 21:45:13 Tower cache_dirs: max_seconds=10, min_seconds=1
Aug  7 21:45:13 Tower cache_dirs: max_depth=9999
Aug  7 21:45:13 Tower cache_dirs: command=find -noleaf
Aug  7 21:45:13 Tower cache_dirs: version=1.6.5
Aug  7 21:45:13 Tower cache_dirs: ---------- caching directories ---------------
Aug  7 21:45:13 Tower cache_dirs: FanArt
Aug  7 21:45:13 Tower cache_dirs: Specials
Aug  7 21:45:13 Tower cache_dirs: UnArchived
Aug  7 21:45:13 Tower cache_dirs: ----------------------------------------------
Aug  7 21:45:14 Tower cache_dirs: cache_dirs process ID 24388 started, To terminate it, type: cache_dirs -q
Wed Aug 7 22:45:01 EDT 2013
Wed Aug 7 23:45:01 EDT 2013
Thu Aug 8 00:45:01 EDT 2013
Thu Aug 8 01:45:01 EDT 2013
             total       used       free     shared    buffers     cached
Mem:       4145916    3085692    1060224          0     152480    2356952
Low:        865076     676180     188896
High:      3280840    2409512     871328
-/+ buffers/cache:     576260    3569656
Swap:            0          0          0
Aug  8 02:20:20 Tower cache_dirs: ==============================================
Aug  8 02:20:20 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
Aug  8 02:20:20 Tower cache_dirs: vfs_cache_pressure=200
Aug  8 02:20:20 Tower cache_dirs: max_seconds=10, min_seconds=1
Aug  8 02:20:20 Tower cache_dirs: max_depth=9999
Aug  8 02:20:20 Tower cache_dirs: command=find -noleaf
Aug  8 02:20:20 Tower cache_dirs: version=1.6.5
Aug  8 02:20:20 Tower cache_dirs: ---------- caching directories ---------------
Aug  8 02:20:20 Tower cache_dirs: FanArt
Aug  8 02:20:20 Tower cache_dirs: Specials
Aug  8 02:20:20 Tower cache_dirs: UnArchived
Aug  8 02:20:20 Tower cache_dirs: ----------------------------------------------
Aug  8 02:20:20 Tower cache_dirs: cache_dirs process ID 24880 started, To terminate it, type: cache_dirs -q
Thu Aug 8 02:45:01 EDT 2013
Thu Aug 8 03:45:01 EDT 2013
Thu Aug 8 04:45:01 EDT 2013
Thu Aug 8 05:45:01 EDT 2013
Thu Aug 8 06:45:01 EDT 2013
             total       used       free     shared    buffers     cached
Mem:       4145916    4011360     134556          0     106820    3332164
Low:        865076     749720     115356
High:      3280840    3261640      19200
-/+ buffers/cache:     572376    3573540
Swap:            0          0          0
Aug  8 06:57:54 Tower cache_dirs: ==============================================
Aug  8 06:57:54 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
Aug  8 06:57:54 Tower cache_dirs: vfs_cache_pressure=200
Aug  8 06:57:54 Tower cache_dirs: max_seconds=10, min_seconds=1
Aug  8 06:57:54 Tower cache_dirs: max_depth=9999
Aug  8 06:57:54 Tower cache_dirs: command=find -noleaf
Aug  8 06:57:54 Tower cache_dirs: version=1.6.5
Aug  8 06:57:54 Tower cache_dirs: ---------- caching directories ---------------
Aug  8 06:57:54 Tower cache_dirs: FanArt
Aug  8 06:57:54 Tower cache_dirs: Specials
Aug  8 06:57:54 Tower cache_dirs: UnArchived
Aug  8 06:57:54 Tower cache_dirs: ----------------------------------------------
Aug  8 06:57:55 Tower cache_dirs: cache_dirs process ID 6440 started, To terminate it, type: cache_dirs -q
Thu Aug 8 07:45:01 EDT 2013
Thu Aug 8 08:45:01 EDT 2013
Thu Aug 8 09:45:01 EDT 2013
Thu Aug 8 10:45:01 EDT 2013
Thu Aug 8 11:45:01 EDT 2013
             total       used       free     shared    buffers     cached
Mem:       4145916    3030892    1115024          0     128020    2297612
Low:        865076     701360     163716
High:      3280840    2329532     951308
-/+ buffers/cache:     605260    3540656
Swap:            0          0          0
Aug  8 11:55:29 Tower cache_dirs: ==============================================
Aug  8 11:55:29 Tower cache_dirs: command-args=-p 200 -i Specials -i UnArchived -i FanArt
Aug  8 11:55:29 Tower cache_dirs: vfs_cache_pressure=200
Aug  8 11:55:29 Tower cache_dirs: max_seconds=10, min_seconds=1
Aug  8 11:55:29 Tower cache_dirs: max_depth=9999
Aug  8 11:55:29 Tower cache_dirs: command=find -noleaf
Aug  8 11:55:29 Tower cache_dirs: version=1.6.5
Aug  8 11:55:29 Tower cache_dirs: ---------- caching directories ---------------
Aug  8 11:55:29 Tower cache_dirs: FanArt
Aug  8 11:55:29 Tower cache_dirs: Specials
Aug  8 11:55:29 Tower cache_dirs: UnArchived
Aug  8 11:55:29 Tower cache_dirs: ----------------------------------------------
Aug  8 11:55:29 Tower cache_dirs: cache_dirs process ID 14707 started, To terminate it, type: cache_dirs -q
Thu Aug 8 12:45:01 EDT 2013

 

Any thoughts on what might be killing cache_dirs and how I should proceed?

 

Link to comment
  • 2 weeks later...

So, for lack of a more technical approach, I am trying the tidy-up route.

 

This command, added to my .bashrc,

 

alias duf='du -a | cut -d/ -f2 | sort | uniq -c | sort -nr'

 

allows me to count the entries under each top-level directory.
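For example, run from the top of a disk (the path here is just an illustration):

cd /mnt/disk1
duf     # counts entries under each top-level directory, biggest first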

 

Have to say, I was surprised by some of the numbers.

Link to comment

Some limit is being hit on my secondary system with cache_dirs, meaning the disks literally never spin down now.

 

I don't know what specific limit I hit, but simply by finding folders where large numbers of files could be compressed into a smaller set, the problem went away.

 

This is not ideal, as for me at least there is a hard limit (which arrives without warning) beyond which cache_dirs changes from a tool that helps keep disks spun down into a tool that ensures the disks always stay spun up.

 

We need to find a way to locate these limits and warn in some quantifiable way.
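One possible starting point (a sketch of mine, not an existing cache_dirs feature): the kernel reports the current size of the dentry cache in /proc/sys/fs/dentry-state, so a periodic check could warn before a system reaches whatever its practical limit turns out to be:

#!/bin/bash
# Sketch: warn via syslog when the dentry cache passes a chosen ceiling.
# The first two fields of /proc/sys/fs/dentry-state are nr_dentry and nr_unused.
THRESHOLD=${THRESHOLD:=1000000}   # example ceiling, tune per system

read nr_dentry nr_unused rest < /proc/sys/fs/dentry-state
if [ "${nr_dentry:-0}" -gt ${THRESHOLD} ]
   then logger "dentry-watch: ${nr_dentry} dentries cached (${nr_unused} unused), nearing a possible limit"
fi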

Link to comment
