cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up



I can update the logrotate config in the next release, adding the missingok option. I have attached a suggested logrotate file for cachedirs.

On 10/31/2018 at 7:25 PM, jowe said:

2018.10.31 19:09:59 Executed find in (0s) 00.16s, wavg=00.21s Idle____________  depth 9999 slept 10s Disks idle before/after 11s/11s suc/fail cnt=10/11/0 mode=3 scan_tmo=30s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU= 9%, filecount[9999]=214446

 

2018.10.31 19:10:09 Executed find in (30s) 30.02s, wavg=00.21s NonIdleTooSlow__  depth 9999(timeout 30s:Error=1) slept 10s Disks idle before/after 9s/0s suc/fail cnt=11/0/1 mode=3 scan_tmo=30s maxCur=9999 maxWeek=9999 isMaxDepthComputed=1 CPU=10%, filecount[9999]=214446

 

2018.10.31 19:10:39 Executed find in (0s) 00.10s, wavg=00.20s Idle____________  depth 4 slept 1s Disks idle before/after 0s/0s suc/fail cnt=12/1/0 mode=3 scan_tmo=30s maxCur=5 maxWeek=9999 isMaxDepthComputed=1 CPU=66%, filecount[4]=117348

So the first scan went smoothly, everything's cached, and we do lightning-fast scans. Then suddenly a scan takes longer than expected and is killed. The default timeout for a stable scan is the 30s you have in the log file. 'depth 9999(timeout 30s:Error=1)' means infinite depth, but the scan timed out after 30s (i.e. we killed the process). The third scan was reduced to depth 4. It surprises me that it went that far down, but I have made it adaptive based on the number of files: 117348 @ depth 4 vs 214446 at infinite depth.
You can try disabling adaptive scan in the settings, if you so please.
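
To make the timeout and adaptive-depth behaviour concrete, here is a minimal sketch of the idea (not the plugin's actual code; the path, the depth of 4, and the 30s timeout are illustrative values borrowed from the log above):

    # Scan the share at unlimited depth, but give up after the stable-scan timeout.
    timeout 30s find /mnt/user/Movies -noleaf > /dev/null
    if [ $? -eq 124 ]; then
        # GNU timeout exits with 124 when it had to kill the command,
        # so retry with a shallower depth that can finish inside the window.
        find /mnt/user/Movies -maxdepth 4 -noleaf > /dev/null
    fi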

I experience this behaviour when other stuff puts a load on the system (CPU), or when the cache is lost.

 

cache_dirs

Edited by Alex R. Berg
attached wrong file, now updated file
Link to comment

One of my cached folders has a space in it. It's showing up weird in the line "Setting Included dirs:" as "TV\,Shows"

 

Perhaps that's just a logging bug and doesn't impact behavior? Just checking.

 

Seeing a lot more spin-ups (and far fewer spin-downs) recently. I'll wake up in the morning expecting to see the whole array spun down, but that's not the case any longer. It changed in the past week or two. Maybe since 6.6.3, or since some of the recent plugin updates? Unsure.

 

cache_dirs version 2.2.2
No Memory ulimit applied
Setting cache_pressure=10
Arguments=-i Movies -i Nate -i Pictures -i TV\ Shows -i Videos -l on -U 0
Max Scan Secs=10, Min Scan Secs=1
Scan Type=adaptive
Min Scan Depth=4
Max Scan Depth=none
Use Command='find -noleaf'
---------- Caching Directories ---------------
Movies Nate Pictures TV Shows Videos
----------------------------------------------
Setting Included dirs: Movies,Nate,Pictures,TV\,Shows,Videos
Setting Excluded dirs:
min_disk_idle_before_restarting_scan_sec=60
scan_timeout_sec_idle=150
scan_timeout_sec_busy=30
scan_timeout_sec_stable=30
frequency_of_full_depth_scan_sec=604800
cache_dirs started

 

Edited by NNate
Link to comment
17 hours ago, Alex R. Berg said:

I can update the logrotate config in the next release, adding the missingok option. I have attached a suggested logrotate file for cachedirs.

So the first scan went smoothly, everything's cached, and we do lightning-fast scans. Then suddenly a scan takes longer than expected and is killed. The default timeout for a stable scan is the 30s you have in the log file. 'depth 9999(timeout 30s:Error=1)' means infinite depth, but the scan timed out after 30s (i.e. we killed the process). The third scan was reduced to depth 4. It surprises me that it went that far down, but I have made it adaptive based on the number of files: 117348 @ depth 4 vs 214446 at infinite depth.
You can try disabling adaptive scan in the settings, if you so please.

I experience this behaviour when other stuff puts a load on the system (CPU), or when the cache is lost.

cache_dirs

Hi, thanks for your reply Alex.

 

I have tried a lot of settings (increasing timeouts, adaptive/fixed, and so on), but the disks never go to sleep.

 

Right now it's running with the settings below, and if I understand the log correctly it should increase the "Disks idle before/after", but it's at 1s all the time. If I disable the plugin, the disks go to sleep as they should. With these settings it doesn't seem to time out, at least.

 

JoWe

 

EDIT:

"After some time it starts to increase the "Disks idle before/after" But resets the time.

 

2018.11.03 10:17:42 Executed find in (0s) 00.17s, wavg=00.17s   depth 10 slept 1s Disks idle before/after 115s/115s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=32%, filecount[10]=214712


2018.11.03 10:17:43 Executed find in (0s) 00.92s, wavg=00.24s   depth 10 slept 1s Disks idle before/after 116s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=33%, filecount[10]=214712


2018.11.03 10:17:45 Executed find in (0s) 00.17s, wavg=00.24s   depth 10 slept 1s Disks idle before/after 1s/1s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=22%, filecount[10]=214712"

 

 

cache_dirs version 2.2.2
Setting Memory ulimit to 100000
Setting cache_pressure=10
Arguments=-S -i media -X 300 -Y 60 -Z 120 -U 100000 -l on -D 10
Max Scan Secs=10, Min Scan Secs=1
Scan Type=fixed
Max Scan Depth=10
Use Command='find -noleaf'
---------- Caching Directories ---------------
media
----------------------------------------------
Setting Included dirs: media
Setting Excluded dirs:
min_disk_idle_before_restarting_scan_sec=60
scan_timeout_sec_idle=300
scan_timeout_sec_busy=60
scan_timeout_sec_stable=120
frequency_of_full_depth_scan_sec=604800
cache_dirs started

 

2018.11.03 09:08:42 Executed find in (195s) 195.97s, wavg=195.97s   depth 10 slept 0s Disks idle before/after 305s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=14%, filecount[10]=214712


2018.11.03 09:11:59 Executed find in (44s) 44.14s, wavg=195.97s   depth 10 slept 1s Disks idle before/after 1s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=11%, filecount[10]=214712

 

6 min later


2018.11.03 09:17:17 Executed find in (41s) 41.26s, wavg=00.16s   depth 10 slept 1s Disks idle before/after 1s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=10%, filecount[10]=214712

 


2018.11.03 09:18:00 Executed find in (9s) 09.10s, wavg=00.16s   depth 10 slept 1s Disks idle before/after 1s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=16%, filecount[10]=214712


2018.11.03 09:18:10 Executed find in (8s) 08.64s, wavg=00.16s   depth 10 slept 1s Disks idle before/after 1s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=17%, filecount[10]=214712


2018.11.03 09:18:20 Executed find in (6s) 06.19s, wavg=00.16s  depth 10 slept 1s Disks idle before/after 2s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=17%, filecount[10]=214712


2018.11.03 09:18:27 Executed find in (15s) 15.04s, wavg=00.16s   depth 10 slept 1s Disks idle before/after 1s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=300s maxCur=10 maxWeek=10 isMaxDepthComputed=1 CPU=13%, filecount[10]=214712

Edited by jowe
Link to comment
7 hours ago, BRiT said:

Not a bug at all. That's how spaces or special characters are escaped on Linux command lines. It's that or surrounding everything in double quotes.

I'm aware you escape spaces with a backslash, but a backslash comma doesn't seem correct: "TV\,Shows"
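
For reference, the backslash before the space is ordinary shell escaping; the two hypothetical invocations below hand the identical share name to cache_dirs. Whether the extra backslash before the comma in the "Setting Included dirs:" line is purely cosmetic is a separate question.

    # Both lines pass the single argument "TV Shows" to cache_dirs.
    cache_dirs -i TV\ Shows -l on
    cache_dirs -i "TV Shows" -l on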

Link to comment

 

@jowe

Try cache_pressure 1. You might also debug with fewer files, like depth 2 or something ridiculously low.

 

Maybe you have too little memory.

 

Also, next time please include the diagnostics: run 'cache_dirs -L'. Possibly remove the syslog if you don't want to risk giving away sensitive data, if any. It might make things a bit clearer for me.

 

> Disks idle before/after 1s/0s

This is a good indication that cache_dirs possibly accessed your disks, though of course sometimes it's just other apps.

 

I trust you verified the disks go to sleep when cache_dirs isn't running.
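
(A quick way to check that from the console, assuming hdparm is available, which it normally is on unRAID; the -C query reads the drive's power state without waking it, and the device names are only examples:)

    # Print the power state of each array disk without spinning it up.
    for d in /dev/sd[b-e]; do
        echo -n "$d: "
        hdparm -C "$d" | grep 'drive state'
    done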

 

> "After some time it starts to increase the "Disks idle before/after" 

That means that at that point in time, at least, you had enough memory and everything was cached. With cache_pressure 10 I often see dirs get evicted from the cache.

 

Best Alex

Link to comment

May I ask a question about the cache pressure setting? I have now set it to 0, which is the maximum if I understand the help correctly.

 

May I ask if this is dangerous for stability? I have ~18750 files currently, with fixed depth, min 6, max unlimited.

 

It seems to work better with pressure 0, but I don't want to overdo it ... ;)

 

I have no idea how much RAM is used by this ...
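
For anyone wondering the same: the plugin's cache-pressure setting presumably maps to the kernel's vm.vfs_cache_pressure tunable (the startup log prints "Setting cache_pressure=..."), and the RAM taken by cached directory entries lives in the kernel's slab caches. A rough, illustrative way to look at both:

    # The kernel knob the cache-pressure setting corresponds to; 0 means dentries and
    # inodes are never reclaimed under memory pressure (the kernel docs warn this can
    # lead to out-of-memory conditions if the rest of the system needs that RAM).
    sysctl vm.vfs_cache_pressure

    # Approximate memory currently held by dentry/inode caches (reclaimable slab):
    grep -E 'Slab|SReclaimable' /proc/meminfo
    slabtop -o | grep -E 'dentry|inode_cache'    # if slabtop is installed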

Link to comment

Is there a setting to adjust so that cache_dirs will NEVER spin up a disk that's spun down?  I don't want disks spinning up randomly, which is what's happening now.  It causes whatever I (and my family) am watching to freeze on screen while the disk spins up, and it also puts unnecessary wear and tear on the disks.  The whole point of spinning disks down is to reduce wear and tear, not increase it...

 

I appreciate all the help with this app, but for me it has turned into a disaster.

 

Also, the reset-to-defaults button is not working for me.  I guess I will just reinstall it again with Alex's files...

 

Thanks again and kind regards,

craigr

Link to comment
On 11/3/2018 at 4:52 PM, Alex R. Berg said:

 

@jowe

Try cache_pressure 1. You might also debug with fewer files, like depth 2 or something ridiculously low.

 

Maybe you have too little memory.

 

Also, next time please include the diagnostics: run 'cache_dirs -L'. Possibly remove the syslog if you don't want to risk giving away sensitive data, if any. It might make things a bit clearer for me.

 

> Disks idle before/after 1s/0s

This is a good indication that cache_dirs possibly accessed your disks, though of course sometimes it's just other apps.

 

I trust you verified the disks go to sleep when cache_dirs isn't running.

 

> "After some time it starts to increase the "Disks idle before/after" 

That means that at that point in time, at least, you had enough memory and everything was cached. With cache_pressure 10 I often see dirs get evicted from the cache.

 

Best Alex

Hi,

 

I have been testing for the last couple of days, and it seems that cache_dirs works, but the disks spin up much more frequently than before. I just started Windows Explorer without opening any files, and it spun up 3 disks. So from what I can tell, the latest Windows updates/versions are to blame for some of the disk spin-ups.

 

Then there is something else as well: during the night the disks spin up, and looking at the times, some of them might be random, but most of them come at :00, :15, :30.

 

 

I have tried cache_pressure 0, and the total memory consumption lies around 60% (out of 16GB). It has never hung the system.

 

The logs are attached!

 

It seems the disks go to sleep much more often with the plugin disabled.

 

Thanks

JoWe

 

 

2018.11.04 23:00:02 Executed find in (40s) 40.53s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 532s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 5%, filecount[3]=431


2018.11.04 23:15:01 Executed find in (0s) 00.05s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 859s/859s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=23%, filecount[3]=431
2018.11.04 23:15:02 Executed find in (0s) 00.05s, wavg=00.05s   depth 3 slept 1s Disks idle before/after 1s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=34%, filecount[3]=431

 

2018.11.04 23:33:36 Executed find in (0s) 00.04s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 9998s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=11%, filecount[3]=431

 

2018.11.05 00:12:31 Executed find in (0s) 00.04s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 9998s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=11%, filecount[3]=431


2018.11.05 01:00:05 Executed find in (0s) 00.06s, wavg=00.05s   depth 3 slept 1s Disks idle before/after 9998s/9998s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=41%, filecount[3]=431
2018.11.05 01:00:06 Executed find in (0s) 00.04s, wavg=00.05s   depth 3 slept 1s Disks idle before/after 1s/1s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=33%, filecount[3]=431


2018.11.05 02:00:01 Executed find in (31s) 31.08s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 9998s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 6%, filecount[3]=431


2018.11.05 02:30:01 Executed find in (8s) 08.62s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 9998s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 9%, filecount[3]=431


2018.11.05 03:15:02 Executed find in (8s) 08.70s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 9998s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 8%, filecount[3]=431

 

2018.11.05 04:01:52 Executed find in (0s) 00.04s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 9998s/9998s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 8%, filecount[3]=431
2018.11.05 04:01:53 Executed find in (0s) 00.04s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 1s/1s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 4%, filecount[3]=431

 

2018.11.05 04:15:00 Executed find in (0s) 00.04s, wavg=00.04s   depth 3 slept 1s Disks idle before/after 554s/554s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU= 6%, filecount[3]=431
2018.11.05 04:15:01 Executed find in (0s) 00.11s, wavg=00.05s   depth 3 slept 1s Disks idle before/after 0s/0s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=43%, filecount[3]=431

 

2018.11.05 04:30:01 Executed find in (6s) 06.80s, wavg=00.69s   depth 3 slept 1s Disks idle before/after 900s/7s suc/fail cnt=0/0/0 mode=2 scan_tmo=150s maxCur=3 maxWeek=3 isMaxDepthComputed=1 CPU=15%, filecount[3]=431

 

cache_dirs_diagnostics.zip

Link to comment
3 hours ago, BRiT said:

What settings do you have on Windows? If you have display thumbnails enabled then opening Windows Explorer will always cause a disk to spin up, as WinExplorer will read the contents of files. CacheDir can only cache filenames, not file contents.

 

Yes, I know that, and I don't "blame" cache_dirs for those spin-ups.

 

JoWe

 

Link to comment

After 2 days of usage with pressure 0 I'm back to where I was before the latest unRAID update(s): my drives stay down ...

 

I only have 1 spin-up at ~2 am; if I had to guess, it has something to do with the Plex server ... nvm.

 

Thanks again for this, it brought my server back to the normal behavior it's supposed to have ...

 

I just hope cache pressure 0 will not break something else ;)

Link to comment
On 11/3/2018 at 4:52 PM, Alex R. Berg said:

 

@jowe

Try cache_pressure 1. You might also debug with fewer files, like depth 2 or something ridiculously low.

 

Maybe you have too little memory.

 

Also, next time please include the diagnostics: run 'cache_dirs -L'. Possibly remove the syslog if you don't want to risk giving away sensitive data, if any. It might make things a bit clearer for me.

 

> Disks idle before/after 1s/0s

This is a good indication that cache_dirs possibly accessed your disks, though of course sometimes it's just other apps.

 

I trust you verified the disks go to sleep when cache_dirs isn't running.

 

> "After some time it starts to increase the "Disks idle before/after" 

That means that at that point in time, at least, you had enough memory and everything was cached. With cache_pressure 10 I often see dirs get evicted from the cache.

 

Best Alex

First of all, I have been using this plugin for years and it has been working great; there used to be more than 200K files in the share that I try to cache. It worked until the problems with it not starting with the system (I think).

 

I did the same this night but increased the depth to 6, and with 200K files the disks have not spun down a single time over the whole night. If I disable the plugin, the disks spin down in 15 min.

 

JoWe

 

 

 

 

cache_dirs_diagnostics.zip

Link to comment
8 hours ago, jowe said:

First of all, I have been using this plugin for years and it has been working great; there used to be more than 200K files in the share that I try to cache. It worked until the problems with it not starting with the system (I think).

 

I did the same this night but increased the depth to 6, and with 200K files the disks have not spun down a single time over the whole night. If I disable the plugin, the disks spin down in 15 min.

  

JoWe

 

 

 

 

cache_dirs_diagnostics.zip

Same here, my disks no longer spin down either. Disabling the plugin allows them to spin down. It started around the same time for me too (when the plugin would no longer start with the array).

Link to comment
1 hour ago, Fireball3 said:

Are you running the latest version?

Yeah, 2.2.2 if I remember correctly.

 

Yesterday I changed the cache pressure setting to 0, which seemed to help with the spin-ups when navigating the disks (if I manually spin them down).

 

The file activity plugin is still showing files getting opened during the night; the only thought I have is that cache_dirs is hitting them again to refresh the cache? They are files no other Docker container or plugin would otherwise touch.

Link to comment

Good to hear it's working for you, jowe. My experience is also that cache pressure is the most important parameter.

 

NNate, cache_dirs does not access files, only dirs. I'm not sure what activity monitor you are looking at, but in my experience, guesswork about which arbitrary services on my machine are accessing files is fruitless. lsof might be your friend, but I think you will be frustrated tracking that down and might spend a lot of time gaining next to nothing. But if it's fun, go for it ;)
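
A starting point, if you do want to chase it, assuming lsof is available and, optionally, inotify-tools; the mount path below is only an example:

    # List every file currently open on one array disk, with the owning process.
    lsof /mnt/disk1

    # Or watch opens/accesses live (requires inotify-tools, e.g. via NerdPack):
    inotifywait -m -r -e open,access /mnt/disk1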

Link to comment
2 hours ago, Alex R. Berg said:

Good to hear it's working for you, jowe. My experience is also that cache pressure is the most important parameter.

 

NNate, cache_dirs does not access files, only dirs. I'm not sure what activity monitor you are looking at, but in my experience, guesswork about which arbitrary services on my machine are accessing files is fruitless. lsof might be your friend, but I think you will be frustrated tracking that down and might spend a lot of time gaining next to nothing. But if it's fun, go for it ;)

I'm afraid it's not working. I meant that there is no difference between now, after the reinstall, and before: the disks never go to sleep while cache_dirs is running with 200K files. As I wrote in the reply below.

 

JoWe

 

15 hours ago, jowe said:

First of all, I have been using this plugin for years and it has been working great; there used to be more than 200K files in the share that I try to cache. It worked until the problems with it not starting with the system (I think).

 

I did the same this night but increased the depth to 6, and with 200K files the disks have not spun down a single time over the whole night. If I disable the plugin, the disks spin down in 15 min.

 

JoWe

 

 

 

 

cache_dirs_diagnostics.zip

 

Link to comment

I also had a lot of disk spin-ups since the latest unRAID releases, so I came to this plugin and now it's perfect here.

 

My settings are:

 

Nov 5 06:34:38 AlsServer cache_dirs: Stopping cache_dirs process 27693

Nov 5 06:34:39 AlsServer cache_dirs: cache_dirs service rc.cachedirs: Stopped

Nov 5 06:34:39 AlsServer cache_dirs: Arguments=-i Daten -i Media -i Temp -X 300 -Y 120 -p 0 -u -U 0 -l off -D 9999

Nov 5 06:34:39 AlsServer cache_dirs: Max Scan Secs=10, Min Scan Secs=1

Nov 5 06:34:39 AlsServer cache_dirs: Scan Type=fixed

Nov 5 06:34:39 AlsServer cache_dirs: Max Scan Depth=none

Nov 5 06:34:39 AlsServer cache_dirs: Use Command='find -noleaf'

Nov 5 06:34:39 AlsServer cache_dirs: ---------- Caching Directories ---------------

Nov 5 06:34:39 AlsServer cache_dirs: Daten

Nov 5 06:34:39 AlsServer cache_dirs: Media

Nov 5 06:34:39 AlsServer cache_dirs: Temp

Nov 5 06:34:39 AlsServer cache_dirs: ----------------------------------------------

Nov 5 06:34:39 AlsServer cache_dirs: Setting Included dirs: Daten,Media,Temp

Nov 5 06:34:39 AlsServer cache_dirs: Setting Excluded dirs:

Nov 5 06:34:39 AlsServer cache_dirs: min_disk_idle_before_restarting_scan_sec=60

Nov 5 06:34:39 AlsServer cache_dirs: scan_timeout_sec_idle=300

Nov 5 06:34:39 AlsServer cache_dirs: scan_timeout_sec_busy=120

Nov 5 06:34:39 AlsServer cache_dirs: scan_timeout_sec_stable=30

Nov 5 06:34:39 AlsServer cache_dirs: frequency_of_full_depth_scan_sec=604800

Nov 5 06:34:39 AlsServer cache_dirs: Including /mnt/user in scan

Nov 5 06:34:39 AlsServer cache_dirs: cache_dirs service rc.cachedirs: Started: '/usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -i "Daten" -i "Media" -i "Temp" -X 300 -Y 120 -p 0 -u -U 0 -l off -D 9999 2>/dev/null'

 

Like this I have 0 spin-ups left ... maybe give it a try; especially cache pressure 0 helped ...

Link to comment

I've released a new version of the plugin, no functional change. I've highlighted cache-pressure on the plugin page by moving it to the top. I couldn't figure out how to make it bold...

 

Jowe: I was suspicious about whether I was reading it correctly. I cannot see your settings because that part was cut from the log file, but it seems you're not caching the user share. Try including that. Not that I see why it should matter, but then I still don't get why some users have to scan the user share to avoid disks spinning up.
 

I've attached the cache_dirs releases archive. My 2.0 versions are missing, I don't know where they are, probably somewhere in this long thread. You have probably been using 2.1.1 or 1.6.9 earlier. You can try one of those if you wish; just stop cache_dirs and run the script manually, like in the good old days before the dynamix plugin.


If you have less free memory now than back then, that could also be a reason for the change. I can see from your logs that your system seems to flush the dirs from the memory cache, since once a minute a scan takes very long. It's also scanning with only 1s of sleep between scans. I doubt Joe's original version of the script or my previous versions would change that. I also doubt the user-share scanning will make a difference, but you can give it a try.

I've attached a test_free_memory.sh script which writes 2*4 GB of data to your /tmp drive, which is mounted to RAM on unRAID. It may crash your system if you have too little memory, though it doesn't for me. But I think I have mounted the filesystem with only 50% of memory, like this in my go script:

    # Mount tmpfs at 50% capacity of memory. Will only use memory if the filesystem is written to, and hopefully my logrotate will move it out (though it won't if it's overfilled)
    echo "Mount tmpfs"
    mount -t tmpfs -o remount,size=2% tmpfs /var/log

Syntax test for 1x4 GB: 

test_free_memory.sh 1G

 

Manually delete /tmp/testing* afterwards if you kill the script and my trap fails.
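
In case the attachment goes missing, the idea is roughly the following (a minimal sketch, not Alex's actual script; the file names, the MB-based size argument, and the 2 x 4096 MB default are illustrative):

    #!/bin/bash
    # Fill /tmp (RAM-backed on unRAID) with dummy data to see how much memory can
    # really be taken, then clean up on exit even if the script is interrupted.
    mb="${1:-4096}"                  # size of each test file in MB
    trap 'rm -f /tmp/testing.*' EXIT
    for i in 1 2; do
        dd if=/dev/zero of=/tmp/testing.$i bs=1M count="$mb"
    done
    free -m                          # show what is left while the files exist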

 

cache_dirs-releases.zip

test_free_memory.sh

Link to comment

Ran into this thread after noticing my disks had not been spinning down for quite some time.

When I turned off Cache Dir, the disks finally spun down for the first time in months.

 

I have my depth set to 20; I only need 17 to catch all my files, but I set this for headroom.

Adaptive makes no difference, however.


2018.11.07 12:32:40 Executed find in (1s) 01.70s, wavg=03.99s Idle____________  depth 20 slept 10s Disks idle before/after 1541554359s/1541554361s suc/fail cnt=8/9/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=13%, filecount[20]=73711
2018.11.07 12:32:51 Executed find in (1s) 01.75s, wavg=03.89s Idle____________  depth 20 slept 10s Disks idle before/after 1541554371s/1541554373s suc/fail cnt=9/10/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=15%, filecount[20]=73711
2018.11.07 12:33:03 Executed find in (5s) 05.43s, wavg=04.14s Idle____________  depth 20 slept 10s Disks idle before/after 1541554383s/1541554388s suc/fail cnt=10/11/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=17%, filecount[20]=73711
2018.11.07 12:33:19 Executed find in (1s) 01.09s, wavg=03.95s Idle____________  depth 20 slept 10s Disks idle before/after 1541554399s/1541554400s suc/fail cnt=11/12/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=14%, filecount[20]=73711
2018.11.07 12:33:30 Executed find in (0s) 00.09s, wavg=03.67s Idle____________  depth 20 slept 10s Disks idle before/after 1541554410s/1541554410s suc/fail cnt=12/13/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=15%, filecount[20]=73711
2018.11.07 12:33:40 Executed find in (1s) 01.41s, wavg=03.51s Idle____________  depth 20 slept 10s Disks idle before/after 1541554420s/1541554421s suc/fail cnt=13/14/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=15%, filecount[20]=73711
2018.11.07 12:33:51 Executed find in (0s) 00.71s, wavg=03.27s Idle____________  depth 20 slept 10s Disks idle before/after 1541554431s/1541554432s suc/fail cnt=14/15/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=15%, filecount[20]=73711
2018.11.07 12:34:02 Executed find in (6s) 06.39s, wavg=03.58s Idle____________  depth 20 slept 10s Disks idle before/after 1541554442s/1541554448s suc/fail cnt=15/16/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=21%, filecount[20]=73711
2018.11.07 12:34:19 Executed find in (0s) 00.09s, wavg=03.26s Idle____________  depth 20 slept 10s Disks idle before/after 1541554459s/1541554459s suc/fail cnt=16/17/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=17%, filecount[20]=73711
2018.11.07 12:34:29 Executed find in (0s) 00.09s, wavg=02.94s Idle____________  depth 20 slept 10s Disks idle before/after 1541554469s/1541554469s suc/fail cnt=17/18/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=14%, filecount[20]=73711
2018.11.07 12:34:39 Executed find in (0s) 00.09s, wavg=02.62s Idle____________  depth 20 slept 10s Disks idle before/after 1541554479s/1541554479s suc/fail cnt=18/19/0 mode=4 scan_tmo=150s maxCur=20 maxWeek=20 isMaxDepthComputed=1 CPU=15%, filecount[20]=73711
 

I've tried setting my cache pressure to 0 and the memory limit to 0, but it makes no difference for me.

Attached are my settings.

 

As per my attachments, I basically see that on every scan in the cache_dir.log file, the disks all spike with read accesses.

Even though the content on all the drives is unchanged.

Even when I shut down the VMs and Docker completely and unplug the server from the network, I still see these disk reads that coincide with the cache_dir.log find entries.

This seems to be preventing the disks from spinning down, which seems to defeat the point of this plugin :)

 

Maybe someone can point me in the right direction here.

Not all my shares span all my disks either, but my included folders in Cache Dir cover all the disks except Disk 13.
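
One way to confirm whether the reads really line up with the scans, assuming console access (the field numbers below follow the standard /proc/diskstats layout: field 4 = reads completed, field 6 = sectors read):

    # Snapshot per-device read counters, let one cache_dirs scan run, snapshot again.
    awk '{print $3, $4, $6}' /proc/diskstats > /tmp/diskstats.before
    sleep 60    # long enough for at least one scan, per the log interval above
    awk '{print $3, $4, $6}' /proc/diskstats > /tmp/diskstats.after
    diff /tmp/diskstats.before /tmp/diskstats.after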

 

 

cache.JPG

disks.JPG

Link to comment
12 hours ago, Alex R. Berg said:

I've released a new version of the plugin, no functional change. I've highlighted cache-pressure on the plugin page by moving it to the top. I couldn't figure out how to make it bold...

 

Jowe: I was suspicious about whether I was reading it correctly. I cannot see your settings because that part was cut from the log file, but it seems you're not caching the user share. Try including that. Not that I see why it should matter, but then I still don't get why some users have to scan the user share to avoid disks spinning up.
 

I've attached the cache_dirs releases archive. My 2.0 versions are missing, I don't know where they are, probably somewhere in this long thread. You have probably been using 2.1.1 or 1.6.9 earlier. You can try one of those if you wish; just stop cache_dirs and run the script manually, like in the good old days before the dynamix plugin.


If you have less free memory now than back then, that could also be a reason for the change. I can see from your logs that your system seems to flush the dirs from the memory cache, since once a minute a scan takes very long. It's also scanning with only 1s of sleep between scans. I doubt Joe's original version of the script or my previous versions would change that. I also doubt the user-share scanning will make a difference, but you can give it a try.

I've attached a test_free_memory.sh script which writes 2*4 GB of data to your /tmp drive, which is mounted to RAM on unRAID. It may crash your system if you have too little memory, though it doesn't for me. But I think I have mounted the filesystem with only 50% of memory, like this in my go script:

    # Mount tmpfs at 50% capacity of memory. Will only use memory if the filesystem is written to, and hopefully my logrotate will move it out (though it won't if it's overfilled)
    echo "Mount tmpfs"
    mount -t tmpfs -o remount,size=2% tmpfs /var/log

Syntax test for 1x4 GB: 

test_free_memory.sh 1G

 

Manually delete /tmp/testing* afterwards if you kill the script and my trap fails.

 

cache_dirs-releases.zip

test_free_memory.sh

Hi Alex, thanks for your reply.

 

I tried the new version this morning, and it has been running for a couple of hours. The disks spun down at around 7:05, after almost 2 hours. But it was still restarting a lot of times before that. Attached is a file with the log.

 

I used to have a Windows HTPC with 4GB RAM; now I'm using LibreELEC with 1GB. So I think I'm actually using less RAM now. And with cache pressure 0 the server "should" crash before releasing any memory.

 

Sorry, but I can't test the memory script right now. I'm not by my server, so a crash would not be optimal, but I can try later tonight.

I had 1.6.9 on my flash already, so I probably used that one! Started the script with "cache_dirs -p 1 -S -i media -u -U 0 -d 6". I'll get back with results!

 

Edit:
After running 1.6.9 for a couple of hours, it's acting exactly the same. So the problem is not within cache_dirs itself. But is there anything in the newer unRAID versions that could interact with cache_dirs?

There are no timestamps in this log, but it seems to be restarting as well. This log is from when it had been running for more than 1h.

 

Executed find in 1.630547 seconds, weighted avg=2.683402 seconds, now sleeping 10 seconds
Executed find in 1.629726 seconds, weighted avg=2.332453 seconds, now sleeping 10 seconds
Executed find in 1.620553 seconds, weighted avg=1.980860 seconds, now sleeping 10 seconds
Executed find in 67.436761 seconds, weighted avg=7.897477 seconds, now sleeping 9 seconds
Executed find in 1.629236 seconds, weighted avg=7.583565 seconds, now sleeping 10 seconds
Executed find in 1.702406 seconds, weighted avg=7.276872 seconds, now sleeping 10 seconds
Executed find in 1.627479 seconds, weighted avg=6.962589 seconds, now sleeping 10 seconds
Executed find in 1.615260 seconds, weighted avg=6.647173 seconds, now sleeping 10 seconds
Executed find in 1.600871 seconds, weighted avg=6.330583 seconds, now sleeping 10 seconds
Executed find in 1.657069 seconds, weighted avg=6.019647 seconds, now sleeping 10 seconds
Executed find in 1.667682 seconds, weighted avg=5.709729 seconds, now sleeping 10 seconds
Executed find in 1.620685 seconds, weighted avg=5.395157 seconds, now sleeping 10 seconds
Executed find in 1.629018 seconds, weighted avg=5.081718 seconds, now sleeping 10 seconds
Executed find in 1.663094 seconds, weighted avg=4.771544 seconds, now sleeping 10 seconds
Executed find in 1.621273 seconds, weighted avg=4.457107 seconds, now sleeping 10 seconds
Executed find in 1.668013 seconds, weighted avg=4.146966 seconds, now sleeping 10 seconds
Executed find in 1.666618 seconds, weighted avg=3.836455 seconds, now sleeping 10 seconds
Executed find in 1.632011 seconds, weighted avg=3.522318 seconds, now sleeping 10 seconds
Executed find in 1.650354 seconds, weighted avg=3.209995 seconds, now sleeping 10 seconds
Executed find in 1.659944 seconds, weighted avg=2.898463 seconds, now sleeping 10 seconds
Executed find in 1.621575 seconds, weighted avg=2.583238 seconds, now sleeping 10 seconds
Executed find in 1.618180 seconds, weighted avg=2.267734 seconds, now sleeping 10 seconds
Executed find in 1.630193 seconds, weighted avg=1.953428 seconds, now sleeping 10 seconds
Executed find in 92.141068 seconds, weighted avg=10.259159 seconds, now sleeping 9 seconds
Executed find in 1.648757 seconds, weighted avg=9.828936 seconds, now sleeping 10 seconds
Executed find in 1.668640 seconds, weighted avg=9.400513 seconds, now sleeping 10 seconds
Executed find in 1.641687 seconds, weighted avg=8.969685 seconds, now sleeping 10 seconds
Executed find in 1.638518 seconds, weighted avg=8.538486 seconds, now sleeping 10 seconds
Executed find in 1.601272 seconds, weighted avg=8.103630 seconds, now sleeping 10 seconds
Executed find in 1.643961 seconds, weighted avg=7.672838 seconds, now sleeping 10 seconds
Executed find in 1.623800 seconds, weighted avg=7.240187 seconds, now sleeping 10 seconds
Executed find in 1.599030 seconds, weighted avg=6.805387 seconds, now sleeping 10 seconds
Executed find in 1.657247 seconds, weighted avg=6.376234 seconds, now sleeping 10 seconds
Executed find in 1.614359 seconds, weighted avg=5.942863 seconds, now sleeping 10 seconds
Executed find in 1.642858 seconds, weighted avg=5.512437 seconds, now sleeping 10 seconds
Executed find in 1.615809 seconds, weighted avg=5.079333 seconds, now sleeping 10 seconds
Executed find in 1.608156 seconds, weighted avg=4.645748 seconds, now sleeping 10 seconds
Executed find in 1.609557 seconds, weighted avg=4.212576 seconds, now sleeping 10 seconds
Executed find in 1.675881 seconds, weighted avg=3.785826 seconds, now sleeping 10 seconds
Executed find in 1.684113 seconds, weighted avg=3.359739 seconds, now sleeping 10 seconds
Executed find in 1.550842 seconds, weighted avg=2.920845 seconds, now sleeping 10 seconds
Executed find in 156.650812 seconds, weighted avg=17.253713 seconds, now sleeping 9 seconds
Executed find in 170.330314 seconds, weighted avg=32.151140 seconds, now sleeping 8 seconds
Executed find in 176.063238 seconds, weighted avg=46.791227 seconds, now sleeping 7 seconds
Executed find in 160.003598 seconds, weighted avg=59.502194 seconds, now sleeping 6 seconds
Executed find in 21.547191 seconds, weighted avg=58.272766 seconds, now sleeping 7 seconds
Executed find in 1.698216 seconds, weighted avg=55.058299 seconds, now sleeping 8 seconds
Executed find in 1.725258 seconds, weighted avg=51.846139 seconds, now sleeping 9 seconds
Executed find in 3.315480 seconds, weighted avg=48.785016 seconds, now sleeping 10 seconds
Executed find in 3.101892 seconds, weighted avg=45.695388 seconds, now sleeping 10 seconds
Executed find in 1.761153 seconds, weighted avg=42.471128 seconds, now sleeping 10 seconds
Executed find in 29.715485 seconds, weighted avg=41.908531 seconds, now sleeping 10 seconds
Executed find in 25.123730 seconds, weighted avg=40.774737 seconds, now sleeping 10 seconds
Executed find in 2.516035 seconds, weighted avg=37.376083 seconds, now sleeping 10 seconds
Executed find in 1.745359 seconds, weighted avg=33.899738 seconds, now sleeping 10 seconds
Executed find in 1.808898 seconds, weighted avg=30.428957 seconds, now sleeping 10 seconds
Executed find in 1.869671 seconds, weighted avg=26.963043 seconds, now sleeping 10 seconds
Executed find in 62.108122 seconds, weighted avg=29.232880 seconds, now sleeping 9 seconds
Executed find in 9.364115 seconds, weighted avg=26.191390 seconds, now sleeping 10 seconds
Executed find in 1.932508 seconds, weighted avg=22.405517 seconds, now sleeping 10 seconds

 

 

cache_dirs_diagnostics.zip

Edited by jowe
Add info
Link to comment
