cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up


Recommended Posts

On 11/10/2018 at 10:41 PM, Alex R. Berg said:

A reboot fixed it for her when I came by later, and then she said, frustrated, 'Oh, I always forget that rebooting is an option'. That's a valuable learning experience. I think she just went up a level in the hacker class :)

I happened to watch the movie "Skyscraper" just recently.

Will also tells his wife to reboot her mobile, because a reboot solves the issue 90% of the time. :D

 

Personally, I often wear this nice T-shirt...

 

[Image: "Turning it off and on again" T-shirt]

Edited by Fireball3
Link to comment
On 11/10/2018 at 10:41 PM, Alex R. Berg said:

@jowe how much memory do you have in your machine? I'm just curious, I probably cannot help with the problem. 

I have 16GB, and about 60% of it is in use.

But I haven't had any more problems since 6.6.4, and now with 6.6.5 all cron jobs work as well :)

 

JoWe

Link to comment

@Fireball3 Neat :)

 

@jowe Awesome, that's good to hear.

 

I've released a new version with minor changes to the plugin page: fixed the help text and moved the user-share info up so it's more visible.

 

I was wondering whether the unRAID version could also have something to do with some users (like you, Fireball) having to include the user shares in the scan to avoid disk spin-ups. It's probably not that, though.

 

 

Link to comment
5 hours ago, Alex R. Berg said:

I've released a new version with minor changes to the plugin page: fixed the help text and moved the user-share info up so it's more visible. [...]

Without getting into any politics that may or may not be going on: wouldn't it be wise to publish this fork of the dynamix plugin in the Apps tab, or to issue PRs against the dynamix plugin?

Link to comment
On 11/9/2018 at 6:07 PM, nuhll said:

For me, updating the plugin and restarting only worked halfway. But since I upgraded to 6.6.5 it's working perfectly and the disks are spinning down again... but I didn't see any changes in the log that indicate anything... mysterious.

 

If I enable logging, where can I see the activity?

It's all in the help text of the Folder Caching plugin:

 

Quote

Select whether logging is enabled or disabled. Logging will be made to /var/log/cache_dirs.log and /var/log/cache_dirs_lost_cache.csv. Be wary that the cache_dirs logs are placed in memory on unRAID and are not automatically rolled.
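
A quick way to actually watch that activity from the console is to tail the log file the help text mentions (paths as quoted above; this is only a sketch, so adjust if your plugin version logs elsewhere):

# follow new scan lines as cache_dirs writes them
tail -f /var/log/cache_dirs.log

# or take a one-off look at the last few scans
tail -n 20 /var/log/cache_dirs.log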

 

Link to comment

@pluginCop

 

Certainly. Dynamix can just accept my pull request; that would be great for all those still on the old plugin. It's also the old plugin URL that's in the 'appstore'.

 

We can try pinging dynamix again and see if he responds; I think it's bonienl. The downside is that at the moment I'm more active than he is. I don't know how to issue PRs. You are most welcome to ping him if you know the way. The plugin seems stable now with unRAID 6.6.5, so it would be good to pull it into the 'original' GitHub repository.

 

I've just pinged bonienl (assuming the @ syntax pings him) here:

 

Ahh, I just noticed a 13-day-old mail in a mailbox I haven't checked for some time. Sorry, bonienl... :)

Edited by Alex R. Berg
Link to comment

Not sure exactly when this started, but sometime in the past couple of days cache_dirs' behaviour has changed drastically. When I browse to a share it takes about 60 to 90 seconds before the items start loading, and then when they do, it is in stages: maybe 100 items every 10 or 15 seconds. It takes about 5 minutes to load all the items in my media share. I would say it seems that cache_dirs isn't caching the share.

 

I only see this behavior when cache_dirs is enabled. If I disable the plugin it takes a short time for the items to load in the browser and of course some disks spin up, but the total time required is well under 60 seconds.
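
For anyone who wants to put a number on the difference, a rough comparison can be made from the console with the plugin enabled and then disabled (the share path is only an example; output is redirected so terminal rendering doesn't skew the timing):

# time a recursive listing of the share
time ls -R /mnt/user/Media > /dev/null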

 

My settings:

[Screenshot: cache_dirs plugin settings]

 

cache_dirs.log (snipped)

2018.11.14 16:17:24 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.70s   depth 6 slept 1s Disks idle before/after scan 1252s/1253s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:17:26 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1254s/1256s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:29 Executed find in (1s) (disks 0s + user 1s) 01.73s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1257s/1259s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:17:32 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1260s/1262s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:35 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1263s/1264s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:37 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1265s/1267s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:17:40 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1268s/1270s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:43 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1271s/1273s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:46 Executed find in (1s) (disks 0s + user 1s) 01.69s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1274s/1275s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:48 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1276s/1278s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:51 Executed find in (1s) (disks 0s + user 1s) 01.68s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1279s/1281s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=12%, filecount[6]=64452
2018.11.14 16:17:54 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1282s/1283s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:56 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1284s/1286s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:17:59 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1287s/1289s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:02 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1290s/1292s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:05 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1293s/1294s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:07 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1295s/1297s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:10 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1298s/1300s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:13 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1301s/1303s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:16 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1304s/1305s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:18 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1306s/1308s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:21 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1309s/1311s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:24 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1312s/1313s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:26 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1314s/1316s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:29 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1317s/1319s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:32 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1320s/1322s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:35 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1323s/1324s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:37 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1325s/1327s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:40 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1328s/1330s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:43 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1331s/1332s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:46 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1333s/1335s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=12%, filecount[6]=64452
2018.11.14 16:18:48 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1336s/1338s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:51 Executed find in (1s) (disks 0s + user 1s) 01.73s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1339s/1341s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:18:54 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1342s/1343s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:56 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1344s/1346s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:18:59 Executed find in (1s) (disks 0s + user 1s) 01.73s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1347s/1349s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=20%, filecount[6]=64452
2018.11.14 16:19:02 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1350s/1352s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:05 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1353s/1354s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:07 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1355s/1357s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:19:10 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1358s/1360s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=15%, filecount[6]=64452
2018.11.14 16:19:13 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1361s/1363s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:16 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1364s/1365s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:19:18 Executed find in (1s) (disks 0s + user 1s) 01.69s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1366s/1368s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:19:21 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1369s/1371s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:24 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1372s/1373s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=15%, filecount[6]=64452
2018.11.14 16:19:26 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1374s/1376s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:29 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1377s/1379s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:32 Executed find in (1s) (disks 0s + user 1s) 01.73s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1380s/1382s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=15%, filecount[6]=64452
2018.11.14 16:19:35 Executed find in (1s) (disks 0s + user 1s) 01.71s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1383s/1384s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=15%, filecount[6]=64452
2018.11.14 16:19:37 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1385s/1387s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=14%, filecount[6]=64452
2018.11.14 16:19:40 Executed find in (1s) (disks 0s + user 1s) 01.68s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1388s/1390s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=12%, filecount[6]=64452
2018.11.14 16:19:43 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1391s/1393s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=15%, filecount[6]=64452
2018.11.14 16:19:46 Executed find in (1s) (disks 0s + user 1s) 01.72s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1394s/1395s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=16%, filecount[6]=64452
2018.11.14 16:19:48 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1396s/1398s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452
2018.11.14 16:19:51 Executed find in (1s) (disks 0s + user 1s) 01.70s, wavg=01.71s   depth 6 slept 1s Disks idle before/after scan 1399s/1401s Scan completed/timedOut counter cnt=0/0/0 mode=2 scan_tmo=150s maxCur=6 maxWeek=6 isMaxDepthComputed=1 CPU=13%, filecount[6]=64452

 

Full log if needed: cache_dirs.log.txt.zip

Edited by wgstarks
Link to comment
Just now, saarg said:

@wgstarks

Please edit your post and attach the log as a file instead. It's not so fun to scroll for 5 minutes on mobile to see there are no other posts 😉

Already trimmed it down. I didn't realize how long it was when I pasted it in. I'll zip the full log and attach it, even though it looks to be just more of the same.

Link to comment

@wgstarks well under 60 seconds to load 100 files, as in >= 30? Are you in 1980 with pin-hole paper? Sounds weird to me.

 

I can see from the logs that it's scanning every second, and I would say it should have backed off to sleeping 10 s by that time. But it's only CPU, and the files are fully cached since the disks spin down, so you have a different problem if that little CPU load can hurt your PC.
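
If you want to check the back-off yourself, the sleep interval is right there in the log lines above ("slept 1s"), so a quick summary can be pulled from the log (path as per the plugin help text; the command is just a sketch):

# count how often each sleep interval occurs in the cache_dirs log
grep -o 'slept [0-9]*s' /var/log/cache_dirs.log | sort | uniq -c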

Link to comment
On 10/29/2018 at 3:42 PM, Alex R. Berg said:

I seem to be getting an overload of info to handle here. I will not be offering support on the official cache-dirs plugin from dynamix at the moment, because it's not up to date. I don't want to spend my time going over other people's logs for issues I might already have fixed.

 

I have created a (temporary) release of the cache-dirs plugin in my fork of the dynamix plugin; the PLG is attached and will download the archive itself. Place the plg in /boot/config/plugins (\\flash\config\plugins) and reboot. It is still the cache_dirs 2.2.1 version, but it does add a logrotate script so cache_dirs logging won't waste too much of your memory-mounted root diskspace.

I think all the problems above are because of the lack of a scan of the user shares. I don't need it to avoid disk spin-up, but others do. 2018.10.14 does not contain this feature, as it's based on the old 2.0.0j script.

@NAS: I suspect your problem is not the adaptiveness but the lack of a scan of the user shares, as reported by other users. Check the attached release. Also, if you don't find love for the new adaptive feature, just disable it. I added it because I hated seeing cache_dirs absolutely thrashing my disks when they were otherwise occupied with writing huge files or scanning for md5s; that was the moment the Linux filesystem decided to use the file cache for something other than directories, causing cache_dirs to thrash the disks. I also find that the adaptiveness sometimes does not seem at all perfect. It does seem to work for me with a cache pressure of 1, though, and with enough memory (or few enough files) that it works.

 

@NAS I can add a global minimum depth that is adjustable. I already have that in the code, it's just not user-modifiable. Actually, checking the code now, it looks like I don't have a minimum depth; maybe I removed it by mistake. But that is definitely easy and a good idea, I'll add it later. I also thought it would be cool to have filters, but it's too difficult to add into the bash script that cache_dirs is implemented in. It's certainly possible and not extremely difficult, as find does support excludes, but it's a pain to work with bash. I've considered re-implementing it in Scala, but don't feel like it. I have discovered in the process of working with it that it's impossible to make it really good, because cache_dirs is a hack: it scans the dirs repeatedly in the hope of making Linux keep the dirs in memory. Sometimes Linux will decide to evict the dirs, and there is no way for us to tell whether it has evicted them. I try to determine this by checking the scan duration, and if it's long, I kill the scan process and back off to avoid thrashing the disks when my system uses them for other stuff. But that strategy is never going to be perfect, so I don't feel like messing that much more with it. If you feel like adding it to the script, go for it. Actually it's a dead simple scan, so implementing it in Scala seems super easy, but then people would need to download a JVM, and might not want to.

 

It might be helpful if others who I've already helped through some issues can chime in. Read further up to see the diagnostics check, something about running cache_dirs -L on the new version attached, if my memory serves me. I think it was Fireball3 I helped.

 

Sorry to be so long replying. I wanted to wait until I was sure my assumptions fixed the issue before I posted again and time got away from me.

 

tl;dr: my fundamental issue was that at some point the addon update dropped the includes but not the excludes, so it appeared to be working but was actually doing nothing. Changing the settings mostly fixed the issue.

 

I am keen to throw some other ideas into the pot, but the most important matter at hand is to deal with the PRs. Forking and getting users to manually pull addons from the internet is suboptimal.

 

There has been some discussion on this above; where are we with it today?

 

Link to comment

Yeah, it sucks with the fork; I guess that's what happens when unpaid volunteer hands put their efforts together. I know I'm looking at cache_dirs only when I feel like it; I'm not committed to helping in the same way as with my paying customers (i.e. my employer), where I have a tighter relationship. I'm guessing bonienl has it the same way. But maybe we can manage a better team-work effort in the future; that would be nice for everybody, I suspect :)

 

But actually it's not that bad if looked at another way. If we think of my branch as the beta branch, then it makes sense. I can send everybody back to main when I'm done, so everybody gets automatic updates in the future from the main dynamix branch.

 

Good to hear it's working again, NAS.

Edited by Alex R. Berg
Link to comment

I've published a new version (sorry to bonienl, who just merged, but I take no blame for bugs in my own code-base :))

 

I noticed from @wgstarks' logs that it didn't sleep as I expected. When using a fixed depth, it now sleeps up to 10 secs (the -M param) depending on how long the disks have been idle and on the current scan duration compared to the average.
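
Purely as an illustration of the idea described above (this is not the actual plugin code, and the variable values are made-up examples):

# back-off sketch: sleep longer, up to the -M cap, when the disks have been
# idle for a while and the last scan was no slower than average
MAX_SLEEP=10            # the -M value
disk_idle_secs=1300     # example: seconds the disks have been idle
scan_secs=2             # example: duration of the last scan
avg_scan_secs=2         # example: weighted average scan duration
if [ "$disk_idle_secs" -gt 60 ] && [ "$scan_secs" -le "$avg_scan_secs" ]; then
    sleep_secs=$MAX_SLEEP
else
    sleep_secs=1
fi
sleep "$sleep_secs"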

 

If it's well received, I'll send people back to the main dynamix branch in around a week's time.

Link to comment

So one of the problems with cache_dirs is that it is, at its core, a hack. Quite a bit of the user setup is based on educated guesswork, and very little info is presented or logged to show how well it is functioning, e.g. how many files are being cached, or how often it is waking up disks.

 

Most of the performance feedback we can gather is based on timings.

 

Some ideas:

  • track timings and present them as a time-based graph for easier analysis
  • store timings outside of rotating logs for easier long-term analysis
  • estimate memory usage

 

In theory we can reasonably estimate maximum memory requirements by counting potential inode entries and multiplying by the inode size. This would allow what-if scenarios to be built, where each share can be presented in terms of estimated memory requirements, and if we do end up adding a depth-per-share option the user can tune their settings (and potentially reorganize their data) to cache as efficiently as possible. This will take some reading and experimentation, and unless someone is capable of reading the kernel code to understand how cache pressure impacts this, it will always be a worst case..... but to my eye this is the direction to go.
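
As a very rough sketch of that kind of estimate (the per-entry size is an assumption of about 1 KiB for dentry plus inode, and the share path and depth are only examples; the real figure depends on the kernel and filesystem):

# count the entries one share contributes at a given depth, then turn that
# into a worst-case memory estimate
SHARE=/mnt/user/Media
DEPTH=6
ENTRIES=$(find "$SHARE" -maxdepth "$DEPTH" 2>/dev/null | wc -l)
BYTES_PER_ENTRY=1024   # assumed dentry + inode overhead per entry
echo "$ENTRIES entries, ~$(( ENTRIES * BYTES_PER_ENTRY / 1024 / 1024 )) MiB worst case"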

Link to comment

Interesting. It should definitely be built in something other than bash, but the fundamental idea is actually very simple, so it could relatively easily be built in another language. If I were to do it, I would do it in Scala. We would need a database to put the scan durations in, which in my eyes complicates the setup quite a bit; I wonder if that's over-engineering it. I doubt many would access it unless there's also a web page to display the timings in some aggregate format.

 

I doubt I'll play more with it. I don't think it's worth it, because in my experience tinkering with cache_dirs, it can never be really good: Linux will discard the cache when it wants, and we don't know whether it has discarded the cache or the system is otherwise busy. I think there are many other projects much more valuable to put my efforts into.

 

Actually, I already write to a CSV file which can be opened in Excel. I've just added a 2.2.6 version that puts most or all of the data in the CSV file. I think that might be useful to some.
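
If anyone wants to keep that data for long-term analysis, remember that /var/log is memory-backed on unRAID, so it's worth copying the CSV somewhere persistent (the path is the one from the plugin help text quoted earlier; adjust it if your version writes elsewhere):

# copy the CSV off the RAM-backed log directory so it survives a reboot
cp /var/log/cache_dirs_lost_cache.csv /boot/cache_dirs_stats_$(date +%F).csv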

 

Best Alex

Link to comment
  • 2 weeks later...

I'm running the latest 2.2.6 version of your plugin.

 

As an FYI for anyone else trying to exclude subdirectories using the suggestions at https://forums.unraid.net/topic/4351-cache_dirs-an-attempt-to-keep-directory-entries-in-ram-to-prevent-disk-spin-up/?page=4&tab=comments#comment-60549

That comment would suggest the following format to add to the User defined options box:

-a '-noleaf \( -name "logs" -prune -o -name "temp" -prune \) -o -print'

 

However the only way I could get cache_dirs to start was using:

-a \'-noleaf \( -name logs -prune -o -name temp -prune \) -o -print\'

 

That said, I was trying to watch my lsof output while it was initially running and it still seemed to scan my logs and temp directories, so your mileage may vary.
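
One way to sanity-check the exclusion expression before handing it to cache_dirs is to run the equivalent find by hand and confirm nothing under logs or temp is printed (the share path is only an example):

# should list entries under the share while pruning any dir named logs or temp
find /mnt/user/Media -noleaf \( -name logs -prune -o -name temp -prune \) -o -print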

 

 

Link to comment

So I am running the dynamix version (currently 2.2.0j on unRAID 6.5.2), but I have been having a recurring problem across a few versions now. I will come back to my server and find one core running at 100% constantly; it's always cache_dirs that is doing this. I disable it, then enable it again, and it starts working right again. It used to happen maybe once a month; now it seems to be happening every other day. I am uploading my settings. I excluded my appdata and docker system folders based on some feedback a while back. Any idea what is up?

[Screenshot: cache_dirs plugin settings]

Link to comment
15 hours ago, Necrotic said:

I will come back to my server and find one core running at 100% constantly; it's always cache_dirs that is doing this. [...]

I also still have the same issue with version 2018.11.20

Link to comment
On 12/2/2018 at 2:28 AM, THO said:

However the only way I could get cache_dirs to start was using:

-a \'-noleaf \( -name logs -prune -o -name temp -prune \) -o -print\'

[...]

Wow, that's pretty cool. I didn't realize we could just filter folders with that trick.

 

The -a option is broken; with your \' you just avoided it crashing spectacularly. You can see the cache_dirs command that is executed in the syslog (unRAID log). The cache_dirs log also contains the 'find $args' command, which should list your -a arguments. Right now I think it's just empty for you, because of the broken -a option.
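
To see that for yourself, the executed command can be pulled out of the logs mentioned above (paths as discussed earlier in this thread; the grep patterns are only illustrative):

# the command line cache_dirs was started with, from the unRAID syslog
grep -i cache_dirs /var/log/syslog | tail -n 5
# the find invocation (including any -a arguments) from the cache_dirs log
grep -i 'find ' /var/log/cache_dirs.log | tail -n 5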

 

I'll release a new version maybe tomorrow and send a pull request with the patch. I cannot get the '*' in Joe's name filter to work, which is a shame, but maybe that's just me. More on that later.

Link to comment
