pwm Posted January 19, 2018
On 1/13/2018 at 3:23 AM, Necrotic said: Did anyone else have issues with cache_dirs going nuts and pegging one CPU to 100% forever? It happens rarely, but over the past year or so I had it happen twice. I went into settings, disabled it and re-enabled it, and that fixed it.
I just found my machine doing that on 6.3.5. It has been consuming CPU for 104 hours, so just over 4 days since it went bananas.
NewDisplayName Posted January 19, 2018 (edited)
So I guess I have the same problem with high CPU. I have now set the included dirs to only the real directories I want to cache. Maybe that fixes the problem; I'll report back. So far I can say my CPU is down from 30-40% to <10%, so it changed something.
Edited January 19, 2018 by nuhll
jeffreywhunter Posted January 27, 2018
I renamed a couple of shares and I'm seeing this in my log. How do I correct it? When I go into settings, the old directories are not in the list.
Jan 26 16:40:25 HunterNAS cache_dirs: ----------------------------------------------
Jan 26 16:40:25 HunterNAS cache_dirs: ERROR: included directory "DocArchive" does not exist.
Jan 26 16:40:25 HunterNAS cache_dirs: ERROR: included directory "ISO\ Files" does not exist.
Jan 26 16:40:26 HunterNAS cache_dirs: cache_dirs process ID 9360 started
Thanks!
c3 Posted January 27, 2018
45 minutes ago, jeffreywhunter said: I renamed a couple of shares and I'm seeing this in my log. How do I correct it? When I go into settings, the old directories are not in the list. (log snipped)
Wow, the same thing you reported on March 1, 2017. You might want to try the same fix this year and see if it works.
jeffreywhunter Posted January 27, 2018
4 hours ago, c3 said: Wow, the same thing you reported on March 1, 2017. You might want to try the same fix this year and see if it works.
Hey, thanks for the suggestion, but I had already tried that again when the error showed up this time. Yep, it worked back then and cleared the log, but this time the error continues (and why did it come back?). Apologies for the terse post; I should have included that history. I will try to do better next time.
c3 Posted January 27, 2018
3 hours ago, jeffreywhunter said: Hey, thanks for the suggestion, but I had already tried that again when the error showed up this time. (snip)
Naw, I still think the config needs to be changed to remove the old directory names, and to include the new ones if you want them.
jowi Posted June 2, 2018
Maybe I don't understand the purpose of this, but if I start it, all my disks get spun up and are constantly accessed and monitored, never spinning down anymore... I thought this was supposed to PREVENT spinning up disks. I removed the plugin, and all my disks spun down again.
NewDisplayName Posted June 2, 2018
Did you change any settings?
jowi Posted June 2, 2018
I enabled it, and there was an option 'Scan user shares', which I enabled as well, since user shares are the ones I want to keep in memory.
trurl Posted June 2, 2018
45 minutes ago, jowi said: I enabled it, and there was an option 'Scan user shares', which I enabled as well, since user shares are the ones I want to keep in memory.
If you click on "Scan user shares" to see the help, you will see it isn't necessary.
NewDisplayName Posted June 2, 2018 (edited)
Best is to enable only the directories you actually want to browse via SMB.
Edited June 2, 2018 by nuhll
NewDisplayName Posted June 2, 2018
On 1/19/2018 at 4:26 PM, nuhll said: So I guess I have the same problem with high CPU. I have now set the included dirs to only the real directories I want to cache. Maybe that fixes the problem; I'll report back. So far my CPU is down from 30-40% to <10%, so it changed something.
Sorry, I forgot to report back. After I added only the directories I need, everything has worked perfectly for months.
avpap Posted August 5, 2018 (edited)
Does this plugin also work for docker apps (e.g. Plex, CouchPotato, etc.) that want access to directory listings? For example, if Plex or any other indexing application wants to see whether directories/files have changed, will it access the cached image of the unRAID directories as captured by the Folder Caching plugin?
Edited August 5, 2018 by avpap
Alex R. Berg Posted August 5, 2018
It will only cache directories on the array. The Plex docker app of course uses media files on the array via a volume mapping, and those directories on the array will be cached. The internal 'operating system' files of the docker app will not be cached.
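To make the volume-mapping point concrete, here is a hypothetical sketch (container name, image tag, and paths are illustrative, not taken from this thread): cache_dirs only ever sees the host side of the mapping.

```shell
# Hypothetical Plex container. The host-side path /mnt/user/Movies lives on
# the array, so cache_dirs can keep its directory listings in RAM. The
# container-side path /data and the container's own image layers sit outside
# the array and are never touched by the plugin.
docker run -d \
  --name plex \
  -v /mnt/user/Movies:/data \
  plexinc/pms-docker
```

So when Plex rescans its library, the directory listings it triggers go through /mnt/user/Movies and benefit from the cache like any other array access.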
interwebtech Posted September 10, 2018
I am having an issue where cache_dirs stops running. I go into settings to check, and the setting is still set to Enable, but it is not running. Reapplying Enable gets it running again, but it eventually turns itself back off.
themaxxz Posted September 10, 2018 (edited)
5 hours ago, interwebtech said: I am having an issue where cache_dirs stops running. (snip)
I just checked my server (6.5.3) and I also noticed the state was stopped. I started it again using the same procedure. (cache_dirs version: 2.2.0j)
Edited September 10, 2018 by themaxxz
Fireball3 Posted September 12, 2018
Same here. The service is always "stopped" when the server comes up.
BRiT Posted September 12, 2018
This thread is for the Joe L. standalone script. It is not for the Dynamix plugin called Cache Dirs. If you want help with the Dynamix version, it would likely be best to post in its specific support thread.
Fireball3 Posted September 12, 2018
Sorry for posting in the wrong place. This should be the right thread...
Eisi2005 Posted September 20, 2018
Hi, I installed the latest version two weeks ago. Since then I have the problem that all my hard drives keep being woken up again and again. I never had any problems with the old version. Can someone interpret the logfile? I have a share containing movies and shows; in it I have directories 0-9, A-C, and so on. The movies are in directories Movie1, Movie2, with the files inside. The settings can be seen in the logfile. Thanks for any help.
https://pastebin.com/cDDG8z1V
Alex R. Berg Posted September 20, 2018
Hi Eisi,
This may sound stupid, but are you sure it's cache_dirs doing it? You don't have many files under watch, only 57171 files, and you've only watched down to depth 5 (maxdepth=5).
The log reports when the disks were last accessed, but there seems to be a bug, since it reports a crazy duration, 1537449213s/1537449213s, the first number being the disks' idle time before scanning the dirs, the second the idle time after scanning them. In the lines below, it seems something else touched the disks, because cache_dirs slept 10 seconds and in between the disks' idle time became sensible:
2018.09.20 15:14:24 Executed find in (0s) 00.15s, wavg=01.33s Idle____________ depth 5 slept 10s Disks idle before/after 9999s/9999s suc/fail cnt=18/18/0 mode=4 scan_tmo=150s maxCur=5 maxWeek=5 isMaxDepthComputed=1 CPU= 3%, filecount[5]=57171
2018.09.20 15:14:35 Executed find in (0s) 00.16s, wavg=00.98s Idle____________ depth 5 slept 10s Disks idle before/after 8s/8s suc/fail cnt=19/19/0 mode=3 scan_tmo=30s maxCur=5 maxWeek=5 isMaxDepthComputed=1 CPU=22%, filecount[5]=57171
Personally I've reduced the cache pressure to 0 or 1, but then I have plenty of RAM. A cache pressure of 0 carries a real risk of running out of RAM, so now I use 1. But I also have a crazy number of files cached: 1.6 million.
Best Alex
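For anyone wanting to try the same thing: the cache pressure Alex refers to is the kernel tunable vm.vfs_cache_pressure (kernel default 100). A minimal sketch, assuming you accept the extra RAM usage; run as root on the server:

```shell
# Show the current value. Lower values make the kernel hold on to cached
# directory entries and inodes longer instead of reclaiming them.
sysctl vm.vfs_cache_pressure

# Set it to 1 for the running system. 0 means never reclaim, which can
# exhaust RAM; 1 keeps almost everything while still allowing reclaim.
sysctl -w vm.vfs_cache_pressure=1
```

Note this is a system-wide kernel setting, separate from the plugin's own cache-pressure option, and it resets on reboot unless made persistent.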
Alex R. Berg Posted September 20, 2018
These are the lines used to extract the duration since the last disk access. It seems there's a problem with them on your server. You could try fiddling with them if you want; it might give some insight into what the actual idle time is, and you could update the script if you want to help. I'm not an awk expert, and I don't really feel like messing around with this crazy-long bash script at the moment, but I'll probably help get a correction into the code base: https://github.com/bergware/dynamix
mdcmd_cmd=/usr/local/sbin/mdcmd
# rdevLastIO will be non-zero if a disk is spinning; it will be the timestamp of the last IO (in seconds since epoch)
last=$($mdcmd_cmd status | grep -a rdevLastIO | grep -v '=0')
echo "$(echo $last | awk '{t=systime(); gsub("rdevLastIO..=",""); for(i = 1; i <= NF; i++) a[++y]=$i}END{c=asort(a); if (NF > 0) print t-a[NF]; else print 9999; }')"
Best Alex
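Stripped of the mdcmd pipeline, the extraction logic can be exercised against a fake status line. Below is a simplified sketch of the same idea (the function name and sample timestamps are mine, not from the script; it finds the maximum directly instead of using gawk's asort, so it also runs under mawk):

```shell
# Given a line of rdevLastIO.N=<epoch> fields, print the number of seconds
# since the most recent disk IO, or 9999 when no field is present
# (i.e. no disk is spinning).
idle_since() {
  echo "$1" | awk '{ gsub("rdevLastIO..=", "");
      for (i = 1; i <= NF; i++) if ($i + 0 > max + 0) max = $i }
    END { if (NF > 0) print systime() - max; else print 9999 }'
}

# Fake sample; the real input comes from "/usr/local/sbin/mdcmd status"
# filtered through: grep -a rdevLastIO | grep -v '=0'
idle_since "rdevLastIO.0=1537440000 rdevLastIO.1=1537449000"
```

If this prints a sensible number while the script's log still shows the bogus 1537449213s value, the problem is in how the script consumes the result rather than in the extraction itself.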
Eisi2005 Posted September 20, 2018 (edited)
Hi Alex, I have to say that I do not quite understand all the settings. With the "old" version installed, everything went as it should. I have one share included and 3 excluded. The share that is included has 105507 files. My RAM is only minimally occupied and I use the adaptive mode. After I set depth 7, the logfile now counts 105507 files. I have now set cache pressure to 1 and will test again.
Sorry, but I do not understand your last post. Where should I put those lines?
Here is a new log: https://pastebin.com/p4jddM8z
Edited September 20, 2018 by Eisi2005
Alex R. Berg Posted September 21, 2018
I included the lines in case you knew your way around the bash shell and didn't mind getting your hands dirty. Just ignore them; it looks like things work fine in your last log anyway, and those lines are only used for reporting.
Best Alex
Fireball3 Posted September 24, 2018
I'm a bit confused. Alex, are you maintaining the new Dynamix Cache Dirs plugin, or are you talking about some other plugin? Anyway, Dynamix Cache Dirs seems to have some other issues too. Some others and I posted in the Dynamix support thread.