Fireball3 Posted October 2, 2018

2 minutes ago, Alex R. Berg said: If you just send him the plg and the txz he can do the same as you, but he can also wait. I guess it won't hurt if he gives some feedback also before committing.

Would you please read the addition to my last post? I edited it while you were typing.
Fireball3 Posted October 2, 2018

8 hours ago, Fireball3 said: I suspect some content is not cached. Is there a way to check which drives are cached and which are not?

When I'm accessing the share - just browsing - it pauses to spin up some disks. I checked the GUI and found only 3 disks spinning. The share definitely spans more than those 3 disks. Not sure if it matters, but the disks that spin up are XFS while the others are ReiserFS.
Alex R. Berg Posted October 2, 2018

If you uncomment line 601 of /usr/local/bin/cache_dirs (or of the version I attached), it becomes a lot more verbose in the log file. Uncomment it by removing the leading #:

# (( $DEBUG_THREAD )) && log "scanning $depth_num $dir_to_scan - remaining_time=$remaining_time - pid=$BASHPID"

The best way I can think of to check what is cached is to run find yourself. If it returns immediately, the tree was probably cached; if it's slow, it wasn't:

find /mnt/disk*/share > /dev/null

where 'share' is the share you want to check.

Interestingly, one of my shares with 800,000 files is scanned in 1.5 secs when scanning /mnt/disk*, but takes 30 secs when scanning /mnt/user/ - all with disks spun down. So unRaid is pretty slow at translating user shares to disk shares, but that shouldn't matter. Note that I do not scan the user share in the program, only /mnt/cache and /mnt/disk*. It shouldn't matter, though, as /mnt/user consists of cache + disk*.

If you spin down a disk, run the find on it, and it does not spin up, then it is definitely cached. If you don't have enough memory and you have many files, the kernel might not be able to cache everything. I doubt the filesystem matters, but I don't know; I use XFS for my disks and it works well with cache_dirs.

Attached: cache_dirs
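The "fast find means cached" heuristic above can be scripted so you don't have to eyeball it. A minimal sketch - the helper name and the /tmp path are examples, not from the thread; on an Unraid box you would point it at /mnt/diskN/share:

```shell
#!/bin/bash
# Hypothetical helper: time a metadata scan of a directory tree to
# infer whether it sits in the kernel's dentry cache. A scan that
# finishes in a few milliseconds almost certainly came from RAM;
# hundreds of milliseconds or more suggests the disk had to be read
# (and possibly spun up). Requires GNU date for %N nanoseconds.
check_cached() {
  local dir="$1"
  local start end elapsed
  start=$(date +%s%N)
  find "$dir" > /dev/null 2>&1
  end=$(date +%s%N)
  elapsed=$(( (end - start) / 1000000 ))   # convert ns to ms
  echo "${dir}: ${elapsed} ms"
}

check_cached /tmp
```

Running it once per disk share narrows down exactly which disks are not being held in cache.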
Alex R. Berg Posted October 2, 2018

I've just added a -P switch that will do the scan and count files. It can also be used to quickly check whether files are cached, by running it manually just like the find command I gave, e.g.:

sudo ./cache_dirs -P -i Video

This just gives you the number of files. The find command is better for checking which disks aren't cached, because you can target the disk of your choice.

Attached: cache_dirs
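A rough stand-in for the count that -P reports can be built from plain find and wc. A sketch, with an example path rather than a real share:

```shell
#!/bin/bash
# Count entries (files and directories) under a tree, roughly what
# the -P switch prints. The /tmp argument is an example only; on
# Unraid you would use /mnt/user/<share> or /mnt/disk*/<share>.
count_files() {
  find "$1" 2>/dev/null | wc -l
}

count_files /tmp
```

Note that find also lists the starting directory itself, so the count includes it.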
Fireball3 Posted October 9, 2018

I ran find against all disks and everything was cached as it should be. Then I accessed the share, and the 3 known (XFS) drives spun up.
Alex R. Berg Posted October 10, 2018

I have added a -u option to scan the user share, and updated the plugin. Try it out and see if that helps. It shouldn't be necessary to scan the user share, but it seems I'm wrong. Let me know whether or not it makes a difference.

Best Alex

Attached: dynamix.cache.dirs.plg, dynamix.cache.dirs.txz
Fireball3 Posted October 12, 2018

Yes, this seems to work - the drives are not spinning up any more. I will monitor whether it's reproducible.
interwebtech Posted October 15, 2018

The plugin was updated today. Mine is still not starting on reboot - tried twice. I do have a new error just before the Unraid version number line on the console:

cat: error: Broken pipe

No idea whether that comes from cache_dirs or the 6.6.2 update.
Alex R. Berg Posted October 15, 2018

Are you referring to a test of the plugin I attached 3 posts up, or the one published on dynamix github?

Best Alex
interwebtech Posted October 15, 2018 (edited)

9 hours ago, Alex R. Berg said: Are you referring to a test of the plugin I attached 3 posts up, or the one published on dynamix github?

I was notified through Unraid that an update was available. Notes on the popup:

dynamix.cache.dirs 2018.10.14
fixed service not starting upon system reboot
minimum Unraid version 6.4

Edited October 15, 2018 by interwebtech
Alex R. Berg Posted October 16, 2018

Regarding the cache_dirs plugin update, in reply to the question in the other thread: try this and report in this dynamix thread what you see (as root, i.e. with 'sudo -i'):

grep -Po "^mdState=\K.*" /proc/mdstat

and:

cat /proc/mdstat

Probably bonienl is on top of it, but I don't know.
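The grep above extracts the array state the plugin's start script needs to check. A minimal sketch of that check - note that Unraid's /proc/mdstat is a key=value file, not the stock Linux mdstat format, and the MDSTAT variable here is an assumption added so the sketch can run against a test file anywhere:

```shell
#!/bin/bash
# Check whether the Unraid array is started by reading mdState from
# the mdstat file. MDSTAT defaults to the real path but can be
# overridden for testing. Requires GNU grep for -P and \K.
MDSTAT="${MDSTAT:-/proc/mdstat}"
state=$(grep -Po '^mdState=\K.*' "$MDSTAT" 2>/dev/null)
if [ "$state" = "STARTED" ]; then
  echo "array started"
else
  echo "array not started (state='$state')"
fi
```

On a stock (non-Unraid) Linux box the key is absent, so the sketch reports the array as not started rather than failing.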
Alex R. Berg Posted October 16, 2018

Ah, thank you for the info. I didn't know that bergware/bonienl was working on it. His change is in the plugin. Let's take the discussion there, so bonienl knows what's going on:
interwebtech Posted October 16, 2018 (edited)

19 hours ago, Alex R. Berg said: Regarding the cache_dirs plugin update, in reply to the question in the other thread: try this and report in this dynamix thread what you see (as root, i.e. with 'sudo -i'): grep -Po "^mdState=\K.*" /proc/mdstat and cat /proc/mdstat. Probably bonienl is on top of it, but I don't know.

The result of your requests:

Linux 4.18.14-unRAID.
root@Tower:~# grep -Po "^mdState=\K.*" /proc/mdstat
STARTED
root@Tower:~# cat /proc/mdstat
sbName=/boot/config/super.dat sbVersion=2.5.0 sbCreated=1438220329 sbUpdated=1539589936 sbEvents=1047 sbState=0 sbNumDisks=13 sbSynced=1538379001 sbSyncErrs=0 sbSynced2=1538464876 sbSyncExit=0 sbLabel=0781-5406-1031-811B45D2F146
mdVersion=2.9.4 mdState=STARTED mdNumDisks=13 mdNumDisabled=0 mdNumReplaced=0 mdNumInvalid=0 mdNumMissing=0 mdNumWrong=0 mdNumNew=0 mdSwapP=0 mdSwapQ=0 mdResyncAction=check P Q mdResyncSize=7814026532 mdResyncCorr=0 mdResync=0 mdResyncPos=0 mdResyncDt=0 mdResyncDb=0
diskNumber.0=0 diskName.0= diskSize.0=7814026532 diskState.0=7 diskId.0=ST8000VN0002-1Z8_xx rdevNumber.0=0 rdevStatus.0=DISK_OK rdevName.0=sdc rdevOffset.0=64 rdevSize.0=7814026532 rdevId.0=ST8000VN0002-1Z8112_xx rdevNumErrors.0=0 rdevLastIO.0=0 rdevSpinupGroup.0=0
diskNumber.1=1 diskName.1=md1 diskSize.1=7814026532 diskState.1=7 diskId.1=ST8000VN0022-2EL_xx rdevNumber.1=1 rdevStatus.1=DISK_OK rdevName.1=sdb rdevOffset.1=64 rdevSize.1=7814026532 rdevId.1=ST8000VN0022-2EL112_xx rdevNumErrors.1=0 rdevLastIO.1=0 rdevSpinupGroup.1=0
diskNumber.2=2 diskName.2=md2 diskSize.2=5860522532 diskState.2=7 diskId.2=ST6000DX000-1H21_xx rdevNumber.2=2 rdevStatus.2=DISK_OK rdevName.2=sde rdevOffset.2=64 rdevSize.2=5860522532 rdevId.2=ST6000DX000-1H217Z_xx rdevNumErrors.2=0 rdevLastIO.2=0 rdevSpinupGroup.2=0
diskNumber.3=3 diskName.3=md3 diskSize.3=7814026532 diskState.3=7 diskId.3=ST8000VN0022-2EL112_xx rdevNumber.3=3 rdevStatus.3=DISK_OK rdevName.3=sdf rdevOffset.3=64 rdevSize.3=7814026532 rdevId.3=ST8000VN0022-2EL112_xx rdevNumErrors.3=0 rdevLastIO.3=0 rdevSpinupGroup.3=0
diskNumber.4=4 diskName.4=md4 diskSize.4=5860522532 diskState.4=7 diskId.4=ST6000DM001-1XY17Z_xx rdevNumber.4=4 rdevStatus.4=DISK_OK rdevName.4=sdg rdevOffset.4=64 rdevSize.4=5860522532 rdevId.4=ST6000DM001-1XY17Z_xx rdevNumErrors.4=0 rdevLastIO.4=0 rdevSpinupGroup.4=0
diskNumber.5=5 diskName.5=md5 diskSize.5=7814026532 diskState.5=7 diskId.5=ST8000AS0002-1NA17Z_xx rdevNumber.5=5 rdevStatus.5=DISK_OK rdevName.5=sdh rdevOffset.5=64 rdevSize.5=7814026532 rdevId.5=ST8000AS0002-1NA17Z_xx rdevNumErrors.5=0 rdevLastIO.5=0 rdevSpinupGroup.5=0
diskNumber.6=6 diskName.6=md6 diskSize.6=3907018532 diskState.6=7 diskId.6=HGST_HDN724040ALE640_xx rdevNumber.6=6 rdevStatus.6=DISK_OK rdevName.6=sdi rdevOffset.6=64 rdevSize.6=3907018532 rdevId.6=HGST_HDN724040ALE640_xx rdevNumErrors.6=0 rdevLastIO.6=0 rdevSpinupGroup.6=0
diskNumber.7=7 diskName.7=md7 diskSize.7=3907018532 diskState.7=7 diskId.7=HDN724040ALE640_xx rdevNumber.7=7 rdevStatus.7=DISK_OK rdevName.7=sdk rdevOffset.7=64 rdevSize.7=3907018532 rdevId.7=HGST_HDN724040ALE640_xx rdevNumErrors.7=0 rdevLastIO.7=0 rdevSpinupGroup.7=0
diskNumber.8=8 diskName.8=md8 diskSize.8=7814026532 diskState.8=7 diskId.8=ST8000AS0002-1NA17Z_xx rdevNumber.8=8 rdevStatus.8=DISK_OK rdevName.8=sdl rdevOffset.8=64 rdevSize.8=7814026532 rdevId.8=ST8000AS0002-1NA17Z_xx rdevNumErrors.8=0 rdevLastIO.8=0 rdevSpinupGroup.8=0
diskNumber.9=9 diskName.9=md9 diskSize.9=3907018532 diskState.9=7 diskId.9=ST4000DM000-1F21_xx rdevNumber.9=9 rdevStatus.9=DISK_OK rdevName.9=sdm rdevOffset.9=64 rdevSize.9=3907018532 rdevId.9=ST4000DM000-1F2168_xx rdevNumErrors.9=0 rdevLastIO.9=0 rdevSpinupGroup.9=0
diskNumber.10=10 diskName.10=md10 diskSize.10=3907018532 diskState.10=7 diskId.10=ST4000DM000-1F21_xx rdevNumber.10=10 rdevStatus.10=DISK_OK rdevName.10=sdn rdevOffset.10=64 rdevSize.10=3907018532 rdevId.10=ST4000DM000-1F2168_xx rdevNumErrors.10=0 rdevLastIO.10=0 rdevSpinupGroup.10=0
diskNumber.11=11 diskName.11=md11 diskSize.11=7814026532 diskState.11=7 diskId.11=ST8000VN0022-2EL_xx rdevNumber.11=11 rdevStatus.11=DISK_OK rdevName.11=sdj rdevOffset.11=64 rdevSize.11=7814026532 rdevId.11=ST8000VN0022-2EL112_xx rdevNumErrors.11=0 rdevLastIO.11=0 rdevSpinupGroup.11=0
diskNumber.12=12 diskName.12= diskSize.12=0 diskState.12=0 diskId.12= rdevNumber.12=12 rdevStatus.12=DISK_NP rdevName.12= rdevOffset.12=0 rdevSize.12=0 rdevId.12= rdevNumErrors.12=0 rdevLastIO.12=0 rdevSpinupGroup.12=0
diskNumber.13=13 diskName.13= diskSize.13=0 diskState.13=0 diskId.13= rdevNumber.13=13 rdevStatus.13=DISK_NP rdevName.13= rdevOffset.13=0 rdevSize.13=0 rdevId.13= rdevNumErrors.13=0 rdevLastIO.13=0 rdevSpinupGroup.13=0
diskNumber.14=14 diskName.14= diskSize.14=0 diskState.14=0 diskId.14= rdevNumber.14=14 rdevStatus.14=DISK_NP rdevName.14= rdevOffset.14=0 rdevSize.14=0 rdevId.14= rdevNumErrors.14=0 rdevLastIO.14=0 rdevSpinupGroup.14=0
diskNumber.15=15 diskName.15= diskSize.15=0 diskState.15=0 diskId.15= rdevNumber.15=15 rdevStatus.15=DISK_NP rdevName.15= rdevOffset.15=0 rdevSize.15=0 rdevId.15= rdevNumErrors.15=0 rdevLastIO.15=0 rdevSpinupGroup.15=0
diskNumber.16=16 diskName.16= diskSize.16=0 diskState.16=0 diskId.16= rdevNumber.16=16 rdevStatus.16=DISK_NP rdevName.16= rdevOffset.16=0 rdevSize.16=0 rdevId.16= rdevNumErrors.16=0 rdevLastIO.16=0 rdevSpinupGroup.16=0
diskNumber.17=17 diskName.17= diskSize.17=0 diskState.17=0 diskId.17= rdevNumber.17=17 rdevStatus.17=DISK_NP rdevName.17= rdevOffset.17=0 rdevSize.17=0 rdevId.17= rdevNumErrors.17=0 rdevLastIO.17=0 rdevSpinupGroup.17=0
diskNumber.18=18 diskName.18= diskSize.18=0 diskState.18=0 diskId.18= rdevNumber.18=18 rdevStatus.18=DISK_NP rdevName.18= rdevOffset.18=0 rdevSize.18=0 rdevId.18= rdevNumErrors.18=0 rdevLastIO.18=0 rdevSpinupGroup.18=0
diskNumber.19=19 diskName.19= diskSize.19=0 diskState.19=0 diskId.19= rdevNumber.19=19 rdevStatus.19=DISK_NP rdevName.19= rdevOffset.19=0 rdevSize.19=0 rdevId.19= rdevNumErrors.19=0 rdevLastIO.19=0 rdevSpinupGroup.19=0
diskNumber.20=20 diskName.20= diskSize.20=0 diskState.20=0 diskId.20= rdevNumber.20=20 rdevStatus.20=DISK_NP rdevName.20= rdevOffset.20=0 rdevSize.20=0 rdevId.20= rdevNumErrors.20=0 rdevLastIO.20=0 rdevSpinupGroup.20=0
diskNumber.21=21 diskName.21= diskSize.21=0 diskState.21=0 diskId.21= rdevNumber.21=21 rdevStatus.21=DISK_NP rdevName.21= rdevOffset.21=0 rdevSize.21=0 rdevId.21= rdevNumErrors.21=0 rdevLastIO.21=0 rdevSpinupGroup.21=0
diskNumber.22=22 diskName.22= diskSize.22=0 diskState.22=0 diskId.22= rdevNumber.22=22 rdevStatus.22=DISK_NP rdevName.22= rdevOffset.22=0 rdevSize.22=0 rdevId.22= rdevNumErrors.22=0 rdevLastIO.22=0 rdevSpinupGroup.22=0
diskNumber.23=23 diskName.23= diskSize.23=0 diskState.23=0 diskId.23= rdevNumber.23=23 rdevStatus.23=DISK_NP rdevName.23= rdevOffset.23=0 rdevSize.23=0 rdevId.23= rdevNumErrors.23=0 rdevLastIO.23=0 rdevSpinupGroup.23=0
diskNumber.24=24 diskName.24= diskSize.24=0 diskState.24=0 diskId.24= rdevNumber.24=24 rdevStatus.24=DISK_NP rdevName.24= rdevOffset.24=0 rdevSize.24=0 rdevId.24= rdevNumErrors.24=0 rdevLastIO.24=0 rdevSpinupGroup.24=0
diskNumber.25=25 diskName.25= diskSize.25=0 diskState.25=0 diskId.25= rdevNumber.25=25 rdevStatus.25=DISK_NP rdevName.25= rdevOffset.25=0 rdevSize.25=0 rdevId.25= rdevNumErrors.25=0 rdevLastIO.25=0 rdevSpinupGroup.25=0
diskNumber.26=26 diskName.26= diskSize.26=0 diskState.26=0 diskId.26= rdevNumber.26=26 rdevStatus.26=DISK_NP rdevName.26= rdevOffset.26=0 rdevSize.26=0 rdevId.26= rdevNumErrors.26=0 rdevLastIO.26=0 rdevSpinupGroup.26=0
diskNumber.27=27 diskName.27= diskSize.27=0 diskState.27=0 diskId.27= rdevNumber.27=27 rdevStatus.27=DISK_NP rdevName.27= rdevOffset.27=0 rdevSize.27=0 rdevId.27= rdevNumErrors.27=0 rdevLastIO.27=0 rdevSpinupGroup.27=0
diskNumber.28=28 diskName.28= diskSize.28=0 diskState.28=0 diskId.28= rdevNumber.28=28 rdevStatus.28=DISK_NP rdevName.28= rdevOffset.28=0 rdevSize.28=0 rdevId.28= rdevNumErrors.28=0 rdevLastIO.28=0 rdevSpinupGroup.28=0
diskNumber.29=29 diskName.29= diskSize.29=7814026532 diskState.29=7 diskId.29=ST8000VN0002-1Z8112_xx rdevNumber.29=29 rdevStatus.29=DISK_OK rdevName.29=sdd rdevOffset.29=64 rdevSize.29=7814026532 rdevId.29=ST8000VN0002-1Z8112_xx rdevNumErrors.29=0 rdevLastIO.29=0 rdevSpinupGroup.29=0

Edited October 17, 2018 by interwebtech
Alex R. Berg Posted October 16, 2018

It looks good, interwebtech - so your problem is not the line I noticed had changed. After I sent the message, I noticed I had completely missed pulling all the changes from the release repository of the plugin. I found the bug causing the plugin not to start, which was surely also Fireball3's problem. I didn't have that bug in the plugin I attached above for Fireball3, which is why it suddenly worked with regard to being started on reboot. I have pushed a change request to the repository, so the new plugin should be released when bergware gets around to it.

Best Alex
interwebtech Posted October 16, 2018

Excellent. Thanks.
Fireball3 Posted October 17, 2018

@interwebtech Would you mind wrapping your posted code into "code" tags? It really improves readability! Thank you!
interwebtech Posted October 17, 2018

14 minutes ago, Fireball3 said: @interwebtech Would you mind wrapping your posted code into "code" tags? It really improves readability! Thank you!

Doesn't look any better for it here.
Fireball3 Posted October 17, 2018

Yes indeed, I see - unfortunately not much better. I must have mixed it up with another forum; there are forum platforms where it makes a difference. Sorry for bothering you.
Alex R. Berg Posted October 17, 2018

I just noticed I linked this thread above. I meant to link the dynamix thread for the discussion of the plugin startup error:
wgstarks Posted October 17, 2018

9 minutes ago, Alex R. Berg said: I just noticed I linked this thread above. I meant to link the dynamix thread for the discussion of the plugin startup error:

Just so I'm clear, this update is intended to fix the broken pipe error?
Alex R. Berg Posted October 17, 2018

I don't know if it'll fix that, because I don't know the cause of the broken pipe. It might fix it though; it's worth a shot.
tyrindor Posted October 22, 2018 (edited)

All sorts of issues with this lately (I'm on unRAID 6.6.3): constant reads on all drives even though nothing is being accessed, not starting up on reboot (despite the changelog saying that was fixed), and the removal of user-share scanning. Scanning user shares was necessary on my setup or I'd get constant spin-ups; I tested that many times before the feature was removed, even though the plugin always claimed it wasn't needed. Please add it back. I have tried adding -u, but it still seems nothing is being cached despite the plugin "running" and my shares being selected. Every time I go into a folder I get reads on the disk. I think it started happening after the 6.6.3 update, but I'm not positive.

Edited October 22, 2018 by tyrindor
Alex R. Berg Posted October 23, 2018

Try the new version I posted in the dynamix thread, or wait for the 'official' release. It at least fixes the user share, adding it back in, and the start-on-boot issue.
tyrindor Posted October 24, 2018 (edited)

On 10/23/2018 at 3:06 AM, Alex R. Berg said: Try the new version I posted in the dynamix thread, or wait for the 'official' release. It at least fixes the user share, adding it back in, and the start-on-boot issue.

I am still not seeing any caching take place. If I disable it and re-enable it, there are no reads on any of my disks. Just going into random directories results in a low amount of reads. When cache_dirs was working, I'd see it go through my disks one at a time and produce a fair amount of reads when I started it. Is something possibly wrong with my settings?

Edited October 24, 2018 by tyrindor
Zonediver Posted October 25, 2018 (edited)

Same here - since 6.6.3 it seems this plugin is not working anymore. There is no difference between enabled and disabled... nothing happens. Whenever I do something on my server, it takes about 30-50 sec while all disks spin up, and only then can I copy to the server.

Edited October 25, 2018 by Zonediver