TheBuz Posted June 7, 2021
I am getting reads on all of the disks in my array every few seconds, preventing them from spinning down. The reads are only a few KB. I have looked with the Open Files plugin and no files appear to be open. I stopped all VMs: no change. I stopped all Docker containers: no change. I stopped the Docker service, and this seemed to fix it. Having Docker running with no containers causes the micro-reads across all drives again, in both the main array and the cache. I have attached diagnostics: urserver-diagnostics-20210607-1028.zip
wildfire305 Posted December 8, 2021
I have this exact same problem. Did you figure out the cause? I've been watching iotop, but I can't seem to see it.
trurl Posted December 8, 2021
2 hours ago, wildfire305 said:
    I have this exact same problem. Did you figure out the cause? I've been watching iotop, but I can't seem to see it.
According to the diagnostics of that original poster, the appdata, domains, and system shares were on the array. These shares are used by Dockers/VMs, so performance will be impacted by the slower parity-protected array, and the array will stay spun up since these files are always open. Attach diagnostics to your NEXT post in this thread.
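To check where those shares actually live, a quick sketch from the console (assuming the standard Unraid mount points under /mnt; adjust if yours differ):

```shell
# Show which disks (and the cache) hold the shares that Dockers/VMs keep open.
# Any hit under /mnt/disk* means part of that share is on the array and will
# keep those disks spun up.
for share in appdata domains system; do
  echo "== $share =="
  ls -d /mnt/disk*/"$share" /mnt/cache/"$share" 2>/dev/null \
    || echo "  (not found on this machine)"
done
```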
wildfire305 Posted December 9, 2021
I'll double check, but I'm pretty sure I put all of those on the SSD cache pool when I set it up. I think the installation instructions told me to do so.
trurl Posted December 9, 2021
23 hours ago, trurl said:
    Attach diagnostics to your NEXT post in this thread.
wildfire305 Posted December 9, 2021
I see this one a lot: shfs /mnt/user -disks 63 -o noatime,allow_other -o remember=330
But I think that is just normal array writing activity. I do have the urbackup Docker; I know that keeps it awake. But even putting that one on a schedule or turning it off completely doesn't fix the problem. cvg02-diagnostics-20211209-1200.zip
trurl Posted December 9, 2021
Your appdata share has files on disk1. What do you get from the command line with this?
du -h -d 1 /mnt/disk1/appdata
wildfire305 Posted December 9, 2021
Should I move that somehow? I don't typically run krusader unless I'm feeling in a GUI mood.
trurl Posted December 9, 2021
If krusader isn't started, mover should be able to move its appdata unless they are duplicates. What do you get with this?
du -h -d 1 /mnt/cache/appdata
wildfire305 Posted December 9, 2021
root@CVG02:~# du -h -d 1 /mnt/cache/appdata
5.0G    /mnt/cache/appdata/Plex-Media-Server
13M     /mnt/cache/appdata/krusader
0       /mnt/cache/appdata/jsdos
0       /mnt/cache/appdata/Shinobi
5.2M    /mnt/cache/appdata/DiskSpeed
464K    /mnt/cache/appdata/QDirStat
0       /mnt/cache/appdata/MotionEye
2.7M    /mnt/cache/appdata/HandBrake
0       /mnt/cache/appdata/photostructure
7.1G    /mnt/cache/appdata/duplicati
12K     /mnt/cache/appdata/transmission
812K    /mnt/cache/appdata/luckybackup
60K     /mnt/cache/appdata/filebrowser
111M    /mnt/cache/appdata/crushftp
17M     /mnt/cache/appdata/ripper
41G     /mnt/cache/appdata/photoprism
98M     /mnt/cache/appdata/JDownloader2
1.8G    /mnt/cache/appdata/mariadb-official
63M     /mnt/cache/appdata/sabnzbd
228K    /mnt/cache/appdata/MakeMKV
100M    /mnt/cache/appdata/firefox
3.8M    /mnt/cache/appdata/nzbhydra2
6.3G    /mnt/cache/appdata/binhex-urbackup
51M     /mnt/cache/appdata/LibreELEC
348M    /mnt/cache/appdata/digikam
208M    /mnt/cache/appdata/tdarr
596K    /mnt/cache/appdata/beets
595M    /mnt/cache/appdata/FoldingAtHome
16K     /mnt/cache/appdata/NoIp
333M    /mnt/cache/appdata/clamav
18G     /mnt/cache/appdata/jellyfin
0       /mnt/cache/appdata/PlexMediaServer
184M    /mnt/cache/appdata/mysql
13M     /mnt/cache/appdata/vm_custom_icons
81G     /mnt/cache/appdata
trurl Posted December 9, 2021
Since it should be using cache instead of disk1 when there are duplicates, it doesn't seem like that should spin up disks. You can delete the krusader appdata on disk1, or even in both places, since it will get recreated when krusader is started. I haven't used it much, so I'm not sure if there is anything worth keeping in its appdata or not.
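A sketch of that cleanup, guarded so it only runs if the disk1 copy exists (path as discussed in this thread; make sure krusader is stopped first):

```shell
# Remove the stale krusader appdata from the array disk; the container
# recreates what it needs on next start. The guard makes this a no-op if
# the directory is already gone.
if [ -d /mnt/disk1/appdata/krusader ]; then
  rm -r /mnt/disk1/appdata/krusader
fi
```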
wildfire305 Posted December 9, 2021
I ran a diff on the two folders, and it appears that the newer data is on cache. And disk1 doesn't contain anything that cache doesn't already have.
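For anyone else checking the two copies, a sketch of that comparison (paths as used in this thread):

```shell
# -r recurses into subdirectories, -q names differing files instead of
# printing full diffs. Files present on only one side are reported as
# "Only in ...". diff exits non-zero when differences exist, so || true
# keeps a scripted run from aborting.
diff -rq /mnt/disk1/appdata /mnt/cache/appdata || true
```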
wildfire305 Posted December 10, 2021
I deleted it (krusader on disk1). Shut down all Dockers and all VMs. Turned off the File Integrity plugin's automatic scan. Still, all the disks pop awake about a minute after being spun down. shfs is the thing that shows up in iotop. Where else can I look to try to find the culprit? I used to have the file caching plugin a while back, but I noticed that it was spending too much time scanning my directories and, ironically, it too was keeping the disks awake. The disks used to sleep a while ago. Would the eSATA enclosures have anything to do with the issue? I did move the other half of the disks into them a while ago, around the time I noticed the array wasn't staying spun down anymore.
trurl Posted December 10, 2021
1 hour ago, wildfire305 said:
    Would the eSATA enclosures have anything to do with the issue?
Could be
wildfire305 Posted December 10, 2021
Santa Claus might be helping me fix that this year. I'll update this thread if he does. Those eSATA enclosures have weird issues that waste a lot of my time. I added a Seagate drive once, and so many errors popped up under load. Turning off NCQ fixed that problem, but speed suffered dramatically. I removed the Seagate and turned NCQ back on. That Seagate tested perfectly fine on a motherboard SATA port.
wildfire305 Posted January 2, 2022
SANTA DELIVERED! I converted everything to an external SAS enclosure, and while I now get around 160 MB/s on a parity check across six disks, it didn't resolve the issue with the disks not staying asleep. However, after doing some more digging, I used the inotifywait command to watch what was actually going on with the disks. Problems found:
1. I had mistakenly put a syslog server on the array instead of the SSD cache.
2. I had configured all eight of the Windows computers on the network to use File History (the W10 built-in backup) to back up to the array. I had it landing on the cache, but I failed to realize that Windows would compare existing files in the backup, and it was waking the disks to do that. I moved those shares completely to cache; they were already being backed up to the cloud daily.
3. I re-configured a lot of daily server maintenance tasks to take place within the same two-hour window.
4. I temporarily stopped all VMs and Dockers and the automatic part of the File Integrity plugin.
I'll see if I find more, but the point of this post was to mention that the inotifywait command allowed me to quickly and easily see what was going on. The syntax I used was:
inotifywait -r -m /mnt/diskx
Be patient; sometimes that command took a while (several minutes) to start.
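Building on that, a slightly fuller sketch that timestamps each event and watches every array disk at once. It assumes inotify-tools is available (it was on the poster's box, since the bare command above worked); the timeout and log path are my additions, not part of the original invocation:

```shell
# -r recurses, -m keeps monitoring; --timefmt/--format stamp each line with
# a time, the event type, and the full path, so disk wake-ups can be matched
# against backup schedules and cron jobs. /mnt/disk* covers every array disk
# on the standard Unraid layout; timeout ends the watch after 10 minutes so
# it doesn't run indefinitely.
timeout 600 inotifywait -r -m \
  --timefmt '%F %T' --format '%T %e %w%f' \
  /mnt/disk* 2>/dev/null | tee /tmp/disk-activity.log
```

Reviewing /tmp/disk-activity.log afterwards makes it easy to see which paths were touched right before each spin-up.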