SafteScizors Posted August 4, 2022 (edited)

I just started noticing a weird issue: a huge number of writes to disk1 of my server (~3.5 million reads / 4.5 million writes in 20 minutes of uptime). Recently the GUI crashed (the flash drive disappeared, and the storage disks alternated between showing and disappearing). The server crashed soon after, with everything "dying" and multiple disks becoming unwritable. I have a feeling it's the large amount of transcoding (converting H.264 to H.265) combined with mounting Gdrive into the system, but if anyone has ideas about other potential causes, let me know.

Total DISK READ : 41.38 M/s | Total DISK WRITE : 41.27 M/s
Actual DISK READ: 41.04 M/s | Actual DISK WRITE: 0.00 B/s
23 be/4 root 0.00 B/s 3.46 K/s ?unavailable? [kworker/u98:0-writeback]
18494 be/4 root 1327.91 K/s 663.96 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
12534 be/4 root 774.62 K/s 1991.87 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
18775 be/4 root 774.62 K/s 1327.91 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
18960 be/4 root 1327.91 K/s 2.38 M/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
6156 be/4 root 0.00 B/s 38.04 K/s ?unavailable? dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --log-level=fatal --storage-driver=btrfs
6239 be/4 root 0.00 B/s 13.83 K/s ?unavailable? dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --log-level=fatal --storage-driver=btrfs
8789 be/4 root 0.00 B/s 13.83 K/s ?unavailable? s6-supervise tdarr_node
19129 be/4 root 774.62 K/s 0.00 B/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
19459 be/4 root 995.93 K/s 1549.23 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
19572 be/4 root 995.93 K/s 663.96 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
19598 be/4 root 1106.59 K/s 0.00 B/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
10111 be/4 root 885.28 K/s 663.96 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
19484 be/4 root 995.93 K/s 0.00 B/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
19485 be/4 root 885.28 K/s 221.32 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0
7846 be/4 root 0.00 B/s 55.33 K/s ?unavailable? [kworker/u97:12+loop2]
18113 be/4 root 774.62 K/s 442.64 K/s ?unavailable? shfs /mnt/user -disks 7 -o default_permissions,allow_other,noatime -o remember=0

home-diagnostics-20220803-2318.zip

Edited August 4, 2022 by SafteScizors (Removed Excess Files)
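A quick way to make a dump like the one above easier to read is to rank writers by command. This is a sketch, not from the thread: it assumes an iotop batch capture in the whitespace-separated format pasted above (field 6 is the write rate, field 9 the command) and, for simplicity, that every rate is in K/s. The sample file and path are illustrative.

```shell
# Build a small sample mimicking the iotop lines above (assumed format, rates in K/s)
cat > /tmp/iotop-sample.txt <<'EOF'
18494 be/4 root 1327.91 K/s 663.96 K/s ?unavailable? shfs /mnt/user
12534 be/4 root 774.62 K/s 1991.87 K/s ?unavailable? shfs /mnt/user
6156 be/4 root 0.00 B/s 38.04 K/s ?unavailable? dockerd
EOF
# Sum the WRITE column ($6) per command name ($9) to see which process dominates
awk '{w[$9] += $6} END {for (c in w) printf "%s %.2f K/s\n", c, w[c]}' /tmp/iotop-sample.txt
# Prints (order may vary): shfs 2655.83 K/s and dockerd 38.04 K/s
```

With a real capture (e.g. `iotop -b -o` redirected to a file), the same one-liner makes it obvious whether shfs, dockerd, or something else is doing the writing.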
JorgeB Posted August 4, 2022

The diags are from after rebooting, so there's not much to see. If it happens again, grab them before rebooting.
Lorthium Posted August 8, 2022

Did you ever get anywhere with this? I have the same issue.

unraid-diagnostics-20220808-2117.zip
trurl Posted August 8, 2022

28 minutes ago, Lorthium said: I also have the same issue

Describe your issue in your own words. I will probably split you into your own thread so OP can have this one.
Lorthium Posted August 8, 2022

Sure thing, sorry. I am also seeing constant writes to the array:

598 be/4 root 0.00 B/s 88.35 K/s ?unavailable? dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --log-level=fatal --storage-driver=btrfs
23229 be/4 root 0.00 B/s 7.68 K/s ?unavailable? shfs /mnt/user -disks 3 -o default_permissions,allow_other,noatime -o remember=0
23302 be/4 root 0.00 B/s 11.52 K/s ?unavailable? shfs /mnt/user -disks 3 -o default_permissions,allow_other,noatime -o remember=0
23462 be/4 root 0.00 B/s 3.84 K/s ?unavailable? shfs /mnt/user -disks 3 -o default_permissions,allow_other,noatime -o remember=0
23469 be/4 root 0.00 B/s 3.84 K/s ?unavailable? shfs /mnt/user -disks 3 -o default_permissions,allow_other,noatime -o remember=0
23408 be/4 root 0.00 B/s 3.84 K/s ?unavailable? shfs /mnt/user -disks 3 -o default_permissions,allow_other,noatime -o remember=0

They vary in size from just a few K/s up to around 20 M/s at most. The system hasn't crashed because of this and no disks have failed, but it has only been happening for the last couple of days. I initially thought it was due to transcoding video from H.264 to H.265, since iotop referenced the tdarr containers a lot, but even with those shut down it doesn't make much of a difference. I never see anything else mentioned in a constant manner.
trurl Posted August 8, 2022

Both of you have appdata, domains, and system shares with files on the array. You want these shares entirely on a fast pool (cache) so docker/VM performance isn't impacted by the slower parity array, and so array disks can spin down, since these files are always open.
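One way to confirm what trurl describes is to measure how much of each share still lives on array disks. This is a sketch: on a real Unraid box you would point `du` at `/mnt/disk*/{appdata,domains,system}`; here it runs against a throwaway directory tree so it is safe to try anywhere.

```shell
# Build a fake /mnt layout standing in for array disks (illustrative paths)
root=$(mktemp -d)
mkdir -p "$root/disk1/appdata/plex" "$root/disk2/system/docker"
head -c 4096 /dev/zero > "$root/disk1/appdata/plex/db.sqlite"
# Any non-empty result means that share still has files on an array disk;
# on Unraid the glob would be /mnt/disk*/{appdata,domains,system}
du -s "$root"/disk*/{appdata,domains,system} 2>/dev/null
```

If the output lists sizes under any diskN path, those are the files keeping the array disks spinning.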
Lorthium Posted August 8, 2022

I had set them to cache-preferred and ran mover, but it seems some files were left behind. I'll try to move them with Krusader and see how it goes, thanks!
trurl Posted August 8, 2022

Nothing can move open files.

9 minutes ago, trurl said: these files are always open

You will have to disable Docker and VM Manager in Settings. And mover won't move duplicates, so if anything is left on the array you will need to take a look. The files on cache are probably the current versions.
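Since mover skips duplicates, the files needing a manual look are exactly those present on both the array copy and the cache copy of a share. A sketch of how to list them, using illustrative throwaway directories (on Unraid the two sides would be something like /mnt/disk1/appdata and /mnt/cache/appdata):

```shell
# Fake array-side and cache-side copies of an appdata share (hypothetical names)
array=$(mktemp -d); cache=$(mktemp -d)
mkdir -p "$array/swag" "$cache/swag" "$cache/plex"
echo stale   > "$array/swag/nginx.conf"    # duplicate: exists on both sides
echo current > "$cache/swag/nginx.conf"
echo cache-only > "$cache/plex/prefs.xml"  # cache-only: mover ignores it anyway
# comm -12 keeps only lines common to both sorted file lists
comm -12 <(cd "$array" && find . -type f | sort) \
         <(cd "$cache" && find . -type f | sort)
# → ./swag/nginx.conf
```

Each path printed exists on both sides; compare timestamps or contents by hand, then delete the stale copy (usually the array one, as trurl notes) so mover has nothing left to skip.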
Lorthium Posted August 8, 2022

25 minutes ago, trurl said: Nothing can move open files. You will have to disable Docker and VM Manager in Settings. And mover won't move duplicates, so if anything is left on the array you will need to take a look. The files on cache are probably the current versions.

This was the magic. I stopped the VMs and Docker, ran mover again, and once it was done the constant writing to the disks stopped. A few appdata items like Swag did remain, but I'll investigate those myself. Hope this also resolves the issue for OP.
Lorthium Posted August 8, 2022

Just in case it helps you, OP: the above drastically reduced the reads and writes I was seeing on the array, but some remained. After a little more digging I found that Time Machine was responsible for the rest; by default it runs a backup every hour. Hope this helps, but I'm guessing the fix from trurl above will be the main one for you.