jeffrey.el Posted November 10, 2021

Hello! I have an odd issue I can't seem to find the cause of. About a week ago my Unraid server finished a parity check, and during that check two 8TB disks got disabled due to an I/O error (the only thing I could find in the logs). One 8TB disk was the parity disk and the other was a data disk. I noticed this because some folders on my share were suddenly empty.

I was able to re-enable the data disk and started another parity check/rebuild to bring the parity disk back up as well. This succeeded without any issues. However, a day or two later, the 8TB data disk got disabled again. Since the parity disk was still running without any signs of failure, I decided to remove the drive from the array and add it again, forcing Unraid to wipe the drive and do another rebuild. This also completed without problems.

I thought the issue would be resolved by now, but today the 8TB disk was disabled again, and I have no idea what is causing this. I have attached my server's diagnostics and hope someone can help me shed some light on the cause. My first thought was to rebuild the array again, but since it has happened twice now I want to know for sure. The SMART tests all came back green, showing no signs of disk failure, and the cables also seem to work fine: I'm using SAS backplanes, and the other three drives connected over the same cable run without issues.

Is there anybody who might know what I'm missing or maybe doing wrong? Or could it be that the disk is indeed dying?

obscurity-diagnostics-20211110-2006.zip
jeffrey.el (Author) Posted November 10, 2021

Never mind, I missed something. I see there are read/write errors on 4 specific sectors:

Nov 8 21:00:30 Obscurity kernel: md: disk13 read error, sector=9293734176
Nov 8 21:00:30 Obscurity kernel: md: disk13 read error, sector=9293734184
Nov 8 21:00:30 Obscurity kernel: md: disk13 read error, sector=9293734192
Nov 8 21:00:30 Obscurity kernel: md: disk13 read error, sector=9293734200
Nov 8 21:00:56 Obscurity kernel: md: disk13 write error, sector=9293734176
Nov 8 21:00:56 Obscurity kernel: md: disk13 write error, sector=9293734184
Nov 8 21:00:56 Obscurity kernel: md: disk13 write error, sector=9293734192
Nov 8 21:00:56 Obscurity kernel: md: disk13 write error, sector=9293734200

Is it possible to fix these by using the preclear plugin to zero the drive?
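For reference, a quick way to pull the unique failing sectors out of the syslog. This is only a sketch: it assumes the usual Unraid log path `/var/log/syslog` and the `md: diskN … error, sector=` message format shown above; adjust the disk name to match your array.

```shell
#!/bin/sh
# List the unique failing sectors reported for disk13 in the syslog.
# /var/log/syslog is the usual Unraid location (assumption); pass a
# different log file as the first argument if yours lives elsewhere.
syslog="${1:-/var/log/syslog}"

grep -oE 'md: disk13 (read|write) error, sector=[0-9]+' "$syslog" \
  | grep -oE 'sector=[0-9]+' \
  | sort -t= -k2 -n -u
```

Because the same sector shows up once for the read error and once for the failed write-back, the `sort -u` step collapses the duplicates and shows how many distinct sectors are actually affected.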
trurl Posted November 10, 2021

Emulated disk13 is mounted, so all your files should be accessible. It looks like you have only run the short SMART test. You might have to disable spin-down on the disk to get the extended test to complete.
jeffrey.el (Author) Posted November 10, 2021

I'm in a different situation now. I was trying to replace the failed 8TB disk to prevent any further issues, but after I tried to stop the array, Unraid got stuck with the following error message spamming the log:

Nov 10 22:44:39 Obscurity kernel: md: disk0 read error, sector=488460120
Nov 10 22:44:39 Obscurity kernel: md: disk0 read error, sector=488460120
Nov 10 22:44:39 Obscurity kernel: md: disk0 read error, sector=488460120

Is there anything I can do to safely stop it so that I can spin it up one more time? The other disk also only has 4 bad sectors, so not all information should be lost... right?
jeffrey.el (Author) Posted November 10, 2021

One more note: this started happening after I enabled Unraid to spin down disks. Before that, they ran 24/7. I read somewhere on the forum that I/O errors can be spin-down related?
JorgeB Posted November 11, 2021

https://forums.unraid.net/topic/103938-69x-lsi-controllers-ironwolf-disks-disabling-summary-fix/?do=getNewComment
jeffrey.el (Author) Posted November 15, 2021

On 11/11/2021 at 8:47 AM, JorgeB said:
> https://forums.unraid.net/topic/103938-69x-lsi-controllers-ironwolf-disks-disabling-summary-fix/?do=getNewComment

This looks like the exact issue I'm having, especially since I didn't have these problems before enabling drive spin-down. For now I have set the drives to never spin down, and so far there have been no issues, so I'm going to apply this fix. Thanks a lot for this information! Marking the post as solved.
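For readers landing here later: as I understand the linked thread, the fix is to disable two Seagate power-management features (EPC and low-current spinup) on the affected Ironwolf drives using Seagate's SeaChest utilities. The sketch below is recalled from that thread, not verified here: the `/dev/sg6` handle is a placeholder for your drive, and exact binary names and flags vary between SeaChest releases, so confirm the commands against the linked post before running them.

```shell
# Hedged sketch of the fix from the linked thread (assumptions noted above).
# 1. List drives to find the right handle (replace /dev/sg6 below with yours):
SeaChest_Info --scan

# 2. Disable the EPC (Extended Power Conditions) feature on the Ironwolf:
SeaChest_PowerControl -d /dev/sg6 --EPCfeature disable

# 3. Disable low-current spinup:
SeaChest_Configure -d /dev/sg6 --lowCurrentSpinup disable
```

The settings persist on the drive itself, so they only need to be applied once per affected disk.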