Patmanduu Posted May 19, 2022 (edited May 20, 2022)

The other day I noticed that one of my two fairly new parity disks was disabled. The SMART report looked clean, so I followed the usual re-enable procedure for the disabled parity disk:

1. Stop the array.
2. Unassign the disk.
3. Start the array.
4. Stop the array.
5. Reassign the disk.
6. Start the array; the parity build should begin.

After about 20 hours the rebuild finished successfully. A few hours later, though, the same parity disk was disabled again, AND one of the older array disks dropped into a disabled/emulated state. The affected array disk was fairly old and I had a new one on hand, so I replaced it (per https://wiki.unraid.net/Replacing_a_Data_Drive). I also checked my SATA cables and reseated the LSI card. The rebuild again took around 20 hours and all disks came back online briefly. Then a little later the same problem occurred: parity disk 2 disabled and the brand-new array disk emulated.

Both of these disks are new Ironwolf NAS drives with clean SMART reports (at least by my limited interpretation). I repeated the steps above after swapping some SATA cables around to rule them out, but no luck there. I can't swear that the affected array disk was emulated right from the start, but since it has been replaced I am not sure what is going on. Any help would be greatly appreciated! Diagnostics attached.

The syslog shows both read and write errors on "disk29", though I am unsure which disk that is. Parity 2's disk log shows several entries like:

May 19 15:21:52 Spring kernel: blk_update_request: I/O error, dev sdj, sector 722319680 op 0x0:(READ) flags 0x0

spring-diagnostics-20220519-1556.zip
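To narrow down which physical drive is throwing those errors, grepping the diagnostics syslog for the kernel's I/O-error lines is a quick first step. This is a minimal sketch using a sample copied from the log line quoted above; the `/tmp` path is just for illustration:

```shell
# Recreate a sample of the quoted syslog line (illustrative path only;
# on a real system you would grep the syslog from the diagnostics zip).
cat > /tmp/syslog.sample <<'EOF'
May 19 15:21:52 Spring kernel: blk_update_request: I/O error, dev sdj, sector 722319680 op 0x0:(READ) flags 0x0
EOF

# Pull out every kernel I/O error line and note the device it names (sdj here).
grep -E 'blk_update_request: I/O error' /tmp/syslog.sample
```

On the live server, `ls -l /dev/disk/by-id` maps an sdX name to a drive serial number, and `smartctl -a /dev/sdj` shows that specific drive's SMART data, which helps tie a "disk29" style label back to physical hardware.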
JorgeB Posted May 20, 2022 (Solution)

See here:
Patmanduu Posted May 20, 2022

Thank you @JorgeB. The two affected drives are indeed both 8TB Ironwolf ST8000VN004, and both are connected to my LSI card, so the issue you linked is the likely cause. Before modifying any disk settings, however, I decided to move the affected disks off the LSI and onto the motherboard SATA ports, which should confirm whether that is the problem. I have a total of 12 disks (only three of which are ST8000VN004) and 8 SATA ports on the motherboard, so those three do not need to be connected to the LSI. So far, the array disk is back online and the parity 2 rebuild is progressing normally.
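For anyone who hits the same ST8000VN004-behind-LSI symptom but cannot move the drives to motherboard SATA, the fix usually discussed alongside this issue is disabling the drives' EPC and low-current-spinup features with Seagate's openSeaChest tools. The sketch below only prints the commands it would run; the tool names, flags, and `/dev/sg*` device nodes are assumptions to verify (e.g. with each tool's `--help`) before executing anything against real drives:

```shell
# Dry-run sketch: print, don't execute, the per-drive commands.
# The openSeaChest flags and the /dev/sg2 and /dev/sg3 nodes are assumed;
# confirm both against your own system before setting DRY_RUN=0.
DRY_RUN=1
: > /tmp/seachest_plan.log
for dev in /dev/sg2 /dev/sg3; do   # hypothetical nodes for the Ironwolf drives
  for cmd in \
      "openSeaChest_PowerControl -d $dev --EPCfeature disable" \
      "openSeaChest_Configure -d $dev --lowCurrentSpinup disable"; do
    if [ "$DRY_RUN" -eq 1 ]; then
      echo "would run: $cmd" | tee -a /tmp/seachest_plan.log
    else
      $cmd   # unquoted on purpose so the command string word-splits
    fi
  done
done
```

Moving the drives to the onboard SATA ports, as done above, sidesteps the HBA interaction entirely, so the dry run is only worth pursuing when the LSI connection has to stay.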