woolooloo Posted October 8, 2018

I'm on 5.0.6; I don't use my server very actively and haven't gotten around to upgrading to 6. It's been working fine with minimal effort for a couple of years. I recently noticed it had been about a year since my last parity check, so I kicked one off the other day. When I came back to check the status today, one of my disks was disabled. It was an older 1.5TB drive and I had a spare sitting around, so I went ahead and swapped it out.

Since then, I've been trying to rebuild onto the new disk, but 1) the write count on the new drive never increments even though the rebuild's % complete keeps going up, and 2) after a while other drives start showing a massive number of errors, and if I try to look at their contents, those disks show up as empty. Stopping the array then shows those drives as disabled. A reboot seems to clear it back up and starts another rebuild. If I stop the array, remove the new disk from it, and start the array again, all of the files on that disk show up through emulation, so everything still seems intact at this point.

I just don't understand what's causing the rebuild to fail. It could be a failing SATA controller, a failing IcyDock, or some other hardware component, but I'm not sure of the best way to track it down, and I don't want to nuke multiple disks through repeated attempts. Anyone have thoughts on the best way to approach this? TIA
JorgeB Posted October 8, 2018

Please post your syslog from after the problem occurs, along with SMART reports for all drives.
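(For reference, one way to gather SMART reports for every drive in one pass is a short script like the sketch below. It assumes smartctl from smartmontools and a Python 3 interpreter are available on the box; the /boot/logs destination is just an example, any writable path works.)

#!/usr/bin/env python3
# Sketch: dump a full SMART report for every /dev/sd? device via smartctl.
# Assumes smartmontools is installed and Python 3.7+ is available; the
# /boot/logs destination is an example, any writable path works.
import glob
import subprocess

for dev in sorted(glob.glob("/dev/sd?")):
    name = dev.rsplit("/", 1)[-1]                # e.g. "sda"
    report = subprocess.run(["smartctl", "-a", dev],
                            capture_output=True, text=True)
    with open("/boot/logs/smart-%s.txt" % name, "w") as f:
        f.write(report.stdout)
    print("wrote /boot/logs/smart-%s.txt" % name)

Each smart-sdX.txt file can then be attached to a post as-is.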
woolooloo Posted October 17, 2018

Sorry for the delay; I got pulled into jury duty last week, which turned my life upside down, and I'm finally digging my way out. After starting a rebuild, the errors started basically immediately; my syslog had grown to 128 MB of read errors by the time I stopped it a couple of minutes later. I've truncated it to post here, removing all the repetitive read errors at the end. After stopping the rebuild, unRAID says there are 6 drives missing. Disk 10 is actually the one I replaced. My IcyDocks hold 4 drives each, but if memory serves, at least one of my SATA cards has 6 drives on it. Could that be going bad?

syslog-2018-10-17 - truncated.txt
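(In case it helps anyone doing the same cleanup: a minimal sketch of how the repetitive read-error lines can be stripped from a huge syslog before posting. The "read error" substring is an assumption about what the repeated lines contain; adjust it to match the actual log.)

#!/usr/bin/env python3
# Sketch: copy a syslog while dropping most of the repetitive read-error
# lines, keeping the first few as a sample. The "read error" substring is
# an assumption about the repeated lines; adjust it to match the real log.
import sys

KEEP_FIRST = 20
matched = 0
with open(sys.argv[1], errors="replace") as src, open(sys.argv[2], "w") as dst:
    for line in src:
        if "read error" in line.lower():
            matched += 1
            if matched > KEEP_FIRST:
                continue          # skip the rest of the repeats
        dst.write(line)
print("dropped %d repetitive lines" % max(0, matched - KEEP_FIRST))

Run it as: python3 trim_syslog.py syslog-full.txt syslog-trimmed.txt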
JorgeB Posted October 17, 2018

It appears your SASLP controller crashed. Interestingly, while this is very common on v6, it's quite rare on v5. Try rebooting; the missing disks should show up. If they don't, or if it happens again, replace the controller.
woolooloo Posted October 18, 2018

It's happened multiple times now after rebooting, so I will try to identify which controller these drives are attached to and replace it. Thanks for the insight. I may need some help figuring out how to replace the controller while maintaining the integrity of the array, especially since one of the drives is dead, so I can't just rebuild parity.
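(For anyone needing to work out which disks sit behind which controller: one way is to read the /dev/disk/by-path symlinks and match the PCI address against lspci. A rough sketch, assuming the standard by-path layout, which can vary between kernels:)

#!/usr/bin/env python3
# Sketch: list which PCI controller each whole disk hangs off, using the
# standard /dev/disk/by-path symlinks (layout can differ between kernels).
import glob
import os

for link in sorted(glob.glob("/dev/disk/by-path/pci-*")):
    if "-part" in link:
        continue                            # skip partitions, whole disks only
    target = os.path.realpath(link)         # e.g. /dev/sdc
    pci_addr = os.path.basename(link).split("-")[1]   # e.g. 0000:02:00.0
    print("%s\t%s\t%s" % (target, pci_addr, link))

Matching the printed PCI addresses against lspci output should make it clear which disks are behind the SASLP (the Marvell-based card) versus the motherboard ports.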
JorgeB Posted October 18, 2018

The controller is the SASLP, and you only have one of those. If you replace it with a regular HBA, nothing more will be needed; just restart the rebuild.
woolooloo Posted October 26, 2018

Replaced the SASLP controller with the current version of it and was able to get everything back up and running. Thanks for the help debugging everything!