Ndgame Posted February 1, 2018 Hello everyone, I have been using unRAID for a few years now, but I don't know the nitty-gritty of how to resolve problems with it. A few days ago I added four WD Red 3TB drives to my build. Prior to this, the build had five 4TB Seagate drives, and my parity is one of those drives. I noticed this morning, after the parity check, that the new drives have thousands and thousands of errors. I have excluded the new drives from the shares I had in place and created new shares that use only the new drives. Can anyone help me resolve the problem, or point me to where in unRAID I can see what needs fixing? I have attached a screenshot to help you understand what I am talking about.
itimpi Posted February 1, 2018 The chances are high that the drives have dropped offline! Providing your system diagnostics (Tools -> Diagnostics) would allow this to be confirmed.
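If the webGUI is awkward to reach, the same zip can also be generated from the command line. A minimal sketch, assuming the stock unRAID 6 `diagnostics` script is present on your build:

```
# Generate a diagnostics zip from a terminal or SSH session
# (assumes the standard unRAID 6 'diagnostics' command; run it with no arguments)
diagnostics

# The zip is normally written to the flash drive under /boot/logs
ls -l /boot/logs/
```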
JorgeB Posted February 1, 2018 Most likely the controller crashed or dropped offline; the diags would show it.
Ndgame (Author) Posted February 1, 2018 I have attached the diagnostics. I know the drives are active, as I am copying data to them. Please let me know if you see anything in this. tower-diagnostics-20180201-0950.zip
itimpi Posted February 1, 2018 According to the diagnostics, the four 3TB drives have dropped offline. The syslog is full of errors relating to them. I do not see how you can be successfully reading from or writing to those drives.
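For anyone wanting to verify this themselves, the errors are easy to spot in the syslog inside the diagnostics zip. A rough sketch, assuming a Linux shell and that the archive follows the usual unRAID layout with the syslog under a logs/ folder:

```
# Unpack the archive posted above
unzip tower-diagnostics-20180201-0950.zip -d tower-diagnostics

# Look for I/O and link errors relating to the four new drives
grep -riE 'error|link (down|reset)|frozen' tower-diagnostics --include='syslog*' | less
```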
JorgeB Posted February 1, 2018 Disk7 looks to have problems; we'll need diags from after a reboot to confirm. Then the Marvell controller went nuts and dropped all 4 disks offline. The only strange thing is that unRAID didn't disable any of the disks: I would expect disk7 to be redballed, but no write attempt was made after the read errors, which doesn't make sense on a correcting parity check.
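As a first check after the reboot, it is also worth looking at disk7's SMART data directly. A hedged sketch, assuming disk7 comes back as /dev/sdX; the sdX name here is just a placeholder for the real device letter shown on the Main page:

```
# Full SMART health, attributes and error log for the suspect drive
# (replace sdX with disk7's actual device, e.g. sdf, as listed in the unRAID GUI)
smartctl -a /dev/sdX

# Attributes worth watching: Reallocated_Sector_Ct, Current_Pending_Sector and
# UDMA_CRC_Error_Count (CRC errors usually point at cabling or the controller).
```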
Ndgame (Author) Posted February 1, 2018 For those 4 drives I used an external bay that connects all 4 drives to my server with an eSATA cable. I just tested again and successfully moved a 4GB file with no errors. What would you recommend to resolve the issue?
JorgeB Posted February 1, 2018 1 minute ago, Ndgame said: What would you recommend to resolve the issue? You need to reboot, then grab and post new diags. 1 minute ago, Ndgame said: For those 4 drives I used an external bay that connects all 4 drives to my server with an eSATA cable This is a bad idea and a disaster waiting to happen. If you need external storage, use SAS, not eSATA or USB.
JorgeB Posted February 1, 2018 49 minutes ago, johnnie.black said: The only strange thing is that unRAID didn't disable any of the disks: I would expect disk7 to be redballed, but no write attempt was made after the read errors, which doesn't make sense on a correcting parity check. OK, looking more carefully at the syslog, I think I know why the disk wasn't disabled. The read errors on disk7 happened multiple times, and for most of them there were only read errors on disk7, which would mean unRAID wrote those sectors back successfully. Then, with the last ones that started at 5:47, there were issues with all four disks at practically the same time, so they dropped offline very close together, before unRAID could try to write the last read errors back.
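To see that sequence for yourself, the timestamps in the syslog tell the story. A rough sketch, assuming the traditional "Feb  1 HH:MM" syslog timestamp format and deliberately broad search terms rather than exact unRAID message strings:

```
# Show everything logged in the 05:40-05:49 window when the drives dropped,
# then narrow it to disk/ATA errors (terms are approximate, adjust as needed)
grep -rh --include='syslog*' '^Feb  1 05:4' tower-diagnostics | grep -iE 'disk[0-9]|ata|error|offline'
```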