
Parity Check errors


Ndgame


Hello Everyone,

 

I have been using Unraid for a few years now, but I don't know the nitty-gritty of how to resolve problems with it.

 

A few days ago I added four WD Red 3TB drives to my build. Prior to this I had five 4TB Seagate drives, and my parity disk is one of those.

 

I noticed this morning, after the parity check, that the new drives have thousands and thousands of errors. I have excluded the new drives from the shares I already had in place and created new shares that use only the new drives.

 

Can anyone help me resolve the problem, or direct me to a place in Unraid where I can see the problem and fix it? I have attached a screenshot to help you understand what I am talking about.

driveerrors.JPG


Disk7 looks to have problems; we'll need post-reboot diagnostics to confirm. Then the Marvell controller went nuts and dropped all 4 disks.

 

The only strange thing is that none of the disks were disabled. I would expect disk7 to be red-balled, but no write attempt was made after the read errors, which doesn't make sense on a correcting parity check.

1 minute ago, Ndgame said:

What would you recommend to resolve the issue?

You need to reboot, then grab and post new diagnostics.
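
For anyone following along, here is a minimal sketch of scripting that from the server console, assuming the stock diagnostics command and smartmontools are present (the same zip can also be made from Tools -> Diagnostics in the web GUI; /dev/sdg below is a hypothetical stand-in for disk7's device):

    # Sketch only -- both commands can equally be run by hand at the console.
    import subprocess

    # Build the full diagnostics zip; Unraid writes it under /boot/logs/.
    subprocess.run(["diagnostics"], check=True)

    # Capture disk7's SMART report. /dev/sdg is a placeholder -- substitute
    # the device shown for disk7 on the Main page.
    subprocess.run(["smartctl", "-a", "/dev/sdg"], check=True)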

 

1 minute ago, Ndgame said:

For those 4 drives I used an external bay that connects all 4 drives to my server with an eSATA cable

This is a bad idea and a disaster waiting to happen. If you need external storage, use SAS, not eSATA or USB.

49 minutes ago, johnnie.black said:

The only strange thing is that none of the disks were disabled. I would expect disk7 to be red-balled, but no write attempt was made after the read errors, which doesn't make sense on a correcting parity check.

OK, looking more carefully at the syslog, I think I know why the disk wasn't disabled. The read errors on disk7 happened multiple times, and for most of them the read errors were on disk7 only, which means unRAID wrote those sectors back successfully. Then, on the last ones that started at 5:47, there were issues with all four disks practically at the same time, so they dropped offline very close together, before unRAID could write the last read errors back.
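
To illustrate that logic, here is a conceptual Python sketch (not Unraid's actual code; every name in it is invented): on a correcting check, a read error is repaired by rebuilding the sector from parity plus the other data disks and writing it back, and a disk is only disabled when a write to it fails.

    # Conceptual sketch only -- NOT Unraid's actual code; all names are
    # invented. It illustrates why a disk with read errors during a
    # correcting parity check is only disabled ("red-balled") when the
    # corrected data cannot be written back.

    class Disk:
        def __init__(self, data, fail_reads=(), fail_writes=()):
            self.data = bytearray(data)
            self.fail_reads = set(fail_reads)    # sectors that error on read
            self.fail_writes = set(fail_writes)  # sectors that error on write
            self.disabled = False

        def read(self, sector):
            if sector in self.fail_reads:
                raise IOError("read error at sector %d" % sector)
            return self.data[sector]

        def write(self, sector, value):
            if sector in self.fail_writes:
                raise IOError("write error at sector %d" % sector)
            self.data[sector] = value

    def correcting_check(data_disks, parity, sector):
        """One sector of a correcting parity check (single-parity XOR math)."""
        for i, disk in enumerate(data_disks):
            try:
                disk.read(sector)
            except IOError:
                # Read error: rebuild the byte as parity XOR the other data disks.
                rebuilt = parity.read(sector)
                for j, other in enumerate(data_disks):
                    if j != i:
                        rebuilt ^= other.read(sector)
                try:
                    disk.write(sector, rebuilt)  # write-back OK: disk stays enabled
                except IOError:
                    disk.disabled = True         # failed write-back: red-balled

    # A plain read error is corrected in place and the disk stays enabled:
    disks = [Disk([1, 2]), Disk([3, 4], fail_reads={0})]
    parity = Disk([1 ^ 3, 2 ^ 4])
    correcting_check(disks, parity, 0)
    print(disks[1].disabled)  # False -- sector rebuilt and written back

In this thread's case, all four disks dropped offline before that write-back step could run on the last errors, which is why nothing ended up disabled.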


