Red X - Parity Device Disabled Warning When Array Was Started



I received the Parity Device Disabled warning with a red X next to the drive.  I think it may have been due to a physical issue, since I moved the box slightly to check what I needed for the cache drive problem I have been dealing with (I need to replace the M.2 SSD).

 

After that bit of movement I got the error, and now it says the parity device is disabled.  I am not sure what the proper procedure is to get it to see the parity drive again.

 

The short SMART report came back just fine; the long one has been sitting at 10% for a while.

 

I started the read test thinking that would re-enable the parity, but I am not sure what the proper sequence is.

 

I did power down the machine and remove and refasten the data and power cables on the drive and on the board.

 

tower-diagnostics-20190617-2011.zip
tower-smart-20190617-1544.zip


Parity may well not have been perfect, as a red ‘x’ means a write has failed.

 

The steps you went through are for the general case, where you make no assumptions about the state of parity.  As you were virtually certain that your parity was almost perfect, you could instead have done:

  • Stop the array.
  • Use Tools >>> New Config, selecting the option to retain all current assignments.
  • On the Main tab, tick the ‘parity is correct’ checkbox and then start the array.

At this point I would recommend doing a correcting parity check, just in case parity was not as perfect as you thought.  I would expect at least one correction, corresponding to the original red ‘x’, and it is most likely to occur near the beginning of the check.
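
If you want to watch the check's progress from the command line, something like this should work (mdcmd is Unraid's array-management helper; the exact variable names can differ between releases, so treat this as a sketch):

    # show array state and parity sync/check progress
    # (the mdResync* variable names may vary with the Unraid release)
    mdcmd status | grep -E "mdState|mdResync"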

 

If you include the recommended parity check in the above steps, the total elapsed time is not much different.  The difference is that, if you were right about parity being valid, you have a better chance of recovering if an array data disk fails during the process.


10MB/sec sounds very slow, but your screenshot suggests something may be writing to disk2, which could easily slow it down to that sort of speed.  Writes during a parity check severely degrade its speed.  Provide the diagnostics if you want to see if anyone can suggest anything else.
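
If you want to confirm what is being written while the check runs, iostat (part of the sysstat tools, which I believe are available on Unraid) gives a per-disk view:

    # extended per-device stats in MB, refreshed every 5 seconds;
    # the write-throughput column shows which disk is taking writes
    iostat -mx 5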

16 minutes ago, manolodf said:

I disabled Dockers and now the speed is more like 100MB/sec, which should be a bit more normal?

That sounds more typical :)

 

Where do you have your docker.img file located?  Ideally, for performance reasons, you want it on the cache drive, since accessing the cache drive does not affect parity check/build speeds.  If practical you also want any docker mapped paths located there for the same reason, in particular the ‘appdata’ share.
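
A quick way to check is to compare the array-only view of a share with the cache.  This assumes the standard Unraid mount points, where /mnt/user0 shows only the array portion of a user share:

    # confirm the docker image really is on the cache
    ls -lh /mnt/cache/docker.img
    # anything listed here lives on the array rather than the cache
    ls /mnt/user0/appdata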


The docker image is in /mnt/cache/docker.img

 

The appdata used to reside on a cache-only share, though I moved it to the array temporarily, since I have been having those vicious cache drive issues and got tired of restoring it so often until my new NVMe drive arrives.

 

I did update the firmware on the NVMe drive, though for now I am leaving appdata on the array and off the cache.  I will move it back once I feel a bit more comfortable.

4 minutes ago, manolodf said:

The docker image is in /mnt/cache/docker.img

 

The appdata used to reside on a cache-only share, though I moved it to the array temporarily, since I have been having those vicious cache drive issues and got tired of restoring it so often until my new NVMe drive arrives.

 

I did update the firmware on the NVMe drive, though for now I am leaving appdata on the array and off the cache.  I will move it back once I feel a bit more comfortable.

Ok.  While appdata is on the array, you will need to remember to stop the docker containers to get decent parity check speeds.
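
If stopping them one at a time is a chore, a one-liner like this stops every running container first (plain Docker CLI, nothing Unraid-specific):

    # stop all currently running containers before starting the parity check
    docker stop $(docker ps -q)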


I have installed the new cache M.2 NVMe SSD, but I am not sure why it seems to be running slower than it should.  Is there something in the BIOS that I should look for to perhaps speed up the write speed?

 

One example: a 20GB file that took 8 minutes to transfer to Disk2 yesterday took about 18 minutes to transfer to the cache today, even though the M.2 SSD should be way faster.

 

Another example: the ~40GB of appdata I am transferring is taking hours upon hours, when I remember it previously moving to and from the array much faster.

 

When Dockers are off, the parity check runs at about 100MB/sec; when they are on, it is about 20MB/sec; and when the mover and Dockers are both off it can hit 160MB/sec.

 

tower-diagnostics-20190618-2348.zip


You're right, these speeds seem unusually slow.  I don't have an NVMe drive, though I've heard there are "good" and "bad" NVMe drives.  You might search for "best nvme for unraid" and find out what NVMe drives are being used successfully by other unRAID users.

 

You should double check your overall disk and share setup.  For example, what shares are set to reside only on the cache vs those that are set to utilize the array in some capacity?

 

One last thing comes to mind, have you tried setting up Netdata docker?  I've found Netdata to be a fabulous tool for looking at IOWAIT.  Here's a resource regarding IOWAIT.

 

https://bencane.com/2012/08/06/troubleshooting-high-io-wait-in-linux/
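
For a quick look without any extra tools, vmstat also reports I/O wait; the 'wa' column is the percentage of CPU time spent waiting on disk:

    # print a new sample every 5 seconds; watch the 'wa' column
    # while reproducing the slow transfer
    vmstat 5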

 


 

23 hours ago, DoItMyselfToo said:

You're right, these speeds seem unusually slow.  I don't have an NVMe drive, though I've heard there are "good" and "bad" NVMe drives.  You might search for "best nvme for unraid" and find out what NVMe drives are being used successfully by other unRAID users.

 

You should double check your overall disk and share setup.  For example, what shares are set to reside only on the cache vs those that are set to utilize the array in some capacity?

 

One last thing comes to mind, have you tried setting up Netdata docker?  I've found Netdata to be a fabulous tool for looking at IOWAIT.  Here's a resource regarding IOWAIT.

 

https://bencane.com/2012/08/06/troubleshooting-high-io-wait-in-linux/

 

I installed Netdata, it's pretty cool!

 

How would you recommend I do a write test on the cache drive?  When I was transferring the appdata from the array to the cache my dockers were off, so I could not really see that.  Do you recommend a different test I could set up to see the write speed?


Do you think there is something in the BIOS I need to change to improve this?  My previous 256GB NVMe M.2 SSD did not have any issues in the speed department that I noticed.


My approach was to perform some task that I would usually do, and then look at the other processes in Netdata and Glances to see which processes were impacted by my task.  Then it's a matter of figuring out what needs to change in order to improve the performance.

 

Regarding your NVMe, I would just set up a typical write you would normally do, and then see which other processes, viewed in Glances, show IOWAIT.
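
If you want something repeatable rather than a normal workload, a simple dd run is one way to sketch a sequential write test (this assumes /mnt/cache is your cache mount; delete the test file afterwards):

    # sequential write test; oflag=direct bypasses the RAM page cache
    # so the reported rate reflects the drive itself
    dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=4096 oflag=direct status=progress
    rm /mnt/cache/testfile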

 

Like I mentioned earlier, I don't have an NVMe drive.  My standard SATA III SSDs are all connected to my motherboard's SATA II ports, and my performance is great.  The SATA II spec tops out at 300 MB/s, but because of parity writing, the maximum speed writing to the array is much less than that anyway.

 

You mentioned that your previous NVMe drive worked great.  Why not get another one of those?

