
Parity correction 10 MB/s, reads 140 MB/s?


petebnas


I just rebuilt my array... and when I switched hardware, I pulled two empty drives as well.  Now it's running through a parity correction at 11 MB/s, estimating about 7 days to complete... so it's just racking up 'corrections' instead of just writing out parity.  Trying to figure out whether this is the norm, a problem to solve, or a problem I created; I don't recall the 'sync errors corrected' counter climbing like this before.  Currently running v6beta14b.

 

I've been doing parity checks for a couple of years now and they usually fly along... but thinking back, those were mainly doing reads, not corrections (writes).  In the past I was preclearing drives and adding them, or perhaps rebuilding a drive, but I don't recall any step being this slow.  I ran the tunables-tester script on the new system and was getting 85 MB/s timings in the testing portion... which I believe only exercises a 'parity read/verification' operation, so I made some adjustments.  I also killed off unmenu, just to be safe, and set the stock unRAID GUI not to refresh automatically, which was another performance tweak I'd read about in a few places.
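For anyone curious what that timing portion is measuring, here's a rough Python sketch of the same idea: a plain sequential read off a disk, timed.  The /dev/sdX path is a placeholder and this skips O_DIRECT, so treat the number as a ballpark, not what tunables-tester itself reports.

```python
#!/usr/bin/env python3
"""Rough sequential-read timer -- same spirit as the timing portion of
the tunables-tester script, not the script itself.  Run as root."""
import os
import time

DEVICE = "/dev/sdX"          # placeholder -- point at one of your disks
CHUNK = 1024 * 1024          # read in 1 MiB chunks
SAMPLE = 1024 * 1024 * 1024  # time a 1 GiB read

def read_speed(path):
    fd = os.open(path, os.O_RDONLY)
    try:
        remaining = SAMPLE
        start = time.monotonic()
        while remaining > 0:
            data = os.read(fd, min(CHUNK, remaining))
            if not data:          # hit end of device
                break
            remaining -= len(data)
        elapsed = time.monotonic() - start
        return (SAMPLE - remaining) / elapsed / 1e6  # MB/s
    finally:
        os.close(fd)

if __name__ == "__main__":
    print(f"{read_speed(DEVICE):.1f} MB/s sequential read")
```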

 

Currently running a trio of 6TB drives, a pair of 4TB, and a pair of 3TB... the parity drive and three data drives hang off my motherboard, and the other three are attached to a Supermicro SAS2LP connected at x4.  All the drives are showing 6Gb/s links, and no errors are popping up...

 

Any other ideas? 

Pete

 

 


I may have solved my own issue... posting here in case someone else runs into this later.

 

I stopped the array.  Knowing the parity drive held garbage data, I went into the settings and selected 'New Config'.  I assigned all the drives properly but left the parity drive unassigned.  I started the array services back up, stopped them again, and added my old parity drive back in; it warned that it would need to build parity data and that the array would be unprotected until that was complete, etc.  I said yes, and off it went.

 

Currently estimating about 12-13 hours at 140 MB/s... not bad :)  Beats 7 days at 11 MB/s.  Hoping the wheels don't fall off the data drives in the meantime...

 

Looking at the read/write stats, my guess is that I made the mistake of saying 'parity is good' when I removed the two empty drives and started the array back up.  When I ran a parity check right after that, it wasn't happy with anything on the parity drive, so it was wasting cycles on read, check, correct, repeat.  That read/check/write cycle against the parity drive is a lot slower than reading from the data drives and sequentially writing the parity drive fresh.
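The timing arithmetic backs this up.  Taking my 6TB parity drive and the two speeds I observed:

```python
# Back-of-envelope using the numbers from this thread: a 6 TB parity
# drive scanned end to end at each observed speed.
PARITY_BYTES = 6e12  # 6 TB parity drive

for label, rate in [("corrective check", 11e6), ("fresh rebuild", 140e6)]:
    hours = PARITY_BYTES / rate / 3600
    print(f"{label}: {hours:.0f} hours (~{hours / 24:.1f} days)")
# corrective check: 152 hours (~6.3 days)
# fresh rebuild: 12 hours (~0.5 days)
```

That lands right on the ~7 days the check was estimating and the ~12-13 hours the rebuild is showing now.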

 

Pete


I suspect that was indeed the problem.  As you noted, although the end result is the same (a good parity drive), if there are a LOT of parity errors, the extra rotation of the drive required for each correction can add a LOT of time... and result in a dramatically slower average speed.
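To put a rough number on that extra rotation, here's a toy model in Python.  The 7200 rpm spindle speed and 128 KiB correction granularity are assumptions for illustration, not unRAID internals:

```python
# Toy cost model for a parity correction: every corrected chunk pays
# roughly one extra platter rotation on top of the streaming read.
# The 7200 rpm spindle and 128 KiB chunk are assumptions, not unRAID
# internals.
SEQ_RATE = 140e6         # bytes/s the drives manage when streaming
CHUNK = 128 * 1024       # assumed correction granularity
ROTATION = 60.0 / 7200   # one rotation at 7200 rpm: ~8.3 ms

per_chunk = CHUNK / SEQ_RATE + ROTATION   # read time + write-back penalty
print(f"effective speed: {CHUNK / per_chunk / 1e6:.1f} MB/s")
# effective speed: 14.1 MB/s -- the same ballpark as the 11 MB/s above
```

The rotational penalty dominates the streaming read time by nearly an order of magnitude, which is why an array that reads at 140 MB/s grinds down to low double digits when every stripe needs a write-back.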

 

The "Trust Parity" option should, of course, only be used if you're absolutely certain it's true  :)


