Is 6TB Too Big (For Parity Checks / Rebuilds?)


Recommended Posts

FYI, just to give you some comparables...

 

I'm running a P5B with a PCI video card and a SAS2LP in the x16 slot.  I have 5 drives in the array: 6TB parity, one 6TB data, and three 3TB data drives.  My parity check takes around 26 hours...

 

It was faster when I ran parity on the motherboard, but that resulted in unstable operation for my motherboard/SATA controller combo under unRAID 6.  Things are very stable with everything on the expansion card, just a bit slower.

 

I attribute the slower speeds to the old PCIe 1.1 motherboard design.  6TB parity checks are never going to be fast, but they should be faster on newer motherboards.
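For context, here's the rough math on what a 26-hour check over a 6TB parity drive implies (a quick Python sketch, using the decimal units drive makers use):

```python
# Rough average throughput implied by a 26-hour parity check over 6 TB.
# Decimal units (1 TB = 1e12 bytes), matching how drive sizes are marketed.
parity_bytes = 6e12
check_seconds = 26 * 3600
print(f"{parity_bytes / check_seconds / 1e6:.0f} MB/s average")  # ~64 MB/s
```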

 

 

How are the drives distributed among the controllers? Does the SAS2LP have only the 6TB drives?

 

All four data drives (one 6TB, three 3TB), the parity drive (6TB), and the cache drive are on the SAS2LP.  If I run any of the drives off the motherboard controllers, I get errors and false red balls on the SAS2LP under unRAID 6.  The theoretical bandwidth of an x8 card in an x16 slot on a PCIe 1.1 motherboard should be more than adequate to support 5 drives during a parity check, but in practice I find it about 10-15% slower than running off the motherboard (parity checks in particular).
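On paper that checks out: PCIe 1.1 runs at 2.5 GT/s per lane with 8b/10b encoding, i.e. 250 MB/s per lane per direction.  A quick sketch (the ~150 MB/s per-drive figure is an assumption for drives of this era):

```python
# PCIe 1.1: 2.5 GT/s per lane, 8b/10b encoding -> 250 MB/s usable per lane per direction.
lanes = 8                   # x8 card (electrically) in the x16 slot
per_lane_mb_s = 250
drives = 5
per_drive_mb_s = 150        # assumed peak sequential rate per drive
print(lanes * per_lane_mb_s, "MB/s slot bandwidth")          # 2000 MB/s
print(drives * per_drive_mb_s, "MB/s aggregate drive load")  # 750 MB/s
```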

Link to comment

To add a little data to the discussion of large parity drives, I just finished replacing my parity drive with the 8TB Seagate shingled drive, and the parity sync took 20.5 hours with an array of 1x2TB, 3x3TB, and 1x4TB.  All the array drives are connected directly to the mobo, and the parity drive is connected through a SATA III PCIe controller.
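Back-of-the-envelope, that works out to a healthy sustained rate for a shingled drive:

```python
# Average write rate implied by syncing 8 TB of parity in 20.5 hours.
sync_bytes = 8e12
sync_seconds = 20.5 * 3600
print(f"{sync_bytes / sync_seconds / 1e6:.0f} MB/s average")  # ~108 MB/s
```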

Link to comment

... I just finished replacing my parity drive with the 8TB Seagate shingled drive, and the parity sync took 20.5 hours ...

 

Can you copy a relatively large file (10G+) to an array disk (not to a user share that is cached) and monitor the speed? 
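For anyone wanting to script this rather than eyeball a file manager, a minimal Python sketch is below; the source and destination paths are placeholders, and the destination should be a disk share (e.g. /mnt/diskX) so the write bypasses any cache drive:

```python
import os
import time

# Placeholder paths: point SRC at any 10GB+ file and DST at a disk share.
SRC = "/mnt/user/isos/bigfile.iso"
DST = "/mnt/disk1/test/bigfile.iso"

CHUNK = 8 * 1024 * 1024  # copy in 8 MiB chunks

os.makedirs(os.path.dirname(DST), exist_ok=True)
copied = 0
start = last = time.time()
with open(SRC, "rb") as src, open(DST, "wb") as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        copied += len(buf)
        now = time.time()
        if now - last >= 5:  # report running average every ~5 seconds
            print(f"{copied / (now - start) / 1e6:.1f} MB/s average")
            last = now
print(f"done: {copied / (time.time() - start) / 1e6:.1f} MB/s overall")
```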

Link to comment

... Can you copy a relatively large file (10G+) to an array disk (not to a user share that is cached) and monitor the speed?

 

Not a bad test, but given what we already know about the 25GB persistent cache, it's clearly not going to cause any band rewrite issues with such a small write.  In fact, it'd be a far better test to copy a series of smaller files (smaller than a full band) to multiple array disks simultaneously (from multiple clients) ... but even this would likely work fine unless the total amount of data was well over 25GB.  Note that the band size is apparently in the 20-40MB range (depending on where on the disk the bands are) ... so the files would have to be smaller than that to force use of the persistent cache.
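If someone wants to try forcing the persistent cache, here's a rough single-client approximation (paths, file size, and counts are all assumptions; a true test would run this from several clients at once):

```python
import os

# Write many files smaller than one band (~20-40MB per the discussion above)
# so each write should land in the drive's persistent cache.  The total must
# comfortably exceed ~25GB for the cache to matter.
TARGETS = ["/mnt/disk1/smr-test", "/mnt/disk2/smr-test"]  # placeholder disk shares
FILE_MB = 16           # below the smallest reported band size
FILES_PER_DISK = 1000  # 2 disks * 1000 files * ~16 MB each = ~32 GB total

payload = os.urandom(FILE_MB * 1024 * 1024)  # incompressible test data
for target in TARGETS:
    os.makedirs(target, exist_ok=True)
    for i in range(FILES_PER_DISK):
        with open(os.path.join(target, f"chunk_{i:04d}.bin"), "wb") as f:
            f.write(payload)
```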

 

 

Link to comment

... Can you copy a relatively large file (10G+) to an array disk (not to a user share that is cached) and monitor the speed?

 

Not a bad test, but ... it'd be a far better test to copy a series of smaller files (smaller than a full band) to multiple array disks simultaneously (from multiple clients) ... but even this would likely work fine unless the total amount of data was well over 25GB. ...

 

What we know on paper and what we observe in practice are two different things. I would be interested in these results.

Link to comment

Can you copy a relatively large file (10G+) to an array disk (not to a user share that is cached) and monitor the speed?

I'm currently running a preclear on the 4TB drive I just removed from my parity slot, so take these results with a huge grain of salt. I copied an 11.2GB file directly to the array, and it took exactly 5 minutes to complete with the speed hovering right around 37MB/s during the entire transfer according to Windows, which my math (11.2GB / 5 minutes ≈ 37.3 MB/s) confirms. There were no slowdowns or speed-ups during the transfer, just a consistent 36-38MB/s.

 

I'll rerun the test once my preclear finishes in 24 hours to see if the speed improves.

Link to comment

A preclear runs outside of array operations, and on a different SATA port, so it's very unlikely to make any difference in the speed.  I think we now know enough of the technical characteristics of these drives to have a very good feel for where the performance bottlenecks will be ... and writing anything < 25GB is simply not going to be an issue.  Even > 25GB won't be a problem unless the data is such that it fills the persistent cache (which for most unRAID use cases is unlikely).

 

Link to comment