New 6TB SAS drives added, very slow parity rebuild speed



I'm in the process of adding several 6TB 512e drives (physically 4k sectors, but emulating 512-byte sectors).  Right now, the parity upgrade onto the first of these drives is going *extremely* slowly compared to the expected speed of the drive, even on the portion of the drive that is larger than any other disk in the array (i.e., no reads are required).

 

All other drives in the system are <=3TB, and the parity rebuild onto the 6TB drive is currently at the ~5TB position, but it is only writing at <40MB/sec. The only activity is writes to the new parity drive; no reads are happening at all.  From preclear testing, even the slowest portion of this disk writes at >175MB/sec.

 

Digging into the iostat details, it looks like Unraid is using a write size of 512 bytes instead of 4k, which is likely slowing this down significantly (I assume the drive internally does a read-modify-write to emulate 512B writes to a 4k physical sector?).
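
In case anyone wants to check the same thing, something along these lines shows the average request sizes per device (column names and units differ between sysstat versions; newer releases report wareq-sz/rareq-sz in kB, older ones report avgrq-sz in 512-byte sectors):

iostat -xm 5    # extended stats in MB/s, refreshed every 5 seconds; watch the write request size column for the parity drive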

 

How can I tell Unraid to use a 4k access size for parity checks/rebuilds?

 

New filesystems can be created with a 4k block size, which should help for the new data disks, but since my parity drive is 512e it would likely also be faster if Unraid used a 4k access size for everything, not just new filesystems. Is there a setting to change this as well?
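
For reference, the block and sector sizes an existing data disk was formatted with can be checked with something like the following (assuming XFS data disks mounted at /mnt/diskN, the usual Unraid layout):

xfs_info /mnt/disk1    # look at the bsize= and sectsz= values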

Screenshot 2020-07-16 15.23.01.png


That is almost certainly not the problem. I would guess that over 99% of drives used with Unraid are 512e, and they perform as expected as long as the partitions are 4k aligned, which Unraid does by default.
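
If you want to confirm the alignment yourself, something like this works (sdX is a placeholder for the drive in question; the partition is 4k aligned if its start sector is a multiple of 8, and Unraid normally starts its partition at sector 64 on drives like these):

fdisk -l /dev/sdX    # check the Start column for partition 1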

 

Posting the diags might give some clues on the actual problem.


You're likely right, I misinterpreted the output of iostat and came to the wrong conclusion. The 'write request size' column is listed in kB, so writes are just being buffered into 512kB chunks before being written. It's unlikely to have anything to do with the drive block size.

 

I did a brief parity check as a speed test after the rebuild finished, and a read-only parity check was running at ~110MB/sec.  Still, something's not right if the 5-6TB portion of the rebuild (when the new, fast drive is the only disk active) runs at 40MB/sec when it was >175MB/sec during the preclear.
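
For comparison, the raw sequential read speed of the new drive can be sanity-checked outside the array with something like this (sdX is a placeholder for the new parity drive; iflag=direct bypasses the page cache so the result reflects the drive itself):

dd if=/dev/sdX of=/dev/null bs=1M count=8192 iflag=direct    # read 8GB and report the throughput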

nickserver-diagnostics-20200717-1007.zip

 

"routine writes" to the array (non-cache, tested with dd) with the new 6tb drive installed go at about 40MB/sec, regardless of RMW or 'turbo write' mode.  The new drives are all 6TB SAS, and individually perform great.  Something odd showed up when testing with the diskspeed plugin, which shows that when all drives are in use -- the new 6TB SAS ones consistently take a much bigger perf hit than the legacy SATA's (when if anything I'd expect them to be faster; and alone they are faster)

 

Diags and diskspeed results attached

2020-07-17 -- diskspeed plugin.png


Good catch. Where do you see that in the diag reports? I found similar info digging in the syslog once you pointed it out, but not formatted like what you quoted. Is there a summary somewhere that I'm missing?

 

Looks like these are the commands for SAS drives:

sdparm --get=WCE /dev/sdX    (shows whether the write cache is enabled)
sdparm --set=WCE /dev/sdX    (enables the write cache)

Judging by comments online, this possibly has to be re-enabled on every boot?

 

Will give that a try and see how the 2nd parity disk rebuild goes. Initial regular array write tests with 'reconstruct write' definitely show a speed improvement after enabling this on the parity drive.
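
If the setting does turn out not to survive a reboot, one option (a sketch, assuming the stock Unraid /boot/config/go startup script and that the device letter stays stable; a /dev/disk/by-id path would be safer if it doesn't) is to re-apply it at boot:

echo 'sdparm --set=WCE /dev/sdX' >> /boot/config/go    # re-enable the SAS write cache on every startup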

