Parity Check speed is 2/3 slower on V6.1.7


Recommended Posts

I upgraded to V6.1.7 from V6.1.6.  Now my parity check is only about 27 MB/s.  It was normally about 75 MB/s on V5 and V6.0.x.  Sorry, I was not on any V6.1.x release long enough to remember the actual parity check speed there.

 

I am running one Windows 7 Pro VM, and the only additional plugins I have are Nerd Tools and Cache Directories, all of which have been there since I upgraded from V5 to V6.

 

Any suggestions on where to look to see why the parity check is so slow?

 

Edit: Added syslog attachment.

beanstalk-syslog-20160203-1023.zip

Link to comment

These disks are causing your slowdown:

 

Jan 29 20:43:15 Beanstalk emhttp: SAMSUNG_HD203WI_S1UYJ1CZ313622 (sdq) 1953514584
Jan 29 20:43:15 Beanstalk emhttp: SAMSUNG_HD203WI_S1UYJ1LZ306377 (sdf) 1953514584
Jan 29 20:43:15 Beanstalk emhttp: SAMSUNG_HD203WI_S1UYJ1CZ313616 (sdp) 1953514584

 

More info here:

 

https://lime-technology.com/forum/index.php?topic=42384.0

 

I'm not sure in which v6 release this started, but it has certainly been there since 6.1; the same disks work fine on v5.

Link to comment

I don't have any setting for md_sync_thresh.  I even searched disk.cfg and the setting is not present.

 

edit:

I just issued the following command during my parity check:

mdcmd set md_sync_thresh 383

 

So since I did not have it set, I assumed the value of md_sync_window/2 was being used.  I have now reverted back to the old setting of md_sync_window - 1 (or 383 with the default md_sync_window value of 384).
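For reference, with the default md_sync_window of 384 the two behaviours work out to the following (same mdcmd syntax as above; 192 is just my arithmetic for 384/2):

mdcmd set md_sync_thresh 192   # md_sync_window/2, the current default behaviour
mdcmd set md_sync_thresh 383   # md_sync_window - 1, the old behaviour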

 

I sure hope this brings my parity check back up.  Otherwise the parity check is useless unless I can find a window of over 24 hours to run parity without having to write to my array.

Link to comment
Otherwise the parity check is useless unless I can find a window of over 24 hours to run parity without having to write to my array.

My understanding is that you can write to your array while parity is being checked (I've done it many times).

 

Write speed will of course be reduced in that situation.

Link to comment

Note that v6 is appreciably more CPU intensive than v5 was, and a lot of folks have encountered slower parity checks as a result.  Your Pentium E6500 is a relatively modest CPU that may very well be over-taxed with parity checks, and this, coupled with the old 500GB/platter disks noted above, could easily explain your slow speeds.

 

There ARE a few things you can do to mitigate this:

 

(a)  Increase your md_num_stripes and md_sync_window settings to provide more buffers.  I'd use 3584 for the num_stripes and 1536 for sync_window.

 

(b)  On Display Settings set the Page Update Frequency to "Disabled" => this will eliminate all automatic updating of the display, which significantly reduces the CPU load from the GUI

 

(c)  It was supposedly resolved around 6.1.4 or so, but just in case, it won't hurt to set the nr_requests to a lower number than the default ... I've found 8 is a very good setting.  Just type the following lines for each of your disks from a console prompt [or add these lines to your GO file]; a loop version is sketched after this list:

 

echo 8 > /sys/block/sda/queue/nr_requests

echo 8 > /sys/block/sdb/queue/nr_requests

echo 8 > /sys/block/sdc/queue/nr_requests

...

(one line for every disk)

 

(d)  Shut down your VM during a parity check.  You can try it with/without an active VM after making the changes I just outlined; but with an E6500 I suspect the VM is using a good bit of the CPU and significantly impacting the parity check speed.
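
If you'd rather not type one line per disk, a quick loop does the same thing (just a sketch on my part, not stock unRAID -- it hits every sd* device it finds, flash drive included, so adjust the pattern if you'd rather skip anything):

for f in /sys/block/sd*/queue/nr_requests; do
    echo 8 > "$f"    # same effect as the individual echo lines above
done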

 

Link to comment

... (a) Increase your md_num_stripes and md_sync_window settings ... (b) set the Page Update Frequency to "Disabled" ... (c) set the nr_requests to a lower number than the default ... (d) Shut down your VM during a parity check.

 

While those are good suggestions, they won't fix his problem: with v6 and those Samsung disks, parity checks will spend most of the time between 25 and 30 MB/s.

Link to comment

Thanks for all the suggestions.

 

Page Update Frequency was already disabled.

 

The CPU load was only 45% during the parity check with the VM running.  The VM is pretty much idle since it is a Windows 7 VM for DVR tasks.

 

I have also set the tunable md_sync_thresh to md_sync_window - 1, from the default of md_sync_window/2, at the 60% mark of the parity check.  It did not seem to have any immediate effect, but toward the 98% mark my parity check speed went up to 45 MB/s from 26 MB/s.  Does the md_sync_thresh tunable take effect immediately, or only on the next parity check?

 

Looks like johnnie.black has tried a lot of different things, so I may have to accept defeat.  My only hope is that I am running a different configuration, so the tunables may still help me.

Link to comment

By far the biggest change on my media server (which has a similar E6300 CPU) was changing the nr_requests parameters.  When I moved from v5 to v6, parity checks took about 20% longer than with v5.  Disabling page updates made a noticeable difference, but changing nr_requests brought the speed back to what it had been with v5.  However, your slowdown is much worse, and Johnnie has experience with the same disk model you're using, so the disk model is almost certainly the culprit in your case.  Likely nothing you can do about it short of replacing the disks.

 

Link to comment

I have been playing with md_num_stripes, md_sync_window, and md_sync_thresh, trying to get my parity check speed back to the level of V4 & V5 (about 75 MB/s) from the atrocious 25 MB/s of V6.1.x.

 

It appears that the parity check speed varies very little regardless of what values I use for the tunables.  Turning off my Windows 7 VM has more impact, though still not a dramatic one (maybe 10 MB/s faster), since that leaves more CPU headroom for the parity check.

 

Here are the values I tried for the tunables:

Tunable          Default   Suggested
md_num_stripes   1280      3584
md_sync_window   384       1536
md_sync_thresh   383       1535
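
In case it helps anyone following along, the Suggested column can be applied from a console along these lines (I'm assuming md_num_stripes and md_sync_window accept the same mdcmd syntax as the md_sync_thresh command earlier in the thread; they can also be changed under Settings -> Disk Settings):

mdcmd set md_num_stripes 3584
mdcmd set md_sync_window 1536
mdcmd set md_sync_thresh 1535

Substituting the Default column values puts everything back.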

 

The semi-good news is that I have managed to get my parity check speed into the 55 MB/s range.  Not great, but definitely a vast improvement over 25 MB/s.  I was only able to improve the parity check speed after patching the firmware of the three Samsung HD203WI drives from version 1AN10002 to 1AN10003.

 

I have two more tests to conduct to see if the speed increase is related to the patched firmware.  The first test is to start the parity check with the VM running (like my initial parity check, where the speed was 25 MB/s).  The second test is to play with the nr_requests parameters.

 

I am now back to the default for md_num_stripes, md_sync_window, and md_sync_thresh since changing these values had very little impact, but added more load on the CPU and memory.

Link to comment

I am reporting the speed that unRAID shows at the end of the parity check, which I assume is the average speed.  Anyhow, the duration was 9 1/2 hours shorter than that first parity check (19 1/2 hours).
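
As a rough sanity check, assuming a 2 TB parity drive (about 2,000,399 MB): 2,000,399 MB / 36,000 s (the ~10 hour run) is roughly 55.6 MB/s, and 2,000,399 MB / 70,200 s (the 19 1/2 hour run) is roughly 28.5 MB/s, so the reported averages do line up with the durations.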

 

It could all be related to not having the VM running and that is why I still need to start a parity check with the VM running to get a better idea as to what caused the parity check speed to improve.  I believe my second parity check was on the old firmware with the VM stopped.  That run looked like it was the same as the first run.  I did not let that second parity check complete and canceled it 10 hours into the check with over 9 hours remaining.

 

 

Link to comment

The average speed can vary wildly; these were my last two checks using these disks:

 

8:20:46 - 66.6MB/s

18:37:25 - 29.8MB/s

 

The best way to check is to let it run for 1 or 2 hours and then use Dynamix System Stats to look at the variations; two speed examples are attached below, a very bad one and a medium one.

 

I believe there's nothing you can do, but I really wish you could prove me wrong, so it would fix it for me also.

[Attached: two Dynamix System Stats graphs of parity check speed -- one very bad, one medium]

Link to comment

I suspect the disk model is the primary culprit, since Johnnie has the same disks and same issue => but just to confirm, I'd also add the nr_requests changes to your GO file  (or you can type them all individually from the console).

 

e.g.

 

echo 8 > /sys/block/sda/queue/nr_requests

echo 8 > /sys/block/sdb/queue/nr_requests

echo 8 > /sys/block/sdc/queue/nr_requests

echo 8 > /sys/block/sdd/queue/nr_requests

echo 8 > /sys/block/sde/queue/nr_requests

echo 8 > /sys/block/sdf/queue/nr_requests

 

...  for as many disks as you have
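
To confirm the values actually took, you can read them all back with a plain sysfs query:

grep . /sys/block/sd*/queue/nr_requests

Each output line shows the device path followed by its current nr_requests value.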

 

This made a HUGE difference on my older system using a similar CPU  [E6300]

 

Link to comment

Thanks gary.

 

I will try the nr_requests next to see if I get any performance gain during parity check.

 

It does not seem like the firmware patch had any impact.  Starting the parity check with the VM running slowed the parity check down to 35 MB/s.  Once I stopped the VM, the parity check speed went back up to 55 MB/s.  So you are right that the CPU is having a harder time on the parity check with V6.1.x.

 

In summary:

With the VM running and all else being equal, I think I get 25 MB/s with the md_sync_thresh set to md_sync_window/2 and 35 MB/s when it is set to md_sync_window - 1.  With the VM stopped, my parity check hovers around 55 MB/s.

 

Now tonight I will give the nr_requests suggestion a try to see if the parity check speed improves.

 

I guess this just gives me more incentive to start a new unRAID build.  I had already vowed to stop buying 2 TB drives and to look into 4 TB or 6 TB drives.  However, now that my CPU is being taxed by V6.1.x, it looks like I will have to look into a new system.

 

The SuperMicro X10SRH-CF looks interesting.  There don't seem to be many mentions of it on the forum, though.

Link to comment

... So you are right that the CPU is having a harder time on the parity check with V6.1.x.

 

...  I think I get 25 MB/s with the md_sync_thresh set to md_sync_window/2 and 35 MB/s when it is set to md_sync_window - 1.  With the VM stopped, my parity check hovers around 55 MB/s.

 

Now tonight I will give the nr_requests suggestion a try to see if the parity check speed improves.

 

I'm not at all surprised that stopping the VM helps => v6 is simply more CPU intensive than previous versions, and with a relatively modest processor you need all the CPU cycles you can get during a parity check.    Not sure why this is the case (since parity checks weren't nearly as "taxing" with v5) ... but the simple fact is that it is.

 

It'll be interesting to see if the nr_requests changes help => they made a BIG difference with my system under 6.1.3 and I've simply left them alone with 6.1.7 (they're in my GO file), so I don't know if they're still needed or not [May experiment one of these days].

 

 

... The SuperMicro X10SRH-CF looks interesting.

 

It does indeed look very nice -- 18 SATA ports and support for an E5v3 Xeon

 

Link to comment

I'm not surprised -- as I noted earlier, Johnnie's experience with the same model drives you have clearly shows that these drives have an issue with the new version.  Not sure what that interaction is, but clearly it's there => so you'll just have to live with your current parity check speed until you replace those drives.

 

Link to comment
