[Partially SOLVED] Is there an effort to solve the SAS2LP issue? (Tom Question)


TODDLT


I don’t want to celebrate too early, but it appears you found the cause of the problem!!

 

My test server with 8 x 32GB SSDs

 

Before: Duration: 5 minutes, 27 seconds. Average speed: 97.9 MB/sec

After: Duration: 2 minutes, 31 seconds. Average speed: 212.0 MB/sec

 

Another server with 11 disks total, 4 of them on the SAS2LP; you can see the jump in the graph as I entered the commands:

 

Nice!  On your server with 11 disks, not counting the 4 connected to the SAS2LP, are the other 7 just connected to the motherboard?

 

Just wondering because one of the (older) motherboards I'm using has 6 x SATA2 ports, but I was only able to connect and fully saturate 4 HDDs.  Once I connected a 5th, it brought the speed down on all 5... so I was limited by that SATA chip's upstream bandwidth (probably PCIe x1).

Link to comment

Nice!  On your server with 11 disks, not counting the 4 connected to the SAS2LP, are the other 7 just connected to the motherboard?

 

Just wondering because one of the (older) motherboards I'm using has 6 x SATA2 ports, but I was only able to connect and fully saturate 4 HDDs.  Once I connected a 5th, it brought the speed down on all 5... so I was limited by that SATA chip's upstream bandwidth (probably PCIe x1).

 

Your older board probably has DMI 1.0 (max 1000MB/s). I also have an older Supermicro board, and in my tests the real-world max speed on its onboard ports was:

 

4 x 180MB/s

5 x 140MB/s

6 x 120MB/s

 

In this server I’m using an ASRock B75 Pro3-M with DMI 2.0 (2000MB/s): 6 onboard Intel ports + 1 onboard ASMedia 1061. With WD Green drives I start hitting the DMI 2.0 limit at 10 disks (~145MB/s each).

 

Link to comment

I presume the parity sync is writing synchronously and blocks more reads from queuing until the parity write is completed, which conveniently finishes by the time the pending reads complete on the other drives.  In other words, if there are no writes (a parity check with no errors) then it'll keep filling the disk queues with async read requests.  Tom would be able to explain it better (or correct my poor interpretation).

 

Can you or Tom see a mechanism that would explain the rarer but much more serious issue of data corruption with the SAS2LP?  I'm wondering if the issue is not just a delay/backup in queue processing but a queue overflow or an overwritten I/O, which might explain the more serious problems a few users have seen, like false (and repeatable) parity check errors.  Unfortunately, those users are the ones most likely to have sold off or trashed their SAS2LP cards, yet they are the ones we most need to test this.

Link to comment

 

echo 8 > /sys/block/sdc/queue/nr_requests
echo 8 > /sys/block/sdd/queue/nr_requests
echo 8 > /sys/block/sde/queue/nr_requests
echo 8 > /sys/block/sdf/queue/nr_requests
echo 8 > /sys/block/sdg/queue/nr_requests
echo 8 > /sys/block/sdh/queue/nr_requests
echo 8 > /sys/block/sdi/queue/nr_requests
echo 8 > /sys/block/sdj/queue/nr_requests
echo 8 > /sys/block/sdk/queue/nr_requests
echo 8 > /sys/block/sdl/queue/nr_requests
echo 8 > /sys/block/sdm/queue/nr_requests
echo 8 > /sys/block/sdn/queue/nr_requests
echo 8 > /sys/block/sdo/queue/nr_requests
echo 8 > /sys/block/sdp/queue/nr_requests

 

This will limit the number of requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8.  No need to run it on cache devices and/or SSDs.  This seems to make a HUGE difference on Marvell controllers.  I suspect earlier versions of Linux (hence unRAID 5.x) had lower defaults for nr_requests.

 

The parity check can already be running (or not yet started) when you try the commands above.  Speed should increase almost instantly...

 

Can anyone with unRAID v5 post their nr_requests values so we can compare them with v6?
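
If it helps, this one-liner prints the current value for every sd device, so the v5 and v6 numbers can be compared directly:

for f in /sys/block/sd*/queue/nr_requests; do echo "$f: $(cat $f)"; done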

Link to comment

My v5 system shows 128.

 

Note this is NOT the same system I recently upgraded to v6 ... it's a different motherboard & CPU.  I presume that doesn't make any difference, but just to be thorough I wanted to note this is NOT the C2SEA that I upgraded and saw a 30% slowdown in parity check speed on.  [That system is now on v6, and I've since added a drive that's XFS, so I can't revert to v5 to try it.]

 

Link to comment

... Let's see if we can increase your parity check speeds by running the following commands in the console or an SSH session:

 

echo 8 > /sys/block/sdc/queue/nr_requests
echo 8 > /sys/block/sdd/queue/nr_requests
echo 8 > /sys/block/sde/queue/nr_requests
echo 8 > /sys/block/sdf/queue/nr_requests
echo 8 > /sys/block/sdg/queue/nr_requests
echo 8 > /sys/block/sdh/queue/nr_requests
echo 8 > /sys/block/sdi/queue/nr_requests
echo 8 > /sys/block/sdj/queue/nr_requests
echo 8 > /sys/block/sdk/queue/nr_requests
echo 8 > /sys/block/sdl/queue/nr_requests
echo 8 > /sys/block/sdm/queue/nr_requests
echo 8 > /sys/block/sdn/queue/nr_requests
echo 8 > /sys/block/sdo/queue/nr_requests
echo 8 > /sys/block/sdp/queue/nr_requests

 

This will limit the number of requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8.

 

...  No need to run it on cache devices and/or SSDs.

 

Emphasis added on the last sentence => Note that Johnnie's test system is an all-SSD server, yet this change still made a BIG difference!!

 

 

... I suspect earlier versions of Linux (hence unRAID 5.x) had lower defaults for nr_requests.

 

Based on my quick look at both a v5 & a v6 server, that doesn't seem to be the case.    But it does seem that v6 is doing something different where the queue depth matters.    I'm anxious to see if it helps with my parity check speeds on the v6 server, but can't do that this weekend, as we've got company and the server's very busy for the next 3 days.

Link to comment

Now that work is over I had the chance to do some more tests. I can confirm that the default nr_requests is the same 128 at least as far back as v5b12a; I mention that release because it's the last one where I could get a full-speed check with the SAS2LP. Since changing nr_requests now gives me better speed than v5.0.6, I'm very happy.

 

From earlier tests, these are the speeds (MB/s) I got with the SAS2LP in my test server with 8 x 32GB SSDs, using the default settings.

 

V5b12:  222.3
V5b14:  214.8
V5.0.6: 185.0
V6.0.0: 106.4
V6.0.1: 103.6
V6.1.0: 109.3
V6.1.1: 122.2
V6.1.2: 112.3
V6.1.3: 111.2

 

V6 results with the SAS2LP tend to vary a little if run more than once (+/- 10MB/s).

 

 

For comparison, the same tests with a Dell H310:

 

V5b12 through V6.1.3: 222.3 MB/s on every release

 

These are the actual values I got; the duration was always the same down to the second.

 

 

Now again with the SAS2LP on v6.1.3, experimenting with different nr_requests values:

 

6 - Duration: 2 minutes, 26 seconds. Average speed: 219.3 MB/sec

8 - Duration: 2 minutes, 28 seconds. Average speed: 216.3 MB/sec

10 - Duration: 2 minutes, 26 seconds. Average speed: 219.3 MB/sec

12 - Duration: 2 minutes, 29 seconds. Average speed: 214.9 MB/sec

15 - Duration: 2 minutes, 31 seconds. Average speed: 212.0 MB/sec

20 - Duration: 2 minutes, 33 seconds. Average speed: 209.3 MB/sec

30 - Duration: 3 minutes, 17 seconds. Average speed: 162.5 MB/sec

 

 

These small SSDs' max read speed is rated at 230MB/s, so these are excellent results and more than enough speed for any modern HDD. Over the weekend I plan to test with faster SSDs to confirm that the issue is really resolved, at least for my hardware.

 

If this turns out to be the solution, I think it would be a good idea to add this to Disk Settings as a configurable tunable in unRAID (please also add back the md_write_limit tunable that was in v5 and is missing in v6).

 

Link to comment

Gave those commands a try on my array

 

Unraid 6.1.3

Motherboard - SSD cache, 1.5TB, 3TB, 5TB, SSD cache

SAS2LP - 1.5TB, 2TB, 2TB, 5TB, 2TB, 3TB

Total 10 drives

 

Before:

 

Elapsed time: 8 minutes

Current position: 22.1 GB (0.4 %)

Estimated speed: 44.7 MB/sec

Estimated finish: 1 day, 6 hours, 54 minutes

Disk stats: ~330MB/s

 

After:

 

Elapsed time: 10 minutes

Current position: 30.8 GB (0.6 %)

Estimated speed: 108.1 MB/sec

Estimated finish: 12 hours, 46 minutes

Disk stats: ~860MB/s

 

I say the results look pretty good!

 

 

EDIT:

 

The parity check is much further along now and has slowed down a bit.

Elapsed time: 4 hours, 28 minutes

Current position: 1.38 TB (27.6 %)

Estimated speed: 61.7 MB/sec

Estimated finish: 16 hours, 18 minutes

Disk stats: ~480MB/s

Link to comment

... If this turns out to be the solution, I think it would be a good idea to add this to Disk Settings as a configurable tunable in unRAID (please also add back the md_write_limit tunable that was in v5 and is missing in v6).

 

Absolutely agree => I was going to suggest exactly that (well, at least the nr_requests values ... hadn't thought about adding the md_write_limit, although that's also a good idea).

 

Link to comment

Thanks for the detailed info.  Let's see if we can increase your parity check speeds by running the following commands in the console or an SSH session:

 

echo 8 > /sys/block/sdc/queue/nr_requests
echo 8 > /sys/block/sdd/queue/nr_requests
echo 8 > /sys/block/sde/queue/nr_requests
echo 8 > /sys/block/sdf/queue/nr_requests
echo 8 > /sys/block/sdg/queue/nr_requests
echo 8 > /sys/block/sdh/queue/nr_requests
echo 8 > /sys/block/sdi/queue/nr_requests
echo 8 > /sys/block/sdj/queue/nr_requests
echo 8 > /sys/block/sdk/queue/nr_requests
echo 8 > /sys/block/sdl/queue/nr_requests
echo 8 > /sys/block/sdm/queue/nr_requests
echo 8 > /sys/block/sdn/queue/nr_requests
echo 8 > /sys/block/sdo/queue/nr_requests
echo 8 > /sys/block/sdp/queue/nr_requests

 

This will limit the number of requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8.  No need to run it on cache devices and/or SSDs.  This seems to make a HUGE difference on Marvell controllers.  I suspect earlier versions of Linux (hence unRAID 5.x) had lower defaults for nr_requests.

 

The parity check can already be running (or not yet started) when you try the commands above.  Speed should increase almost instantly...

 

Calling this out so more people can try it if they didn't see the post.

Link to comment

Thanks for the detailed info.  Let's see if we can increase your parity check speeds by running the following commands in the console or an SSH session:

 

echo 8 > /sys/block/sdc/queue/nr_requests
echo 8 > /sys/block/sdd/queue/nr_requests
echo 8 > /sys/block/sde/queue/nr_requests
echo 8 > /sys/block/sdf/queue/nr_requests
echo 8 > /sys/block/sdg/queue/nr_requests
echo 8 > /sys/block/sdh/queue/nr_requests
echo 8 > /sys/block/sdi/queue/nr_requests
echo 8 > /sys/block/sdj/queue/nr_requests
echo 8 > /sys/block/sdk/queue/nr_requests
echo 8 > /sys/block/sdl/queue/nr_requests
echo 8 > /sys/block/sdm/queue/nr_requests
echo 8 > /sys/block/sdn/queue/nr_requests
echo 8 > /sys/block/sdo/queue/nr_requests
echo 8 > /sys/block/sdp/queue/nr_requests

 

This will limit the number of requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8.  No need to run it on cache devices and/or SSDs.  This seems to make a HUGE difference on Marvell controllers.  I suspect earlier versions of Linux (hence unRAID 5.x) had lower defaults for nr_requests.

 

The parity check can already be running (or not yet started) when you try the commands above.  Speed should increase almost instantly...

 

Thanks for the tip, Eric! I just kicked off a parity check and, as expected, it was running between 40-50MB/s. After adjusting nr_requests to 8 as you recommended, my speed took off to ~120MB/s!! I'll post back once it completes to confirm the speed increase continues for the duration of the check.

 

-Brian

 

Before:

[screenshot]

 

Immediately After:

[screenshot]

Link to comment

For me, changing the setting also improves the speed of the SASLP and brings it back to v5 speeds. It's a much smaller improvement than with the SAS2LP, but in a large array it can translate to 1 or 2 hours.

 

Same 8 x 32GB SSDs:

 

v5.0.6 - 80.6MB/s

v6.1.3 default - Duration: 7 minutes, 33 seconds. Average speed: 70.7 MB/sec

v6.1.3 nr_requests=8 - Duration: 6 minutes, 38 seconds. Average speed: 80.4 MB/sec

Link to comment

Thanks for the detailed info.  Let's see if we can increase your parity check speeds by running the following commands in the console or an SSH session:

 

echo 8 > /sys/block/sdc/queue/nr_requests
echo 8 > /sys/block/sdd/queue/nr_requests
echo 8 > /sys/block/sde/queue/nr_requests
echo 8 > /sys/block/sdf/queue/nr_requests
echo 8 > /sys/block/sdg/queue/nr_requests
echo 8 > /sys/block/sdh/queue/nr_requests
echo 8 > /sys/block/sdi/queue/nr_requests
echo 8 > /sys/block/sdj/queue/nr_requests
echo 8 > /sys/block/sdk/queue/nr_requests
echo 8 > /sys/block/sdl/queue/nr_requests
echo 8 > /sys/block/sdm/queue/nr_requests
echo 8 > /sys/block/sdn/queue/nr_requests
echo 8 > /sys/block/sdo/queue/nr_requests
echo 8 > /sys/block/sdp/queue/nr_requests

 

This will limit the number of requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8.  No need to run it on cache devices and/or SSDs.  This seems to make a HUGE difference on Marvell controllers.  I suspect earlier versions of Linux (hence unRAID 5.x) had lower defaults for nr_requests.

 

The parity check can already be running (or not yet started) when you try the commands above.  Speed should increase almost instantly...

 

Calling this out so more people can try it if they didn't see the post.

 

 

For anyone who doesn't know and wants a quick way to do this:

for i in {a..t}; do echo 8 > /sys/block/sd$i/queue/nr_requests; done

With {a..t} matching the first and last disks in your array.
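
If you'd rather not think about which letters are SSDs, a variation like this (just a sketch, relying on the kernel's rotational flag) should skip the non-spinning drives automatically, since only the HDDs need the change:

for d in /sys/block/sd[a-t]; do
    if [ "$(cat $d/queue/rotational)" = "1" ]; then   # 1 = spinning drive, 0 = SSD
        echo 8 > $d/queue/nr_requests
    fi
done

As before, adjust [a-t] to cover your device range. Also note that nr_requests resets to the default on reboot, so if the change works for you, you'll want to re-apply it at boot (e.g. by adding the loop to your go file on the flash drive).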

Link to comment

Very cool news!

 

I had the slow parity check speeds with these drives, all hooked up to two SAS2LP cards. I changed the cards to Dell H310s and the slow speeds went away. I have both my SAS2LP cards sitting in anti-static bags and can re-install them if more testing is needed. What would cause this to only affect some Marvell chipsets?

 

 

Parity        WDC 4001FAEX-00MJRA0_WD-WCC1F0127613 - 4 TB

Disk 1 WDC_WD2002FAEX-007BA0_WD-WMAWP0099064 - 2 TB

Disk 2 HGST_HDN724040ALE640_PK2338P4HE525C - 4 TB

Disk 3 Hitachi_HDS723020BLA642_MN5220F32HHMJK - 2 TB

Disk 4 HGST_HDN724040ALE640_PK2338P4HEPH8C - 4 TB

Disk 5 Hitachi_HDS723020BLA642_MN1220F30803DD - 2 TB

Disk 6 WDC_WD2002FAEX-007BA0_WD-WMAWP0284322 - 2 TB

Disk 7 HGST_HDN724040ALE640_PK1334PCJY9BRS - 4 TB

Disk 8 WDC_WD4001FAEX-00MJRA0_WD-WCC130263021 - 4 TB

Disk 9 WDC_WD4003FZEX-00Z4SA0_WD-WCC130966733 - 4 TB

Disk 10 HGST_HDN724040ALE640_PK2334PCGYD32B - 4 TB

Disk 11 WDC_WD2001FASS-00U0B0_WD-WMAUR0142521 - 2 TB

Disk 12 WDC_WD2002FAEX-007BA0_WD-WCAY01300087 - 2 TB

 

 

Link to comment

Works here also!  I went from starting out at about 45MB/s to over 110MB/s.

 

For the novices out there, please note that you do need a space between the "8" and the ">".  If you leave out the space, you won't get an error message, but the command doesn't actually work.  Don't ask me how I know this.  :D
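
In other words (a quick illustration of what the shell does with and without the space):

echo 8 > /sys/block/sdc/queue/nr_requests    # correct: writes the value 8
echo 8> /sys/block/sdc/queue/nr_requests     # wrong: "8>" is a file-descriptor redirection, so nothing gets written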

Link to comment

For me, changing the setting also improves the speed of the SASLP and brings it back to v5 speeds. It's a much smaller improvement than with the SAS2LP, but in a large array it can translate to 1 or 2 hours.

 

Same 8 x 32GB SSDs:

 

v5.0.6 - 80.6MB/s

v6.1.3 default - Duration: 7 minutes, 33 seconds. Average speed: 70.7 MB/sec

v6.1.3 nr_requests=8 - Duration: 6 minutes, 38 seconds. Average speed: 80.4 MB/sec

 

I wasn't expecting a speed increase for SSDs, but then I guess I wasn't expecting really old(?) SSDs to be used :)

 

From the Main tab, click on one of the SSDs to view its details, then click on the Identity tab.  What is listed in the 'ATA Version' and 'SATA Version' fields?

Link to comment

Very cool news!

 

I had the slow parity check speeds with these drives, all hooked up to two SAS2LP cards. I changed the cards to Dell H310s and the slow speeds went away. I have both my SAS2LP cards sitting in anti-static bags and can re-install them if more testing is needed. What would cause this to only affect some Marvell chipsets?

 

 

Parity        WDC 4001FAEX-00MJRA0_WD-WCC1F0127613 - 4 TB

Disk 1 WDC_WD2002FAEX-007BA0_WD-WMAWP0099064 - 2 TB

Disk 2 HGST_HDN724040ALE640_PK2338P4HE525C - 4 TB

Disk 3 Hitachi_HDS723020BLA642_MN5220F32HHMJK - 2 TB

Disk 4 HGST_HDN724040ALE640_PK2338P4HEPH8C - 4 TB

Disk 5 Hitachi_HDS723020BLA642_MN1220F30803DD - 2 TB

Disk 6 WDC_WD2002FAEX-007BA0_WD-WMAWP0284322 - 2 TB

Disk 7 HGST_HDN724040ALE640_PK1334PCJY9BRS - 4 TB

Disk 8 WDC_WD4001FAEX-00MJRA0_WD-WCC130263021 - 4 TB

Disk 9 WDC_WD4003FZEX-00Z4SA0_WD-WCC130966733 - 4 TB

Disk 10 HGST_HDN724040ALE640_PK2334PCGYD32B - 4 TB

Disk 11 WDC_WD2001FASS-00U0B0_WD-WMAUR0142521 - 2 TB

Disk 12 WDC_WD2002FAEX-007BA0_WD-WCAY01300087 - 2 TB

 

What I found to cause the slowdowns was the combination of a drive using ATA Version 'ATA8-ACS', a Marvell chip, and the default nr_requests of 128.

 

If a drive has ATA Version 'ACS-2' or 'ACS-3' then changing nr_requests doesn't matter for that device; it'll run fast unless it has to wait on a slower disk in the array.  It was just easier to have everyone here change nr_requests to 8 for all array devices... but you really only need to change the drives that have the older ATA spec 'ATA8-ACS'.  (You can see the ATA Version on the Identity tab of the disk details page.)
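
For anyone who prefers a console check, the same field shows up in the smartctl identity output (replace sdX with your device):

smartctl -i /dev/sdX | grep -i 'ATA Version'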

 

Link to comment

I went from starting out at about 45MB/s to over 110MB/s.

 

This was the jump I experienced using 8 for nr_requests.  I was ultimately able to increase it from 110MB/s to 125-130MB/s by tweaking the md_num_stripes and md_sync_window values on the Disk Settings page.

 

For reference, the default values are:

md_num_stripes = 1280

md_sync_window = 384

 

With my 14 drive array (including parity disk), I used these values:

md_num_stripes = 3840

md_sync_window = 1728

 

Your mileage may vary, but I encourage you to at least experiment with the md_sync_window value to see what you can top out at.
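
If you want to experiment from the console before settling on values in the GUI, I believe unRAID's mdcmd can set these on a running array (a sketch using the values above, not a recommendation; run 'mdcmd status' first to confirm the tunable names on your version):

mdcmd set md_num_stripes 3840
mdcmd set md_sync_window 1728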

Link to comment

Voila!!  This very clearly resolves the parity check speed issue ... at least for me.  The system has 6 ports from an Intel ICH10 and 12 ports from 3 Adaptec 1430SA cards.  Turns out the 1430SAs do indeed have a Marvell chip [I just looked at my spare board, and it's got a Marvell 88SX7042 on it] ... so that's apparently the issue.

 

Considering how common C2SEA boards were a few years ago (since it was the board LimeTech used, a lot of us used the same board for our systems) ... and the 1430SA was also the controller most recommended by Tom in those days ... I suspect I'm not the only one who's encountered this issue (or will, when upgrading to v6).

 

In any event, I didn't have time to do a full parity check, but curiosity got the better of me, so I didn't wait until after the weekend.  I just started a check; waited exactly 1 minute and refreshed; then refreshed at the 10 minute point; and then stopped the check.  Then I set all the nr_requests values to 8 and repeated the process.  Not only did that bump the check up to the speeds I used to get with v4 & v5 (~85MB/s at the start, when some old 500GB/platter drives are involved), but it also dropped the CPU utilization down to 50-60%, compared to the 80-100% I was seeing with v6.

 

I'm confident that when I run my next full check it will match the speeds I was accustomed to before the v6 upgrade.

 

Nice work isolating this.    Now PLEASE make this a configurable "tunable" parameter on the Disk Settings page  :)

Link to comment

 

Disks: multiple 2TB and 3TB drives on 2 x SAS2LP, parity on the motherboard

 

Oct 24 18:30:09 Tower emhttp: WDC_WD5000BPKX-00HPJT0_WD-WX61AC4L4H8D (sdb) 488386584 [Cache - on MB SATA]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N2022598 (sdc) 2930266584 [Parity - on MB SATA]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N0H2AL9C (sdd) 2930266584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WMC4N0F81WWL (sde) 2930266584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N4TRHA67 (sdf) 2930266584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4NPRDDFLF (sdg) 2930266584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N1VJKTUV (sdh) 2930266584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: Hitachi_HDS722020ALA330_JK1101B9GME4EF (sdi) 1953514584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: Hitachi_HDS5C3020ALA632_ML0220F30EAZYD (sdj) 1953514584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: ST2000DL004_HD204UI_S2H7J90C301317 (sdk) 1953514584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: ST2000DL003-9VT166_6YD1WXKR (sdl) 1953514584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N4EZ7Z5Y (sdm) 2930266584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: Hitachi_HDS722020ALA330_JK11H1B9GM9YKR (sdn) 1953514584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: Hitachi_HDS722020ALA330_JK1101B9GKEL4F (sdo) 1953514584 [SAS2LP]

Oct 24 18:30:09 Tower emhttp: WDC_WD30EFRX-68EUZN0_WD-WCC4N3YFCR2A (sdp) 2930266584 [SAS2LP]

 

Last full parity check on 5.0.5:

37311 seconds / 10  hours, 21 minutes, 51 seconds (regularly averaged between 10-11 hrs, same drives plugged into same ports)

 

Full parity check after upgrade to 6.1.3:

Last checked on Sun 25 Oct 2015 12:11:19 PM MST (four days ago), finding 0 errors.

Duration: 17 hours, 38 minutes, 42 seconds. Average speed: 47.2 MB/sec

 

Thanks for the detailed info.  Let's see if we can increase your parity check speeds by running the following commands in the console or an SSH session:

 

echo 8 > /sys/block/sdc/queue/nr_requests
...
echo 8 > /sys/block/sdp/queue/nr_requests

 

This will limit the number of requests each HDD (in the array) tries to handle at a time from the default of 128 down to 8.  No need to run it on cache devices and/or SSDs.  This seems to make a HUGE difference on Marvell controllers.  I suspect earlier versions of Linux (hence unRAID 5.x) had lower defaults for nr_requests.

 

The parity check can already be running (or not yet started) when you try the commands above.  Speed should increase almost instantly...

 

Thanks for the tip, Eric! I just kicked off a parity check and, as expected, it was running between 40-50MB/s. After adjusting nr_requests to 8 as you recommended, my speed took off to ~120MB/s!! I'll post back once it completes to confirm the speed increase continues for the duration of the check.

 

-Brian

 

Before: ~ 42MB/sec

 

Immediately After: ~ 126MB/sec

 

Full check just finished.... a full hour faster than my final check on stock 5.0.5. Thanks!! Now I can buy a UPS for my unRAID box instead of replacement controllers  :) :) :)

 

Last checked on Fri 30 Oct 2015 11:43:41 PM MST (today), finding 0 errors.

Duration: 9 hours, 21 minutes, 37 seconds. Average speed: 89.0 MB/sec

Link to comment

For me, changing the setting also improves the speed of the SASLP and brings it back to v5 speeds. It's a much smaller improvement than with the SAS2LP, but in a large array it can translate to 1 or 2 hours.

 

Same 8 x 32GB SSDs:

 

v5.0.6 - 80.6MB/s

v6.1.3 default - Duration: 7 minutes, 33 seconds. Average speed: 70.7 MB/sec

v6.1.3 nr_requests=8 - Duration: 6 minutes, 38 seconds. Average speed: 80.4 MB/sec

 

I wasn't expecting a speed increase for SSDs, but then I guess I wasn't expecting really old(?) SSDs to be used :)

 

From the Main tab, click on one of the SSDs to view its details, then click on the Identity tab.  What is listed in the 'ATA Version' and 'SATA Version' fields?

 

The SSDs are ACS-2, and the parity check speed on the SAS2LP doubled from ~100MB/s to >200MB/s.

 

They are all the same model:

 

Device Model:	TS32GSSD370S
Serial Number:	C160931700
Firmware Version:	N1114H
User Capacity:	32,017,047,552 bytes [32.0 GB]
Sector Size:	512 bytes logical/physical
Rotation Rate:	Solid State Device
Device:	Not in smartctl database [for details use: -P showall]
ATA Version:	ACS-2 (minor revision not indicated)
SATA Version:	SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time:	Sat Oct 31 08:49:03 2015 GMT
SMART support:	Available - device has SMART capability.
SMART support:	Enabled
SMART overall-health:	Passed

 

Link to comment
