unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



I know this was a Short test, so the results aren't that accurate, but none of the results came close to your current baseline.

 

I have to ask about the ratios you are using.  For the tests, I'm using num_stripes = 2x sync_window, but you've gone 384 higher.  Any rationale behind that? 

 

Also, while you're using a high value for nr_requests, you're not using sync_window-1, but sync_window-48.  How did you get to these values?

 

A Normal length test is required to see if this is just a fluke or if you've truly got better values, but color me interested.

 

Thanks,

Paul

 

 

Basically I found those settings worked great on all my servers, independent of the controllers used.

 

At the time I used your old script, and because it didn't test for md_sync_thresh I entered round values manually, e.g., I would do 4 runs with sync_thresh manually set at 500, 1000, 1500, and 2000, and picked the value with the best result.

 

I also found that in most cases there was only a noticeable difference once the value dropped toward half of sync_window, e.g., with sync_window set at 2048, performance was very similar with sync_thresh set at 1500, 2000, or 2047.
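
For reference, a minimal sketch of one such manual run, assuming unRAID 6.2's mdcmd interface and its custom /proc/mdstat (the mdResyncPos field name and its 1 KiB block units are assumptions here):

# One manual md_sync_thresh test run (unRAID 6.2 syntax assumed):
mdcmd set md_sync_thresh 2000      # candidate value under test
mdcmd check NOCORRECT              # start a read-only parity check
p0=$(grep -o 'mdResyncPos=[0-9]*' /proc/mdstat | cut -d= -f2)
sleep 300                          # sample window
p1=$(grep -o 'mdResyncPos=[0-9]*' /proc/mdstat | cut -d= -f2)
mdcmd nocheck                      # cancel before trying the next value
echo "thresh 2000: $(( (p1 - p0) / 300 / 1024 )) MB/s"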


And finally, a short test for the last one. Next I'll do the normal tests, but only during the weekend; it will take some time, because if I have all servers on at the same time my UPSes run very close to max capacity, with very short runtime in case of a power failure.

 

This makes me feel similar to how I feel after visiting the Georgia Aquarium, then returning home to see my rinky-dink aquarium.  You have challenges that are completely different from those we amateurs face...

 

I'd be curious what kind of power numbers you get on your collection of servers.  My 42TB array, with two parity drives and an SSD cache (17 drives total), consumes about 140W all spun up.  I worked really hard to make it very energy efficient (for a server of this size).

 

Tower4 - 8 Disks, 1430SA + Intel PCH

 

Current Values:  md_num_stripes=4096, md_sync_window=2048, md_sync_thresh=2000
                 Global nr_requests=8
                    sdd nr_requests=
                    sdf nr_requests=
                    sdg nr_requests=
                    sdh nr_requests=
                    sdi nr_requests=
                    sdb nr_requests=
                    sdc nr_requests=
                    sde nr_requests=

 

I see it didn't pick up any nr_requests being set at the drive level.  That's unexpected.  Or I have a bug in the script.  Can you help troubleshoot?
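
While troubleshooting, the per-drive values can also be read straight from sysfs; something like this (drive letters taken from the report above) shows what the script should have captured:

for d in sdd sdf sdg sdh sdi sdb sdc sde; do
    echo "$d: $(cat /sys/block/$d/queue/nr_requests)"
done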

 

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 30sec Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1  | 138 |     4096    |    2048     |     8   |     2000    | 145.5 MB/s 

 

So after seeing the results for all the different servers, I'm beginning to recognize that these are your favorite go-to values (well, I guess you set num_stripes a little lower on this one), and with good reason, as they seem to be giving you good results.

 

I'm still hopeful the Normal Auto test length will let the script get close to these results, but because your ratios are different from what the test is doing, there's a good possibility it won't be able to match your results.

 

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 60sec Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    | 106.0 MB/s 
  2   |     1536    |     768     |     128     |      384    |  85.1 MB/s 
  3   |     1536    |     768     |       8     |      767    | 100.3 MB/s 
  4   |     1536    |     768     |       8     |      384    |  74.9 MB/s 

Fastest vals were nr_reqs=128 and sync_thresh=99% of sync_window at 106.0 MB/s

 

Once again, since the first nr_requests test came up with the wrong answer (or so I assume), the rest of the test goes downhill from here.  Hopefully Normal Auto gets this right.

 

The Fastest Sync Speed tested was md_sync_window=1208 at 127.0 MB/s
     Tunable (md_num_stripes): 2416
     Tunable (md_sync_window): 1208
     Tunable (md_sync_thresh): 1207
     Tunable (nr_requests): 8
This will consume 81 MB with md_num_stripes=2416, 2x md_sync_window.
This is 57MB less than your current utilization of 138MB.
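
As a rough sanity check of that RAM figure, assuming each stripe buffers one 4 KiB page per array device (8 drives in this box; the formula is my assumption, not the script's documented math), the arithmetic lands close to what the report shows:

# Hypothetical back-of-envelope check: stripes x drives x one 4 KiB page each
echo "$(( 2416 * 8 * 4096 / 1048576 )) MiB"   # ~75 MiB, near the reported 81 MB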

 

Yup, never even got close.  But that's not the point of the Short Auto.

 

Completed: 1 Hrs 1 Min 20 Sec.

 

One hour: that's the point of the Short Auto.  It's quick, so you don't waste a lot of time, and based upon what I see, we can determine that your server responds well to changing the tunables.

 

Since the short auto is turning out such crap results, I'm gonna tweak it to make it faster, by removing a lot of the test points.  Even with half the test points, it should still tell us whether it is worth running a Normal Auto 16hr test.

 

Outputting lshw information for Drives and Controllers:

Bus info          Device     Class      Description
===================================================
pci@0000:01:00.0             storage    Serial ATA II RAID 1430SA
pci@0000:00:1f.2             storage    6 Series/C200 Series Chipset Family SATA AHCI Controller
usb@2:1.1         scsi0      storage    
scsi@0:0.0.0      /dev/sda   disk       7862MB DT Micro
                  /dev/sda   disk       7862MB 
                  scsi3      storage    
scsi@3:0.0.0      /dev/sdb   disk       4TB WDC WD40EZRX-00S
                  scsi4      storage    
scsi@4:0.0.0      /dev/sdc   disk       4TB WDC WD40EZRX-00S
                  scsi5      storage    
scsi@5:0.0.0      /dev/sdd   disk       8001GB ST8000AS0002-1NA
                  scsi6      storage    
scsi@6:0.0.0      /dev/sde   disk       8001GB ST8000AS0002-1NA
                  scsi7      storage    
scsi@7:0.0.0      /dev/sdf   disk       8001GB ST8000AS0002-1NA
                  scsi8      storage    
scsi@8:0.0.0      /dev/sdg   disk       8001GB ST8000AS0002-1NA
                  scsi9      storage    
scsi@9:0.0.0      /dev/sdh   disk       8001GB ST8000AS0002-1NA
                  scsi10     storage    
scsi@10:0.0.0     /dev/sdi   disk       4TB WDC WD40EZRX-00S

 

Once again, it looks like no drives are connected to the 1430.

 

Can anyone give me advice on how to make it so that the drives connected to the controller are listed below the controller?

 

The results are beautiful on my server, so I didn't know this would be a challenge:

Bus info          Device      Class          Description
========================================================
pci@0000:04:00.0  scsi1       Storage  HighPoint Technologies, Inc.
scsi@1:0.0.0      /dev/sdb       Disk    3TB WDC WD30EFRX-68A
scsi@1:0.1.0      /dev/sdc       Disk    3TB WDC WD30EFRX-68A
scsi@1:0.2.0      /dev/sdd       Disk    1TB Samsung SSD 840
scsi@1:0.3.0      /dev/sde       Disk    3TB WDC WD30EFRX-68A
scsi@1:0.4.0      /dev/sdf       Disk    3TB WDC WD30EFRX-68A
scsi@1:0.5.0      /dev/sdg       Disk    3TB WDC WD30EFRX-68A
scsi@1:0.6.0      /dev/sdh       Disk    3TB WDC WD30EFRX-68E
pci@0000:05:00.0  scsi2       Storage  HighPoint Technologies, Inc.
scsi@2:0.2.0      /dev/sdk       Disk    3TB WDC WD30EFRX-68E
scsi@2:0.3.0      /dev/sdl       Disk    3TB WDC WD30EFRX-68A
scsi@2:0.4.0      /dev/sdm       Disk    3TB WDC WD30EFRX-68A
scsi@2:0.5.0      /dev/sdn       Disk    3TB WDC WD30EFRX-68A
scsi@2:0.6.0      /dev/sdo       Disk    3TB WDC WD30EFRX-68A
scsi@2:0.7.0      /dev/sdp       Disk    3TB WDC WD30EFRX-68A
scsi@2:0.0.0      /dev/sdi       Disk    3TB WDC WD30EFRX-68A
scsi@2:0.1.0      /dev/sdj       Disk    3TB WDC WD30EFRX-68A
pci@0000:06:00.0  scsi3       Storage  HighPoint Technologies, Inc.
scsi@3:0.0.0      /dev/sdq       Disk    3TB WDC WD30EFRX-68A
scsi@3:0.1.0      /dev/sdr       Disk    3TB WDC WD30EFRX-68A
scsi@3:0.2.0      /dev/sds       Disk    3TB WDC WD30EFRX-68A
scsi@3:0.3.0      /dev/sdt       Disk    3TB WDC WD30EFRX-68A
usb@1:1.4         scsi0       Storage  
scsi@0:0.0.0      /dev/sda       Disk    4005MB Patriot Memory
                  /dev/sda       Disk    4005MB 

 

Thanks,

Paul


I know this was a Short test, so the results aren't that accurate, but none of the results came close to your current baseline.

 

I have to ask about the ratios you are using.  For the tests, I'm using num_stripes = 2x sync_window, but you've gone 384 higher.  Any rationale behind that? 

 

Also, while you're using a high value for nr_requests, you're not using sync_window-1, but sync_window-48.  How did you get to these values?

 

A Normal length test is required to see if this is just a fluke or if you've truly got better values, but color me interested.

 

Thanks,

Paul

 

 

Basically I found those settings worked great on all my servers, independent of the controllers used.

 

At the time I used your old script, and because it didn't test for md_sync_thresh I entered round values manually, e.g., I would do 4 runs with sync_thresh manually set at 500, 1000, 1500, and 2000, and picked the value with the best result.

 

I also found that in most cases there was only a noticeable difference once the value dropped toward half of sync_window, e.g., with sync_window set at 2048, performance was very similar with sync_thresh set at 1500, 2000, or 2047.

 

Very interesting, thank you for sharing.

 

Do you recall what happened with the 25% value, 500? 

 

I'm guessing that because you've gone with high values on most (all?) of your servers, 500 was just pants.  But it makes me wonder: if a server works better at 50% than at 99%, would it work even better at 25%, or 10%?


Beta Testers, v4b3 will definitely be dropping today.  I'm going to start coding the changes now.

 

Planned changes:

remove mdcmd path

reduce the number of test points in a Short Auto

change the lshw output to try it without the -businfo flag

tweak the report layout to reduce the total # of lines/chars, since the report has gotten just way too big.

 

If there are any other bugs/changes/requests, please let me know.

 

johnnie.black, if you have any idea on why those drive specific nr_requests values were null on Tower4, please let me know.

 

And if anyone has suggestions on how I can better list the controllers and their attached drives, I would be most appreciative.

 

Thanks,

Paul


Do you recall what happened with the 25% value, 500? 

 

I'm guessing that because you've gone with high values on most (all?) of your servers, 500 was just pants.  But it makes me wonder: if a server works better at 50% than at 99%, would it work even better at 25%, or 10%?

 

IIRC, 25% was very slow.  Most of my controllers worked better with sync_thresh close to md_sync_window, except the SASLP and the SAS2LP if nr_requests was left at default.  Since using nr_requests=8 "fixed" the SAS2LP, I could then set a high sync_thresh, and so I found that these values were almost universally good and became my go-to defaults.

 

I don't remember why some have a higher num_stripes; I believe results were similar with 4400 or 4096.  My usual default is 4096/2048/2000 with nr_requests=8, but on a server without any Marvell-based controller, nr_requests can be left at default.
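
For anyone wanting to try these defaults by hand, something like the following applies them until the next reboot (unRAID 6.2's mdcmd syntax plus the standard sysfs path for nr_requests; adjust the device glob to your own array):

# Apply the go-to defaults quoted above:
mdcmd set md_num_stripes 4096
mdcmd set md_sync_window 2048
mdcmd set md_sync_thresh 2000
# Only worthwhile on Marvell-based controllers (SASLP/SAS2LP):
for d in /sys/block/sd[b-z]; do echo 8 > "$d/queue/nr_requests"; done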

 

I believe my servers are running close to maximum speed; they are disk limited (CPU limited for the SSD server).  I wanted to participate more to help refine the script so more people can benefit, but I still wouldn't mind getting a few more MB/s out of them.  :P


I wanted to participate more to help refine the script so more people can benefit

 

Your help is definitely appreciated.

 

It seems that in the 3 years since I first wrote this script, we've yet to unravel exactly what these values are doing.  If anyone knows, it's probably Tom, but perhaps not even him. 

 

What we're doing is more akin to alchemy than science.  :o


I wonder if you have the preclear plugin installed?  I see that it contains this code, which might be why /root/mdcmd exists on your server:

if [[ ! -e /root/mdcmd ]]; then
  ln -sf /usr/local/sbin/mdcmd /root/mdcmd
fi

I currently don't have the preclear plugin installed.

 

Amazing detective work. 

 

Well... let's not overstate it :) I just grep'd my /boot/config folder for instances of "mdcmd" and found the symlink code under plugins-removed.

 

But I'm glad this explains why some 6.2 servers have /root/mdcmd and some don't!  Thanks Squid and bonienl for helping confirm.

 

 

One other comment while I wait for the normal scan to finish... I really like the RAM column, thank you very much for adding it to the report!  It makes it much clearer that there are tradeoffs to tweaking these variables.


johnnie.black, if you have any idea on why those drive specific nr_requests values were null on Tower4, please let me know.

 

I'm not home now and server is off, but I'll check when I get home, though I'm not sure what to check...

 

Never mind, I found the bug.  When I was grabbing the values, I accidentally hard-coded it to pull the value only from disk "sdj", and used that value for all disks.

 

This means two things:  1) The values reported per disk were wrong at the top of the report, and 2) The routine that restores all the values to their original state at the end of the script did so incorrectly.
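
For the curious, the corrected logic needs to look something like this; the variable and array names here are hypothetical, not the script's actual ones:

# Capture each disk's own nr_requests instead of hard-coding sdj:
declare -A orig_nr
for disk in "${array_disks[@]}"; do    # array_disks: hypothetical list, e.g. (sdb sdc sdd)
    orig_nr[$disk]=$(cat "/sys/block/$disk/queue/nr_requests")
done
# ...and at the end of the run, restore the true per-disk originals:
for disk in "${!orig_nr[@]}"; do
    echo "${orig_nr[$disk]}" > "/sys/block/$disk/queue/nr_requests"
done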

 

The good news is that even if you SAVED the values (writing nr_requests to disks.ini), any individual per-disk values you had set (perhaps in the extra or go file, not sure where they would go) were untouched, and a server reboot will restore those values.

 

Apologies if this affected anyone.

 

-Paul


And if anyone has suggestions on how I can better list the controllers and their attached drives, I would be most appreciative.

 

What we use in the diagnostics is lspci -knn and lsscsi -vgl (that's a lowercase L).  All you need (I think) is lspci and lsscsi -v.  Unfortunately, it's not straightforward, as some PCI devices are used directly, and others are used through other PCI devices.  For example, given the following from mine -

root@JacoBack:/boot# lspci
00:00.0 RAM memory: NVIDIA Corporation MCP55 Memory Controller (rev a1)
00:01.0 ISA bridge: NVIDIA Corporation MCP55 LPC Bridge (rev a2)
00:01.1 SMBus: NVIDIA Corporation MCP55 SMBus Controller (rev a2)
00:01.2 RAM memory: NVIDIA Corporation MCP55 Memory Controller (rev a2)
00:02.0 USB controller: NVIDIA Corporation MCP55 USB Controller (rev a1)
00:02.1 USB controller: NVIDIA Corporation MCP55 USB Controller (rev a2)
00:04.0 IDE interface: NVIDIA Corporation MCP55 IDE (rev a1)
00:05.0 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a2)
00:05.1 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a2)
00:05.2 IDE interface: NVIDIA Corporation MCP55 SATA Controller (rev a2)
00:06.0 PCI bridge: NVIDIA Corporation MCP55 PCI bridge (rev a2)
00:0a.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a2)
00:0b.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a2)
00:0d.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a2)
00:0e.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a2)
00:0f.0 PCI bridge: NVIDIA Corporation MCP55 PCI Express bridge (rev a2)
00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] K8 [Athlon64/Opteron] Address Map
00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] K8 [Athlon64/Opteron] DRAM Controller
00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 02)
03:00.1 IDE interface: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 02)
04:00.0 Mass storage controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
05:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
06:00.0 VGA compatible controller: NVIDIA Corporation NV44 [GeForce 7100 GS] (rev a1)
root@JacoBack:/boot# lsscsi -v
[0:0:0:0]    disk    PNY      USB 2.0 FD       PMAP  /dev/sda
  dir: /sys/bus/scsi/devices/0:0:0:0  [/sys/devices/pci0000:00/0000:00:02.1/usb1/1-2/1-2:1.0/host0/target0:0:0/0:0:0:0]
[2:0:0:0]    disk    ATA      ST3250620AS      E     /dev/sdd
  dir: /sys/bus/scsi/devices/2:0:0:0  [/sys/devices/pci0000:00/0000:00:0d.0/0000:04:00.0/ata3/host2/target2:0:0/2:0:0:0]
[4:0:0:0]    disk    ATA      WDC WD6401AALS-0 3B01  /dev/sdb
  dir: /sys/bus/scsi/devices/4:0:0:0  [/sys/devices/pci0000:00/0000:00:05.0/ata5/host4/target4:0:0/4:0:0:0]
[5:0:0:0]    disk    ATA      ST3500630AS      K     /dev/sdc
  dir: /sys/bus/scsi/devices/5:0:0:0  [/sys/devices/pci0000:00/0000:00:05.0/ata6/host5/target5:0:0/5:0:0:0]
[6:0:0:0]    disk    ATA      SAMSUNG SP2504C  0-33  /dev/sde
  dir: /sys/bus/scsi/devices/6:0:0:0  [/sys/devices/pci0000:00/0000:00:0d.0/0000:04:00.0/ata4/host6/target6:0:0/6:0:0:0]
[7:0:0:0]    disk    ATA      ST2000DM001-9YN1 CC47  /dev/sdf
  dir: /sys/bus/scsi/devices/7:0:0:0  [/sys/devices/pci0000:00/0000:00:0b.0/0000:03:00.0/ata7/host7/target7:0:0/7:0:0:0]
[8:0:0:0]    disk    ATA      ST3500630AS      E     /dev/sdh
  dir: /sys/bus/scsi/devices/8:0:0:0  [/sys/devices/pci0000:00/0000:00:05.1/ata9/host8/target8:0:0/8:0:0:0]
[9:0:0:0]    disk    ATA      ST2000DM001-9YN1 CC49  /dev/sdg
  dir: /sys/bus/scsi/devices/9:0:0:0  [/sys/devices/pci0000:00/0000:00:0b.0/0000:03:00.0/ata8/host9/target9:0:0/9:0:0:0]
[10:0:0:0]   disk    ATA      ST3000DM001-1CH1 CC27  /dev/sdi
  dir: /sys/bus/scsi/devices/10:0:0:0  [/sys/devices/pci0000:00/0000:00:05.1/ata10/host10/target10:0:0/10:0:0:0]
[11:0:0:0]   disk    ATA      TOSHIBA MD04ACA5 FP2A  /dev/sdj
  dir: /sys/bus/scsi/devices/11:0:0:0  [/sys/devices/pci0000:00/0000:00:0e.0/0000:05:00.0/ata11/host11/target11:0:0/11:0:0:0]
[14:0:0:0]   disk    ATA      Hitachi HDS72101 A39C  /dev/sdl
  dir: /sys/bus/scsi/devices/14:0:0:0  [/sys/devices/pci0000:00/0000:00:05.2/ata15/host14/target14:0:0/14:0:0:0]
[15:0:0:0]   disk    ATA      TOSHIBA DT01ACA3 ABB0  /dev/sdk
  dir: /sys/bus/scsi/devices/15:0:0:0  [/sys/devices/pci0000:00/0000:00:0e.0/0000:05:00.0/ata12/host15/target15:0:0/15:0:0:0]
[16:0:0:0]   disk    ATA      TOSHIBA DT01ACA3 ABB0  /dev/sdm
  dir: /sys/bus/scsi/devices/16:0:0:0  [/sys/devices/pci0000:00/0000:00:05.2/ata16/host16/target16:0:0/16:0:0:0]

 

The drive sdb is on a motherboard port.  The drive sdk is on the Asmedia card, through the 0000:00:0e.0 bridge.
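
Building on that, here's a minimal sketch that walks sysfs to group each /dev/sd? device under its parent PCI controller; it assumes the deepest PCI address in the device path is the controller, which holds for both the direct and bridged examples above:

for dev in /sys/block/sd*; do
    path=$(readlink -f "$dev/device")
    # the last PCI function in the path is the HBA the disk hangs off
    pci=$(grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9a-f]' <<< "$path" | tail -1)
    [ -n "$pci" ] && echo "$pci  /dev/${dev##*/}"
done | sort | while read -r pci disk; do
    echo "$disk  on  $(lspci -s "$pci")"
done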


I'd be curious what kind of power numbers you get on your collection of servers.  My 42TB array, with two parity drives and an SSD cache (17 drives total), consumes about 140W all spun up.  I worked really hard to make it very energy efficient (for a server of this size).

 

Power consumption is also important to me.  Most of my servers are storage only; I turn them on once a week to move data.  The smallest server, Tower7, is my VM and Docker server and the only one that's always on.  The SSD server also gets more use, since it's where I store ongoing TV seasons; when a season is complete it's archived to one of the other servers, after checksums and par2s are created.

 

These are approximate numbers since it's been a while since I measured, with all disks spun up:

 

Tower1 and 6 (22 HDDs): 180W

Tower2 and 3 (14 HDDs): 130W

Tower4 (8 HDDs) and Tower5 (30 SSDs; the LSI controllers and the expander are the big users, ~10W each): 90W

Tower7 (6 HDDs + 6 SSDs): 90W (~60W during normal use with all or all but one disk spun down)

 

I only have two 900VA/540W UPSes for all the servers plus my desktop; with everything on, they get close to 500W load each.  Besides the very low runtime, all the servers are in a smallish office-type room, so it gets pretty hot quickly.  It's nice in the winter  :)

 

 

 


What we use in the diagnostics is lspci -knn and lsscsi -vgl (that's a lowercase L).  All you need (I think) is lspci and lsscsi -v.  Unfortunately, it's not straightforward, as some PCI devices are used directly, and others are used through other PCI devices. 

 

Thanks RobJ.

 

I was just looking at lsscsi myself.  I like this, which is giving me the driver (I think), a nice touch:  "lsscsi -H"

[0]    usb-storage
[1]    mvsas
[2]    mvsas
[3]    mvsas

 

combined with this:  "lsscsi -s"

[0:0:0:0]    disk             Patriot Memory   PMAP  /dev/sda   4.00GB
[1:0:0:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdb   3.00TB
[1:0:1:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdc   3.00TB
[1:0:2:0]    disk    ATA      Samsung SSD 840  BB6Q  /dev/sdd   1.00TB
[1:0:3:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sde   3.00TB
[1:0:4:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdf   3.00TB
[1:0:5:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdg   3.00TB
[1:0:6:0]    disk    ATA      WDC WD30EFRX-68E 0A82  /dev/sdh   3.00TB
[2:0:0:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdi   3.00TB
[2:0:1:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdj   3.00TB
[2:0:2:0]    disk    ATA      WDC WD30EFRX-68E 0A82  /dev/sdk   3.00TB
[2:0:3:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdl   3.00TB
[2:0:4:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdm   3.00TB
[2:0:5:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdn   3.00TB
[2:0:6:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdo   3.00TB
[2:0:7:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdp   3.00TB
[3:0:0:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdq   3.00TB
[3:0:1:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdr   3.00TB
[3:0:2:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sds   3.00TB
[3:0:3:0]    disk    ATA      WDC WD30EFRX-68A 0A80  /dev/sdt   3.00TB

 

But none of it is exactly what I want.  I think the only way to get the output I want is to grab all the values, put them in an array, and generate the output myself.  Another PITA.

 

Here's my goal - all the data is available, just not in one spot:

[0] scsi0    usb-storage  
[0:0:0:0]    Flash        Patriot Memory   PMAP  /dev/sda   4.00GB

[1] scsi1    mvsas        HighPoint Technologies, Inc.
[1:0:0:0]    Disk17       WDC WD30EFRX-68A 0A80  /dev/sdb   3.00TB
[1:0:1:0]    Disk18       WDC WD30EFRX-68A 0A80  /dev/sdc   3.00TB
[1:0:2:0]    Cache        Samsung SSD 840  BB6Q  /dev/sdd   1.00TB
[1:0:3:0]    Parity2      WDC WD30EFRX-68A 0A80  /dev/sde   3.00TB
[1:0:4:0]    Unassigned   WDC WD30EFRX-68A 0A80  /dev/sdf   3.00TB
[1:0:5:0]    Unassigned   WDC WD30EFRX-68A 0A80  /dev/sdg   3.00TB
[1:0:6:0]    Parity       WDC WD30EFRX-68E 0A82  /dev/sdh   3.00TB

[2] scsi2    mvsas        HighPoint Technologies, Inc.
[2:0:0:0]    Disk1        WDC WD30EFRX-68A 0A80  /dev/sdi   3.00TB
[2:0:1:0]    Disk2        WDC WD30EFRX-68A 0A80  /dev/sdj   3.00TB
[2:0:2:0]    Disk3        WDC WD30EFRX-68E 0A82  /dev/sdk   3.00TB
[2:0:3:0]    Disk4        WDC WD30EFRX-68A 0A80  /dev/sdl   3.00TB
[2:0:4:0]    Disk5        WDC WD30EFRX-68A 0A80  /dev/sdm   3.00TB
[2:0:5:0]    Disk6        WDC WD30EFRX-68A 0A80  /dev/sdn   3.00TB
[2:0:6:0]    Disk7        WDC WD30EFRX-68A 0A80  /dev/sdo   3.00TB
[2:0:7:0]    Disk8        WDC WD30EFRX-68A 0A80  /dev/sdp   3.00TB

[3] scsi3    mvsas        HighPoint Technologies, Inc.
[3:0:0:0]    Disk9        WDC WD30EFRX-68A 0A80  /dev/sdq   3.00TB
[3:0:1:0]    Disk10       WDC WD30EFRX-68A 0A80  /dev/sdr   3.00TB
[3:0:2:0]    Disk11       WDC WD30EFRX-68A 0A80  /dev/sds   3.00TB
[3:0:3:0]    Disk12       WDC WD30EFRX-68A 0A80  /dev/sdt   3.00TB

 

That's just a starting point too.  It would be even better if the controllers showed the model or chipset, instead of just a generic identifier.  I'm running a 2620A, a detail I've never seen anywhere in the system.

 

Looking at the above, my first thought is:  would I get better performance if I moved Parity2 to a different controller?

 

I think this is 3 or 4 different sources of data, some of which I'm already grabbing and putting into arrays, but if someone wants to help me with the logic, I'd be much obliged.
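
As a rough starting point for that merge, the sketch below nests the lsscsi -s rows under their lsscsi -H host entries; joining in the unRAID assignment names (Disk1, Parity, Cache) from disks.ini is left out here:

lsscsi -H | while read -r host driver; do
    n=$(tr -d '[]' <<< "$host")    # "[2]" -> "2"
    echo
    echo "$host scsi$n    $driver"
    lsscsi -s | grep "^\[$n:"      # every drive on that SCSI host
done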

 

-Paul


Here are the results on my boring :) server.

 

       unRAID Tunables Tester v4.0b2 by Pauven (for unRAID v6.2)

        Tunables Report produced Wed Aug 24 19:38:43 PDT 2016

                         Run on server: Tower

                  Normal Automatic Parity Sync Test


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


Current Values:  md_num_stripes=1408, md_sync_window=512, md_sync_thresh=192
                 Global nr_requests=128
                    sdl nr_requests=128
                    sdi nr_requests=128
                    sdj nr_requests=128
                    sdk nr_requests=128


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 5min Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1  |  31 |     1408    |     512     |   128   |      192    |  63.7 MB/s 


--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 10min Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    |  87.5 MB/s 
  2   |     1536    |     768     |     128     |      384    | 135.5 MB/s 
  3   |     1536    |     768     |       8     |      767    | 155.0 MB/s 
  4   |     1536    |     768     |       8     |      384    | 153.4 MB/s 

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 155.0 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 13 Sample Points @ 5min Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1a |  16 |      768    |     384     |     8   |      383    | 149.3 MB/s 
   1b |  16 |      768    |     384     |     8   |      192    | 148.2 MB/s 
   2a |  19 |      896    |     448     |     8   |      447    | 149.8 MB/s 
   2b |  19 |      896    |     448     |     8   |      224    | 148.9 MB/s 
   3a |  22 |     1024    |     512     |     8   |      511    | 150.2 MB/s 
   3b |  22 |     1024    |     512     |     8   |      256    | 148.7 MB/s 
   4a |  25 |     1152    |     576     |     8   |      575    | 150.6 MB/s 
   4b |  25 |     1152    |     576     |     8   |      288    | 148.9 MB/s 
   5a |  28 |     1280    |     640     |     8   |      639    | 151.2 MB/s 
   5b |  28 |     1280    |     640     |     8   |      320    | 149.8 MB/s 
   6a |  31 |     1408    |     704     |     8   |      703    | 151.7 MB/s 
   6b |  31 |     1408    |     704     |     8   |      352    | 150.0 MB/s 
   7a |  33 |     1536    |     768     |     8   |      767    | 152.1 MB/s 
   7b |  33 |     1536    |     768     |     8   |      384    | 149.7 MB/s 
   8a |  36 |     1664    |     832     |     8   |      831    | 152.5 MB/s 
   8b |  36 |     1664    |     832     |     8   |      416    | 150.6 MB/s 
   9a |  39 |     1792    |     896     |     8   |      895    | 152.9 MB/s 
   9b |  39 |     1792    |     896     |     8   |      448    | 150.9 MB/s 
  10a |  42 |     1920    |     960     |     8   |      959    | 153.1 MB/s 
  10b |  42 |     1920    |     960     |     8   |      480    | 151.0 MB/s 
  11a |  45 |     2048    |    1024     |     8   |     1023    | 153.8 MB/s 
  11b |  45 |     2048    |    1024     |     8   |      512    | 150.8 MB/s 
  12a |  47 |     2176    |    1088     |     8   |     1087    | 154.3 MB/s 
  12b |  47 |     2176    |    1088     |     8   |      544    | 151.2 MB/s 
  13a |  50 |     2304    |    1152     |     8   |     1151    | 154.4 MB/s 
  13b |  50 |     2304    |    1152     |     8   |      576    | 151.1 MB/s 

--- FULLY AUTOMATIC TEST PASS 1c (Rough - 18 Sample Points @ 5min Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1a |  53 |     2432    |    1216     |     8   |     1215    | 154.9 MB/s 
   1b |  53 |     2432    |    1216     |     8   |      608    | 152.1 MB/s 
   2a |  56 |     2560    |    1280     |     8   |     1279    | 155.0 MB/s 
   2b |  56 |     2560    |    1280     |     8   |      640    | 152.4 MB/s 
   3a |  59 |     2688    |    1344     |     8   |     1343    | 155.0 MB/s 
   3b |  59 |     2688    |    1344     |     8   |      672    | 152.4 MB/s 
   4a |  62 |     2816    |    1408     |     8   |     1407    | 155.4 MB/s 
   4b |  62 |     2816    |    1408     |     8   |      704    | 152.2 MB/s 
   5a |  64 |     2944    |    1472     |     8   |     1471    | 155.4 MB/s 
   5b |  64 |     2944    |    1472     |     8   |      736    | 155.4 MB/s 
   6a |  67 |     3072    |    1536     |     8   |     1535    | 155.3 MB/s 
   6b |  67 |     3072    |    1536     |     8   |      768    | 155.2 MB/s 
   7a |  70 |     3200    |    1600     |     8   |     1599    | 155.4 MB/s 
   7b |  70 |     3200    |    1600     |     8   |      800    | 155.6 MB/s 
   8a |  73 |     3328    |    1664     |     8   |     1663    | 155.6 MB/s 
   8b |  73 |     3328    |    1664     |     8   |      832    | 155.5 MB/s 
   9a |  76 |     3456    |    1728     |     8   |     1727    | 154.9 MB/s 
   9b |  76 |     3456    |    1728     |     8   |      864    | 155.2 MB/s 
  10a |  79 |     3584    |    1792     |     8   |     1791    | 155.4 MB/s 
  10b |  79 |     3584    |    1792     |     8   |      896    | 155.4 MB/s 
  11a |  81 |     3712    |    1856     |     8   |     1855    | 155.5 MB/s 
  11b |  81 |     3712    |    1856     |     8   |      928    | 155.3 MB/s 
  12a |  84 |     3840    |    1920     |     8   |     1919    | 154.9 MB/s 
  12b |  84 |     3840    |    1920     |     8   |      960    | 155.5 MB/s 
  13a |  87 |     3968    |    1984     |     8   |     1983    | 155.3 MB/s 
  13b |  87 |     3968    |    1984     |     8   |      992    | 155.4 MB/s 
  14a |  90 |     4096    |    2048     |     8   |     2047    | 155.5 MB/s 
  14b |  90 |     4096    |    2048     |     8   |     1024    | 155.2 MB/s 
  15a |  93 |     4224    |    2112     |     8   |     2111    | 155.2 MB/s 
  15b |  93 |     4224    |    2112     |     8   |     1056    | 155.4 MB/s 
  16a |  95 |     4352    |    2176     |     8   |     2175    | 155.1 MB/s 
  16b |  95 |     4352    |    2176     |     8   |     1088    | 155.6 MB/s 
  17a |  98 |     4480    |    2240     |     8   |     2239    | 155.4 MB/s 
  17b |  98 |     4480    |    2240     |     8   |     1120    | 155.4 MB/s 
  18a | 101 |     4608    |    2304     |     8   |     2303    | 155.0 MB/s 
  18b | 101 |     4608    |    2304     |     8   |     1152    | 155.5 MB/s 

--- FULLY AUTOMATIC TEST PASS 1d (Rough - 18 Sample Points @ 5min Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1a | 104 |     4736    |    2368     |     8   |     2367    | 155.5 MB/s 
   1b | 104 |     4736    |    2368     |     8   |     1184    | 155.5 MB/s 
   2a | 107 |     4864    |    2432     |     8   |     2431    | 155.5 MB/s 
   2b | 107 |     4864    |    2432     |     8   |     1216    | 155.2 MB/s 
   3a | 110 |     4992    |    2496     |     8   |     2495    | 155.3 MB/s 
   3b | 110 |     4992    |    2496     |     8   |     1248    | 155.3 MB/s 
   4a | 112 |     5120    |    2560     |     8   |     2559    | 155.3 MB/s 
   4b | 112 |     5120    |    2560     |     8   |     1280    | 155.3 MB/s 
   5a | 115 |     5248    |    2624     |     8   |     2623    | 155.4 MB/s 
   5b | 115 |     5248    |    2624     |     8   |     1312    | 155.1 MB/s 
   6a | 118 |     5376    |    2688     |     8   |     2687    | 155.3 MB/s 
   6b | 118 |     5376    |    2688     |     8   |     1344    | 155.6 MB/s 
   7a | 121 |     5504    |    2752     |     8   |     2751    | 155.6 MB/s 
   7b | 121 |     5504    |    2752     |     8   |     1376    | 155.5 MB/s 
   8a | 124 |     5632    |    2816     |     8   |     2815    | 155.4 MB/s 
   8b | 124 |     5632    |    2816     |     8   |     1408    | 155.1 MB/s 
   9a | 127 |     5760    |    2880     |     8   |     2879    | 155.4 MB/s 
   9b | 127 |     5760    |    2880     |     8   |     1440    | 155.6 MB/s 
  10a | 129 |     5888    |    2944     |     8   |     2943    | 155.4 MB/s 
  10b | 129 |     5888    |    2944     |     8   |     1472    | 155.2 MB/s 
  11a | 132 |     6016    |    3008     |     8   |     3007    | 155.5 MB/s 
  11b | 132 |     6016    |    3008     |     8   |     1504    | 155.2 MB/s 
  12a | 135 |     6144    |    3072     |     8   |     3071    | 155.0 MB/s 
  12b | 135 |     6144    |    3072     |     8   |     1536    | 155.4 MB/s 
  13a | 138 |     6272    |    3136     |     8   |     3135    | 155.4 MB/s 
  13b | 138 |     6272    |    3136     |     8   |     1568    | 155.4 MB/s 
  14a | 141 |     6400    |    3200     |     8   |     3199    | 155.2 MB/s 
  14b | 141 |     6400    |    3200     |     8   |     1600    | 155.2 MB/s 
  15a | 143 |     6528    |    3264     |     8   |     3263    | 155.3 MB/s 
  15b | 143 |     6528    |    3264     |     8   |     1632    | 155.5 MB/s 
  16a | 146 |     6656    |    3328     |     8   |     3327    | 155.5 MB/s 
  16b | 146 |     6656    |    3328     |     8   |     1664    | 154.8 MB/s 
  17a | 149 |     6784    |    3392     |     8   |     3391    | 131.7 MB/s 
  17b | 149 |     6784    |    3392     |     8   |     1696    | 123.8 MB/s 
  18a | 152 |     6912    |    3456     |     8   |     3455    | 123.1 MB/s 
  18b | 152 |     6912    |    3456     |     8   |     1728    | 118.1 MB/s 

--- Targeting Fastest Result of md_sync_window 1600 bytes for Final Pass ---


--- FULLY AUTOMATIC nr_requests TEST 2 (4 Sample Points @ 10min Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     3200    |    1600     |     128     |     1599    | 119.9 MB/s 
  2   |     3200    |    1600     |     128     |      800    | 128.0 MB/s 
  3   |     3200    |    1600     |       8     |     1599    | 130.6 MB/s 
  4   |     3200    |    1600     |       8     |      800    | 123.3 MB/s 

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 130.6 MB/s

This nr_requests value will be used for the next test.

--- FULLY AUTOMATIC TEST PASS 2 (Fine - 33 Sample Points @ 5min Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1a |  64 |     2944    |    1472     |     8   |     1471    | 119.3 MB/s 
   1b |  64 |     2944    |    1472     |     8   |      736    | 110.9 MB/s 
   2a |  65 |     2960    |    1480     |     8   |     1479    | 122.9 MB/s 
   2b |  65 |     2960    |    1480     |     8   |      740    | 120.2 MB/s 
   3a |  65 |     2976    |    1488     |     8   |     1487    | 111.7 MB/s 
   3b |  65 |     2976    |    1488     |     8   |      744    | 133.5 MB/s 
   4a |  65 |     2992    |    1496     |     8   |     1495    | 124.7 MB/s 
   4b |  65 |     2992    |    1496     |     8   |      748    | 129.5 MB/s 
   5a |  66 |     3008    |    1504     |     8   |     1503    | 129.2 MB/s 
   5b |  66 |     3008    |    1504     |     8   |      752    | 126.1 MB/s 
   6a |  66 |     3024    |    1512     |     8   |     1511    | 114.0 MB/s 
   6b |  66 |     3024    |    1512     |     8   |      756    | 120.7 MB/s 
   7a |  67 |     3040    |    1520     |     8   |     1519    | 120.0 MB/s 
   7b |  67 |     3040    |    1520     |     8   |      760    | 121.8 MB/s 
   8a |  67 |     3056    |    1528     |     8   |     1527    | 143.8 MB/s 
   8b |  67 |     3056    |    1528     |     8   |      764    | 155.3 MB/s 
   9a |  67 |     3072    |    1536     |     8   |     1535    | 155.5 MB/s 
   9b |  67 |     3072    |    1536     |     8   |      768    | 155.4 MB/s 
  10a |  68 |     3088    |    1544     |     8   |     1543    | 155.4 MB/s 
  10b |  68 |     3088    |    1544     |     8   |      772    | 155.2 MB/s 
  11a |  68 |     3104    |    1552     |     8   |     1551    | 155.3 MB/s 
  11b |  68 |     3104    |    1552     |     8   |      776    | 155.4 MB/s 
  12a |  68 |     3120    |    1560     |     8   |     1559    | 155.5 MB/s 
  12b |  68 |     3120    |    1560     |     8   |      780    | 155.4 MB/s 
  13a |  69 |     3136    |    1568     |     8   |     1567    | 155.2 MB/s 
  13b |  69 |     3136    |    1568     |     8   |      784    | 155.5 MB/s 
  14a |  69 |     3152    |    1576     |     8   |     1575    | 155.6 MB/s 
  14b |  69 |     3152    |    1576     |     8   |      788    | 155.4 MB/s 
  15a |  69 |     3168    |    1584     |     8   |     1583    | 155.3 MB/s 
  15b |  69 |     3168    |    1584     |     8   |      792    | 155.4 MB/s 
  16a |  70 |     3184    |    1592     |     8   |     1591    | 155.2 MB/s 
  16b |  70 |     3184    |    1592     |     8   |      796    | 155.4 MB/s 
  17a |  70 |     3200    |    1600     |     8   |     1599    | 155.4 MB/s 
  17b |  70 |     3200    |    1600     |     8   |      800    | 155.3 MB/s 
  18a |  70 |     3216    |    1608     |     8   |     1607    | 155.4 MB/s 
  18b |  70 |     3216    |    1608     |     8   |      804    | 155.3 MB/s 
  19a |  71 |     3232    |    1616     |     8   |     1615    | 155.2 MB/s 
  19b |  71 |     3232    |    1616     |     8   |      808    | 155.3 MB/s 
  20a |  71 |     3248    |    1624     |     8   |     1623    | 155.5 MB/s 
  20b |  71 |     3248    |    1624     |     8   |      812    | 155.4 MB/s 
  21a |  71 |     3264    |    1632     |     8   |     1631    | 155.4 MB/s 
  21b |  71 |     3264    |    1632     |     8   |      816    | 155.3 MB/s 
  22a |  72 |     3280    |    1640     |     8   |     1639    | 155.2 MB/s 
  22b |  72 |     3280    |    1640     |     8   |      820    | 155.4 MB/s 
  23a |  72 |     3296    |    1648     |     8   |     1647    | 155.5 MB/s 
  23b |  72 |     3296    |    1648     |     8   |      824    | 155.4 MB/s 
  24a |  73 |     3312    |    1656     |     8   |     1655    | 155.3 MB/s 
  24b |  73 |     3312    |    1656     |     8   |      828    | 155.3 MB/s 
  25a |  73 |     3328    |    1664     |     8   |     1663    | 155.3 MB/s 
  25b |  73 |     3328    |    1664     |     8   |      832    | 155.3 MB/s 
  26a |  73 |     3344    |    1672     |     8   |     1671    | 155.3 MB/s 
  26b |  73 |     3344    |    1672     |     8   |      836    | 155.5 MB/s 
  27a |  74 |     3360    |    1680     |     8   |     1679    | 155.4 MB/s 
  27b |  74 |     3360    |    1680     |     8   |      840    | 155.4 MB/s 
  28a |  74 |     3376    |    1688     |     8   |     1687    | 155.4 MB/s 
  28b |  74 |     3376    |    1688     |     8   |      844    | 155.5 MB/s 
  29a |  74 |     3392    |    1696     |     8   |     1695    | 155.6 MB/s 
  29b |  74 |     3392    |    1696     |     8   |      848    | 155.4 MB/s 
  30a |  75 |     3408    |    1704     |     8   |     1703    | 155.4 MB/s 
  30b |  75 |     3408    |    1704     |     8   |      852    | 155.4 MB/s 
  31a |  75 |     3424    |    1712     |     8   |     1711    | 155.3 MB/s 
  31b |  75 |     3424    |    1712     |     8   |      856    | 155.4 MB/s 
  32a |  75 |     3440    |    1720     |     8   |     1719    | 155.2 MB/s 
  32b |  75 |     3440    |    1720     |     8   |      860    | 155.4 MB/s 
  33a |  76 |     3456    |    1728     |     8   |     1727    | 155.3 MB/s 
  33b |  76 |     3456    |    1728     |     8   |      864    | 155.5 MB/s 

The results below do NOT include the Baseline test of current values.

The Fastest Sync Speed tested was md_sync_window=1576 at 155.6 MB/s
     Tunable (md_num_stripes): 3152
     Tunable (md_sync_window): 1576
     Tunable (md_sync_thresh): 1575
     Tunable (nr_requests): 8
This will consume 69 MB with md_num_stripes=3152, 2x md_sync_window.
This is 38MB more than your current utilization of 31MB.

The Thriftiest Sync Speed tested was md_sync_window=384 at 149.3 MB/s
     Tunable (md_num_stripes): 768
     Tunable (md_sync_window): 384
     Tunable (md_sync_thresh): 383
     Tunable (nr_requests): 8
This will consume 16 MB with md_num_stripes=768, 2x md_sync_window.
This is 15MB less than your current utilization of 31MB.

The Recommended Sync Speed is md_sync_window=1216 at 154.9 MB/s
     Tunable (md_num_stripes): 2432
     Tunable (md_sync_window): 1216
     Tunable (md_sync_thresh): 1215
     Tunable (nr_requests): 8
This will consume 53 MB with md_num_stripes=2432, 2x md_sync_window.
This is 22MB more than your current utilization of 31MB.

NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 15 Hrs 51 Min 46 Sec.


System Info:  Tower
              unRAID version 6.2.0-rc4
                   md_num_stripes=1408
                   md_sync_window=512
                   md_sync_thresh=192
                   nr_requests=128 (Global Setting)
                   sbNumDisks=5
              CPU: Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
              RAM: 16GiB System Memory

Outputting lshw information for Drives and Controllers:

<snip> because this is too long to submit!</snip>

Array Devices:
    Disk0 sdl is a Parity drive named parity
    Disk1 sdi is a Data drive named disk1
    Disk2 sdj is a Data drive named disk2
    Disk3 sdk is a Data drive named disk3

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       16464376      175904    15185584      484584     1102888    15376812
Low:       16464376     1278792    15185584
High:             0           0           0
Swap:             0           0           0


                      *** END OF REPORT ***

 

I find the "baseline" test confusing, because it says my current values should give a speed of 63.7 MB/s, yet my actual parity check history shows much better values:

 

2016-08-01, 08:43:21	8 hr, 43 min, 20 sec	127.4 MB/s	OK
2016-07-01, 08:32:08	8 hr, 32 min, 6 sec	130.2 MB/s	OK
2016-06-01, 08:58:34	8 hr, 58 min, 32 sec	123.8 MB/s	OK

 

This script added a few "cancelled" entries in the parity check history as well; it would be nice if there were a way to prevent that:

 

2016-08-25, 11:30:29	5 min, 14 sec	Unavailable	Canceled
2016-08-24, 19:32:34	31 sec	Unavailable	Canceled

 

Any other thoughts on the results?


Here are the results on my boring :) server.

 

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 5min Duration)---

Test | RAM | num_stripes | sync_window | nr_reqs | sync_thresh |   Speed 
-----------------------------------------------------------------------------
   1  |  31 |     1408    |     512     |   128   |      192    |  63.7 MB/s 

 

I find the "baseline" test confusing, because it says my current values should give a speed of 63.7 MB/s, yet my actual parity check history shows much better values:

 

2016-08-01, 08:43:21	8 hr, 43 min, 20 sec	127.4 MB/s	OK
2016-07-01, 08:32:08	8 hr, 32 min, 6 sec	130.2 MB/s	OK
2016-06-01, 08:58:34	8 hr, 58 min, 32 sec	123.8 MB/s	OK

 

 

I guess your server isn't so boring after all, eh?  Yes, that is confusing.  It helps to keep in mind that, technically, we are measuring two different things. 

 

The parity check results on your server measure how long it took to read the full 4TB from your drives, calculating the average speed over a single 8-9 hour run.  Nice speeds, by the way.

 

When running the Normal Auto, the script is measuring how much data was processed in 5 minutes, starting from almost the beginning of your drives, not the entire thing.  We measure from the beginning of the drive for two reasons:  1) it's a lot easier, and 2) typically your fastest speeds occur there, so optimizing here hopefully optimizes the entire run.  There are exception cases (having an old, slow 500 GB drive in the mix with fast 8TB drives) that would prevent the beginning from being the fastest, but those scenarios don't apply to your server.

 

The goal isn't to make the actual average parity check speed match these tests, but rather to find what parameters produce higher speeds in the test, which hopefully will produce faster real-world parity check speeds.

 

That said, I would typically expect these tests to report higher speeds, not lower, and certainly not by half.

 

It's certainly possible that "something" is slowing down the very beginning of your parity checks.  If this is true, I would imagine you would see this in the GUI as well.  Start up a parity check, and refresh the GUI about every 30 seconds.  See how long it takes before the GUI reports high speeds.  It might crawl along for a minute or two, then jump up to 150MB/s, but in a 5 minute test, a couple minutes of crawling can really drop the tested speeds.
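
If refreshing the GUI gets tedious, the same numbers can be sampled from a shell; assuming unRAID's custom /proc/mdstat exposes the resync counters (an assumption, per the field names discussed earlier), something like this works:

# Re-sample unRAID's resync counters every 30 seconds instead of the GUI:
watch -n 30 "grep mdResync /proc/mdstat"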

 

If it turns out that the beginning of your parity checks really are slow (and if the script doesn't give you values to fix it), it's probably smart to run a SMART report, to see if one of your drives is smartin'.  Could be some bad sectors at the beginning of a drive slowing things down.  This is unlikely, and I'm not trying to scare you, but it may be worth double-checking.

 

This script added a few "cancelled" entries in the parity check history as well; it would be nice if there were a way to prevent that:

 

2016-08-25, 11:30:29	5 min, 14 sec	Unavailable	Canceled
2016-08-24, 19:32:34	31 sec	Unavailable	Canceled

 

 

I agree.  Out of my hands, sorry.  I had noticed that, regardless of how many parity checks get started and canceled, I never saw more than one History row per day... though I just noticed I now have more than one per day.  Not sure if it takes the first or last daily result, but I'm guessing it is one of the two.

 

Any other thoughts on the results?

 

I see some inconsistency from the first pass to the second pass, mostly in the lower ranges.  It does appear to me that your server likes higher md_sync_window values (1536 looks good), nr_requests=8, and higher md_sync_thresh values (1535).  Use 3072 for md_num_stripes, and run a parity check.  Perhaps we can drop that already excellent parity check time.

 

But you may want to run that other test first, starting a parity check in the GUI and watching the first few minutes to see how it behaves.

 

Thanks,

Paul


UTT v4 Beta Testers, v4b3 is now available for download.

 

Changes:

  • Removed obsolete path from mdcmd
  • Fixed drive specific nr_request values querying bug
  • Changed lshw flag from -businfo to -short
  • Drastically reduced the # of test points in a Short Auto, now completes in <21 min
  • Removed Fastest/Thriftiest/Recommended subreport from Short Auto
  • Combined a/b Test rows in the report into a single line, easier to read and shorter!

 

Enjoy,

Paul


So, I ran the short test on 4b3 before running the normal test overnight.  I got very different results from the previous short test.

 

       unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)

        Tunables Report produced Thu Aug 25 20:05:40 CDT 2016

                         Run on server: nas

                   Short Automatic Parity Sync Test


Current Values:  md_num_stripes=560, md_sync_window=280, md_sync_thresh=140
                 Global nr_requests=8
                    sdo nr_requests=8
                    sdn nr_requests=8
                    sdm nr_requests=8
                    sdl nr_requests=8
                    sdh nr_requests=8
                    sdk nr_requests=8
                    sdj nr_requests=8
                    sdi nr_requests=8
                    sdg nr_requests=8
                    sdf nr_requests=8
                    sde nr_requests=8
                    sdd nr_requests=8
                    sdc nr_requests=8


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s 
-------------------------------------------------------
   1  |  32 |    560  |   280  |   8  |   140  | 140.1 

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 60sec Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    |  87.1 MB/s 
  2   |     1536    |     768     |     128     |      384    | 133.2 MB/s 
  3   |     1536    |     768     |       8     |      767    | 111.5 MB/s 
  4   |     1536    |     768     |       8     |      384    | 108.1 MB/s 

Fastest vals were nr_reqs=128 and sync_thresh=50% of sync_window at 133.2 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 4 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  43 |    768  |   384  | 128  |   383  |  99.0 |   192  | 149.3 
   2  |  73 |   1280  |   640  | 128  |   639  | 139.9 |   320  | 126.6 
   3  | 102 |   1792  |   896  | 128  |   895  | 126.6 |   448  | 115.2 
   4  | 131 |   2304  |  1152  | 128  |  1151  | 130.4 |   576  | 104.4 

--- FULLY AUTOMATIC TEST PASS 1b (Rough - 2 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |   7 |    128  |    64  | 128  |    63  | 110.3 |    32  | 142.2 
   2  |  36 |    640  |   320  | 128  |   319  | 111.6 |   160  | 144.3 

--- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE NORMAL AUTO ---

If the speeds changed with different values you should run a NORMAL AUTO test.

Completed: 0 Hrs 11 Min 12 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


System Info:  nas
              unRAID version 6.2.0-rc4
                   md_num_stripes=560
                   md_sync_window=280
                   md_sync_thresh=140
                   nr_requests=8 (Global Setting)
                   sbNumDisks=14
              CPU: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
  Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
  (followed by ~125 more "CPU" / "CPU [empty]" socket entries from lshw, trimmed here)
              RAM: 32GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path             Device     Class      Description
======================================================
/0/100/7.1                      storage    82371AB/EB/MB PIIX4 IDE
/0/100/15/0          scsi3      storage    SAS1068 PCI-X Fusion-MPT SAS
/0/100/15/0/0.0.0    /dev/sda   disk       209MB Virtual disk
/0/100/16/0          scsi2      storage    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
/0/100/16/0/0.0.0    /dev/sdb   disk       512GB Crucial_CT512M55
/0/100/16/0/0.1.0    /dev/sdc   disk       4TB HGST HDN724040AL
/0/100/16/0/0.2.0    /dev/sdd   disk       4TB Hitachi HDS72404
/0/100/16/0/0.3.0    /dev/sde   disk       4TB HGST HDN724040AL
/0/100/16/0/0.4.0    /dev/sdf   disk       4TB Hitachi HDS72404
/0/100/16/0/0.5.0    /dev/sdg   disk       4TB HGST HDS724040AL
/0/100/17/0          scsi4      storage    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
/0/100/17/0/0.3.0    /dev/sdk   disk       4TB HGST HDS724040AL
/0/100/17/0/0.4.0    /dev/sdl   disk       4TB HGST HDS724040AL
/0/100/17/0/0.5.0    /dev/sdm   disk       4TB HGST HDS724040AL
/0/100/17/0/0.6.0    /dev/sdn   disk       4TB HGST HDS724040AL
/0/100/17/0/0.7.0    /dev/sdo   disk       4TB Hitachi HDS72404
/0/100/17/0/0.0.0    /dev/sdh   disk       4TB HGST HDS724040AL
/0/100/17/0/0.1.0    /dev/sdi   disk       4TB Hitachi HDS72404
/0/100/17/0/0.2.0    /dev/sdj   disk       4TB HGST HDS724040AL
/0/1                 scsi5      storage    
/0/1/0.0.0           /dev/sdp   disk       15GB Reader     SD/MS
/0/1/0.0.0/0         /dev/sdp   disk       15GB 
/0/1/0.0.1           /dev/sdq   disk       Reader  MicSD/M2
/0/1/0.0.1/0         /dev/sdq   disk       

Array Devices:
    Disk0 sdo is a Parity drive named parity
    Disk1 sdn is a Data drive named disk1
    Disk2 sdm is a Data drive named disk2
    Disk3 sdl is a Data drive named disk3
    Disk4 sdh is a Data drive named disk4
    Disk5 sdk is a Data drive named disk5
    Disk6 sdj is a Data drive named disk6
    Disk7 sdi is a Data drive named disk7
    Disk8 sdg is a Data drive named disk8
    Disk9 sdf is a Data drive named disk9
    Disk10 sde is a Data drive named disk10
    Disk11 sdd is a Data drive named disk11
    Disk12 sdc is a Data drive named disk12

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       32950372     5156136    24595120      415304     3199116    26628120
Low:       32950372     8355252    24595120
High:             0           0           0
Swap:             0           0           0


                      *** END OF REPORT ***


I guess your server isn't so boring after all, eh? 

 

LOL I guess not :)

 

That said, I would typically expect these tests to report higher speeds, not lower, and certainly not by half.

 

It's certainly possible that "something" is slowing down the very beginning of your parity checks.  If this is true, I would imagine you would see this in the GUI as well.  Start up a parity check, and refresh the GUI about every 30 seconds.  See how long it takes before the GUI reports high speeds.  It might crawl along for a minute or two, then jump up to 150MB/s, but in a 5 minute test, a couple minutes of crawling can really drop the tested speeds.

 

Well, I started a parity check and was seeing numbers hover around 157 MB/s, so it doesn't seem like a standard parity check started off slow like we anticipated.

 

I then changed my settings as you suggested:

 

nr_requests 128 -> 8
md_sync_window 512 -> 1536
md_sync_thresh 192 -> 1535
md_num_stripes 1408 -> 3072
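
For reference, a minimal sketch of applying these on the fly (assuming unRAID's mdcmd accepts these tunable names at runtime; the per-drive loop is broader than what the script itself does):

    # md_* tunables go through unRAID's mdcmd and take effect immediately
    mdcmd set md_num_stripes 3072
    mdcmd set md_sync_window 1536
    mdcmd set md_sync_thresh 1535

    # nr_requests is a standard Linux block-queue knob, set per drive
    for q in /sys/block/sd[a-z]/queue/nr_requests; do
        echo 8 > "$q"    # note: this hits ALL sd* devices, flash drive included
    done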

 

And started a parity check and was seeing numbers a bit higher, about 160 MB/s.

 

I cancelled that and ran the short test using v4.0b3.  The baseline with the new numbers was still a bit lower than I would have expected, but not as bad as before.  Actually, *all* of the numbers were slightly lower than before (none of them beat 148), but perhaps that is the difference between a 30 second test and a 5 minute test.  Or maybe I should have rebooted between tests?

 

       unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)

        Tunables Report produced Thu Aug 25 19:20:21 PDT 2016

                         Run on server: Tower

                   Short Automatic Parity Sync Test


Current Values:  md_num_stripes=3072, md_sync_window=1536, md_sync_thresh=1535
                 Global nr_requests=8
                    sdl nr_requests=8
                    sdi nr_requests=8
                    sdj nr_requests=8
                    sdk nr_requests=8


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s 
-------------------------------------------------------
   1  |  67 |   3072  |  1536  |   8  |  1535  | 121.1 

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 60sec Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    | 115.7 MB/s 
  2   |     1536    |     768     |     128     |      384    | 144.8 MB/s 
  3   |     1536    |     768     |       8     |      767    | 145.6 MB/s 
  4   |     1536    |     768     |       8     |      384    | 143.5 MB/s 

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 145.6 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 4 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  16 |    768  |   384  |   8  |   383  | 143.9 |   192  | 143.3 
   2  |  28 |   1280  |   640  |   8  |   639  | 145.9 |   320  | 143.9 
   3  |  39 |   1792  |   896  |   8  |   895  | 146.6 |   448  | 144.0 
   4  |  50 |   2304  |  1152  |   8  |  1151  | 147.9 |   576  | 145.4 

--- FULLY AUTOMATIC TEST PASS 1c (Rough - 5 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  53 |   2432  |  1216  |   8  |  1215  | 148.4 |   608  | 144.3 
   2  |  64 |   2944  |  1472  |   8  |  1471  | 148.1 |   736  | 147.1 
   3  |  76 |   3456  |  1728  |   8  |  1727  | 148.2 |   864  | 148.4 
   4  |  87 |   3968  |  1984  |   8  |  1983  | 148.7 |   992  | 147.4 
   5  |  98 |   4480  |  2240  |   8  |  2239  | 148.9 |  1120  | 148.5 

--- FULLY AUTOMATIC TEST PASS 1d (Rough - 5 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  | 101 |   4608  |  2304  |   8  |  2303  | 149.0 |  1152  | 148.3 
   2  | 112 |   5120  |  2560  |   8  |  2559  | 148.7 |  1280  | 148.8 
   3  | 124 |   5632  |  2816  |   8  |  2815  | 147.7 |  1408  | 148.3 
   4  | 135 |   6144  |  3072  |   8  |  3071  | 148.0 |  1536  | 146.1 
   5  | 146 |   6656  |  3328  |   8  |  3327  | 148.5 |  1664  | 148.4 

--- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE NORMAL AUTO ---

If the speeds changed with different values you should run a NORMAL AUTO test.

Completed: 0 Hrs 19 Min 5 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


System Info:  Tower
              unRAID version 6.2.0-rc4
                   md_num_stripes=3072
                   md_sync_window=1536
                   md_sync_thresh=1535
                   nr_requests=8 (Global Setting)
                   sbNumDisks=5
              CPU: Intel(R) Xeon(R) CPU E3-1240 v3 @ 3.40GHz
              RAM: 16GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path         Device     Class          Description
======================================================
/0/100/1f.2                 storage        8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode]
/0/1             scsi0      storage        
/0/1/0.0.0       /dev/sda   disk           16GB Cruzer Fit
/0/2             scsi1      storage        
/0/2/0.0.0       /dev/sdb   disk           USB3.0 CRW-CF/MD
/0/2/0.0.0/0     /dev/sdb   disk           
/0/2/0.0.1       /dev/sdc   disk           USB3.0 CRW-SM/xD
/0/2/0.0.1/0     /dev/sdc   disk           
/0/2/0.0.2       /dev/sdd   disk           USB3.0 CRW-SD
/0/2/0.0.2/0     /dev/sdd   disk           
/0/2/0.0.3       /dev/sde   disk           USB3.0 CRW-MS
/0/2/0.0.3/0     /dev/sde   disk           
/0/2/0.0.4       /dev/sdf   disk           USB3.0 CRW-SD/MS
/0/2/0.0.4/0     /dev/sdf   disk           
/0/3             scsi6      storage        
/0/3/0.0.0       /dev/sdk   disk           4TB ST4000VN000-1H41
/0/4             scsi8      storage        
/0/4/0.0.0       /dev/sdl   disk           4TB ST4000VN000-1H41
/0/5             scsi2      storage        
/0/5/0.0.0       /dev/sdg   disk           15GB Patriot Memory
/0/5/0.0.0/0     /dev/sdg   disk           15GB 
/0/6             scsi3      storage        
/0/6/0.0.0       /dev/sdh   disk           512GB Samsung SSD 850
/0/9             scsi4      storage        
/0/9/0.0.0       /dev/sdi   disk           4TB ST4000VN000-1H41
/0/a             scsi5      storage        
/0/a/0.0.0       /dev/sdj   disk           4TB ST4000VN000-1H41

Array Devices:
    Disk0 sdl is a Parity drive named parity
    Disk1 sdi is a Data drive named disk1
    Disk2 sdj is a Data drive named disk2
    Disk3 sdk is a Data drive named disk3

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       16464376      178796    15181456      484652     1104124    15374096
Low:       16464376     1282920    15181456
High:             0           0           0
Swap:             0           0           0


                      *** END OF REPORT ***

 

I rebooted and started a full parity check.  We'll see how that compares.

 

Update...  Wow, it started off around 160 MB/s, but I'm 20 minutes in and it has consistently been 170-175!  Can't wait to see the final numbers :)

 

Update 2... it must have really slowed down at the end (presumably because the check finishes on the slower inner tracks of the drives). The final was an improvement, but it only knocked about 20 minutes off my previous best:

2016-08-26, 04:06:57	8 hr, 12 min, 31 sec	135.4 MB/s	OK

2016-08-01, 08:43:21	8 hr, 43 min, 20 sec	127.4 MB/s	OK
2016-07-01, 08:32:08	8 hr, 32 min, 6 sec	130.2 MB/s	OK
2016-06-01, 08:58:34	8 hr, 58 min, 32 sec	123.8 MB/s	OK
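
Those averages do line up with the run times for a 4TB parity disk, so the history math checks out; a quick sanity check of the newest entry (assuming 4TB = 4,000,000 MB of parity):

    # elapsed seconds for the 8 hr, 12 min, 31 sec run, against 4TB of parity
    awk 'BEGIN { s = 8*3600 + 12*60 + 31; printf "%.1f MB/s\n", 4e6 / s }'
    # prints 135.4 MB/s -- matching the 2016-08-26 entry above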

Link to comment

Overnight I ran a normal test for Tower7; the previous short test report for this server is here.

 

 

 

       unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)

        Tunables Report produced Fri Aug 26 00:11:11 BST 2016

                         Run on server: Tower7

                  Normal Automatic Parity Sync Test


Current Values:  md_num_stripes=4096, md_sync_window=2048, md_sync_thresh=2000
                 Global nr_requests=8
                    sdc nr_requests=8
                    sdd nr_requests=8
                    sde nr_requests=8
                    sdf nr_requests=8
                    sdg nr_requests=8


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s 
-------------------------------------------------------
   1  | 106 |   4096  |  2048  |   8  |  2000  | 182.4 

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 10min Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    | 185.9 MB/s 
  2   |     1536    |     768     |     128     |      384    | 184.6 MB/s 
  3   |     1536    |     768     |       8     |      767    | 184.7 MB/s 
  4   |     1536    |     768     |       8     |      384    | 184.7 MB/s 

Fastest vals were nr_reqs=128 and sync_thresh=99% of sync_window at 185.9 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 13 Sample Points @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  19 |    768  |   384  | 128  |   383  | 184.2 |   192  | 187.6 
   2  |  23 |    896  |   448  | 128  |   447  | 184.6 |   224  | 183.1 
   3  |  26 |   1024  |   512  | 128  |   511  | 183.7 |   256  | 184.6 
   4  |  29 |   1152  |   576  | 128  |   575  | 181.3 |   288  | 184.7 
   5  |  33 |   1280  |   640  | 128  |   639  | 187.4 |   320  | 184.7 
   6  |  36 |   1408  |   704  | 128  |   703  | 184.4 |   352  | 185.0 
   7  |  39 |   1536  |   768  | 128  |   767  | 183.0 |   384  | 181.8 
   8  |  43 |   1664  |   832  | 128  |   831  | 183.2 |   416  | 184.5 
   9  |  46 |   1792  |   896  | 128  |   895  | 187.5 |   448  | 181.2 
  10  |  49 |   1920  |   960  | 128  |   959  | 182.9 |   480  | 184.6 
  11  |  53 |   2048  |  1024  | 128  |  1023  | 183.8 |   512  | 182.0 
  12  |  56 |   2176  |  1088  | 128  |  1087  | 181.9 |   544  | 184.7 
  13  |  59 |   2304  |  1152  | 128  |  1151  | 184.7 |   576  | 182.5 

--- FULLY AUTOMATIC TEST PASS 1b (Rough - 5 Sample Points @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |   3 |    128  |    64  | 128  |    63  | 184.3 |    32  | 159.9 
   2  |   6 |    256  |   128  | 128  |   127  | 184.5 |    64  | 168.0 
   3  |   9 |    384  |   192  | 128  |   191  | 184.4 |    96  | 181.0 
   4  |  13 |    512  |   256  | 128  |   255  | 184.5 |   128  | 189.5 
   5  |  16 |    640  |   320  | 128  |   319  | 184.5 |   160  | 184.6 

--- Targeting Fastest Result of md_sync_window 256 bytes for Final Pass ---


--- FULLY AUTOMATIC nr_requests TEST 2 (4 Sample Points @ 10min Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |      512    |     256     |     128     |      255    | 184.4 MB/s 
  2   |      512    |     256     |     128     |      128    | 184.3 MB/s 
  3   |      512    |     256     |       8     |      255    | 184.7 MB/s 
  4   |      512    |     256     |       8     |      128    | 184.1 MB/s 

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 184.7 MB/s

This nr_requests value will be used for the next test.



--- FULLY AUTOMATIC TEST PASS 2 (Fine - 33 Sample Points @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |   6 |    256  |   128  |   8  |   127  | 184.2 |    64  | 166.5 
   2  |   7 |    272  |   136  |   8  |   135  | 184.4 |    68  | 167.9 
   3  |   7 |    288  |   144  |   8  |   143  | 184.4 |    72  | 166.7 
   4  |   7 |    304  |   152  |   8  |   151  | 182.9 |    76  | 175.3 
   5  |   8 |    320  |   160  |   8  |   159  | 184.9 |    80  | 171.5 
   6  |   8 |    336  |   168  |   8  |   167  | 185.0 |    84  | 172.0 
   7  |   9 |    352  |   176  |   8  |   175  | 184.4 |    88  | 171.2 
   8  |   9 |    368  |   184  |   8  |   183  | 184.4 |    92  | 173.7 
   9  |   9 |    384  |   192  |   8  |   191  | 184.4 |    96  | 175.1 
  10  |  10 |    400  |   200  |   8  |   199  | 186.8 |   100  | 180.9 
  11  |  10 |    416  |   208  |   8  |   207  | 187.4 |   104  | 182.7 
  12  |  11 |    432  |   216  |   8  |   215  | 186.2 |   108  | 182.6 
  13  |  11 |    448  |   224  |   8  |   223  | 184.4 |   112  | 184.4 
  14  |  12 |    464  |   232  |   8  |   231  | 184.5 |   116  | 187.8 
  15  |  12 |    480  |   240  |   8  |   239  | 184.5 |   120  | 184.8 
  16  |  12 |    496  |   248  |   8  |   247  | 183.5 |   124  | 184.6 
  17  |  13 |    512  |   256  |   8  |   255  | 183.8 |   128  | 183.3 
  18  |  13 |    528  |   264  |   8  |   263  | 184.5 |   132  | 187.3 
  19  |  14 |    544  |   272  |   8  |   271  | 188.1 |   136  | 184.8 
  20  |  14 |    560  |   280  |   8  |   279  | 184.8 |   140  | 184.4 
  21  |  14 |    576  |   288  |   8  |   287  | 183.9 |   144  | 184.5 
  22  |  15 |    592  |   296  |   8  |   295  | 184.5 |   148  | 187.5 
  23  |  15 |    608  |   304  |   8  |   303  | 184.7 |   152  | 184.5 
  24  |  16 |    624  |   312  |   8  |   311  | 184.4 |   156  | 184.7 
  25  |  16 |    640  |   320  |   8  |   319  | 185.9 |   160  | 182.1 
  26  |  17 |    656  |   328  |   8  |   327  | 187.4 |   164  | 184.5 
  27  |  17 |    672  |   336  |   8  |   335  | 187.6 |   168  | 184.5 
  28  |  17 |    688  |   344  |   8  |   343  | 184.4 |   172  | 184.6 
  29  |  18 |    704  |   352  |   8  |   351  | 184.5 |   176  | 184.7 
  30  |  18 |    720  |   360  |   8  |   359  | 184.6 |   180  | 187.4 
  31  |  19 |    736  |   368  |   8  |   367  | 186.5 |   184  | 184.6 
  32  |  19 |    752  |   376  |   8  |   375  | 186.1 |   188  | 184.5 
  33  |  19 |    768  |   384  |   8  |   383  | 184.5 |   192  | 184.7 

The results below do NOT include the Baseline test of current values.

The Fastest Sync Speed tested was md_sync_window=272 at 188.1 MB/s
     Tunable (md_num_stripes): 544
     Tunable (md_sync_window): 272
     Tunable (md_sync_thresh): 271
     Tunable (nr_requests): 8
This will consume 14 MB with md_num_stripes=544, 2x md_sync_window.
This is 92MB less than your current utilization of 106MB.

The Thriftiest Sync Speed tested was md_sync_window=64 at 184.3 MB/s
     Tunable (md_num_stripes): 128
     Tunable (md_sync_window): 64
     Tunable (md_sync_thresh): 63
     Tunable (nr_requests): 8
This will consume 3 MB with md_num_stripes=128, 2x md_sync_window.
This is 103MB less than your current utilization of 106MB.

The Recommended Sync Speed is md_sync_window=208 at 187.4 MB/s
     Tunable (md_num_stripes): 416
     Tunable (md_sync_window): 208
     Tunable (md_sync_thresh): 207
     Tunable (nr_requests): 8
This will consume 10 MB with md_num_stripes=416, 2x md_sync_window.
This is 96MB less than your current utilization of 106MB.

NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 10 Hrs 17 Min 54 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


System Info:  Tower7
              unRAID version 6.2.0-rc4
                   md_num_stripes=4096
                   md_sync_window=2048
                   md_sync_thresh=2000
                   nr_requests=8 (Global Setting)
                   sbNumDisks=6
              CPU: Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
              RAM: 32GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path            Device       Class      Description
=======================================================
/0/100/6/0          scsi1        storage    ASC-1405 Unified Serial HBA
/0/100/6/0/0.1.0    /dev/sdk     disk       120GB KINGSTON SV300S3
/0/100/6/0/0.2.0    /dev/sdl     disk       120GB KINGSTON SV300S3
/0/100/6/0/0.3.0    /dev/sdm     disk       120GB KINGSTON SV300S3
/0/100/6/0/0.0.0    /dev/sdj     disk       120GB KINGSTON SV300S3
/0/100/1c/0                      storage    ASM1062 Serial ATA Controller
/0/100/1f.2                      storage    6 Series/C200 Series Chipset Family SATA AHCI Controller
/0/1                scsi0        storage    
/0/1/0.0.0          /dev/sda     disk       7864MB DataTraveler 2.0
/0/1/0.0.0/0        /dev/sda     disk       7864MB 
/0/2                scsi2        storage    
/0/2/0.0.0          /dev/sdb     disk       512GB TS512GSSD370S
/0/3                scsi3        storage    
/0/3/0.0.0          /dev/sdc     disk       3TB TOSHIBA DT01ACA3
/0/8                scsi4        storage    
/0/8/0.0.0          /dev/sdd     disk       3TB TOSHIBA DT01ACA3
/0/9                scsi5        storage    
/0/9/0.0.0          /dev/sde     disk       3TB TOSHIBA DT01ACA3
/0/a                scsi6        storage    
/0/a/0.0.0          /dev/sdf     disk       3TB TOSHIBA DT01ACA3
/0/b                scsi7        storage    
/0/b/0.0.0          /dev/sdg     disk       3TB TOSHIBA DT01ACA3
/0/c                scsi8        storage    
/0/c/0.0.0          /dev/sdh     disk       180GB INTEL SSDSC2CT18
/0/d                scsi9        storage    
/0/d/0.0.0          /dev/sdi     disk       500GB TOSHIBA MK5055GS

Array Devices:
    Disk0 sdc is a Parity drive named parity
    Disk1 sdd is a Data drive named disk1
    Disk2 sde is a Data drive named disk2
    Disk3 sdf is a Data drive named disk3
    Disk4 sdg is a Data drive named disk4

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       32991160     5420312    25549684      430264     2021164    26694448
Low:       32991160     7441476    25549684
High:             0           0           0
Swap:             0           0           0


                      *** END OF REPORT ***

 

 

Note the much more consistent results. I'm now running a parity check with the fastest values found. I don't expect much speed improvement, as this is a very simple server, but if it remains similar to before, it means I had excessively high values and was wasting a lot of RAM.

 

Suggestion: since the short test is now much faster, how about doubling (or even tripling) each sample time? I believe this would make the short test much more accurate, helping each user decide if it's worth doing the 10 hour normal test.

 

P.S.: Could there be a difference in how speeds are reported between the script and unRAID, like one using MiB/s and the other MB/s? I noticed this with the original script too: the reported unRAID speed is usually about 8% higher than the script's. Not that it really matters, since the point is finding the best reported speed, but it can make the script results look slower when in fact they're not.
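
For what it's worth, a pure MiB/s vs MB/s mix-up would only account for about a 4.9% gap, since 1 MiB = 1.048576 MB (the 150 MiB/s below is just an illustrative value):

    # convert a MiB/s reading to MB/s: multiply by 2^20 / 10^6
    awk 'BEGIN { mib = 150; mb = mib * 1048576 / 1e6;
                 printf "%.1f MiB/s = %.1f MB/s (+%.1f%%)\n", mib, mb, (mb/mib - 1) * 100 }'
    # 150.0 MiB/s = 157.3 MB/s (+4.9%)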

Link to comment

Finally back at the computer.  Short test results for Server A

*******************************************************************************
       unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)

        Tunables Report produced Fri Aug 26 07:07:43 EDT 2016

                         Run on server: Server_A

                   Short Automatic Parity Sync Test


Current Values:  md_num_stripes=1280, md_sync_window=384, md_sync_thresh=192
                 Global nr_requests=128
                    sdg nr_requests=128
                    sdl nr_requests=128
                    sdc nr_requests=128
                    sde nr_requests=128
                    sdf nr_requests=128
                    sdh nr_requests=128
                    sdk nr_requests=128
                    sdi nr_requests=128
                    sdm nr_requests=128
                    sdj nr_requests=128
                    sdd nr_requests=128


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s
-------------------------------------------------------
   1  |  63 |   1280  |   384  | 128  |   192  |  61.8

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 60sec Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    |  76.2 MB/s
  2   |     1536    |     768     |     128     |      384    |  81.6 MB/s
  3   |     1536    |     768     |       8     |      767    |  95.3 MB/s
  4   |     1536    |     768     |       8     |      384    |  76.4 MB/s

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 95.3 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 4 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s
------------------------------------------------------------------------
   1  |  37 |    768  |   384  |   8  |   383  |  54.3 |   192  |  43.8
   2  |  63 |   1280  |   640  |   8  |   639  |  74.3 |   320  |  64.5
   3  |  88 |   1792  |   896  |   8  |   895  | 102.3 |   448  |  90.0
   4  | 113 |   2304  |  1152  |   8  |  1151  | 115.2 |   576  | 115.1

--- FULLY AUTOMATIC TEST PASS 1c (Rough - 5 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s
------------------------------------------------------------------------
   1  | 120 |   2432  |  1216  |   8  |  1215  | 118.6 |   608  | 116.1
   2  | 145 |   2944  |  1472  |   8  |  1471  | 115.9 |   736  | 103.2
   3  | 170 |   3456  |  1728  |   8  |  1727  | 115.4 |   864  | 111.9
   4  | 195 |   3968  |  1984  |   8  |  1983  | 117.1 |   992  | 108.9
   5  | 221 |   4480  |  2240  |   8  |  2239  | 116.0 |  1120  | 110.5

--- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE NORMAL AUTO ---

If the speeds changed with different values you should run a NORMAL AUTO test.

Completed: 0 Hrs 14 Min 29 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


System Info:  Server_A
              unRAID version 6.2.0-rc4
                   md_num_stripes=1280
                   md_sync_window=384
                   md_sync_thresh=192
                   nr_requests=128 (Global Setting)
                   sbNumDisks=12
              CPU: AMD A8-6600K APU with Radeon(tm) HD Graphics
              RAM: 12GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path            Device       Class       Description
========================================================
/0/100/3/0          scsi3        storage     88SE9485 SAS/SATA 6Gb/s controller
/0/100/3/0/0.5.0    /dev/sdk     disk        4TB ST4000DM000-1F21
/0/100/3/0/0.6.0    /dev/sdl     disk        3TB ST3000DM001-1CH1
/0/100/3/0/0.7.0    /dev/sdm     disk        3TB ST3000DM001-9YN1
/0/100/3/0/0.0.0    /dev/sdf     disk        3TB ST3000DM001-1CH1
/0/100/3/0/0.1.0    /dev/sdg     disk        4TB ST4000DM000-1F21
/0/100/3/0/0.2.0    /dev/sdh     disk        3TB ST3000DM001-1CH1
/0/100/3/0/0.3.0    /dev/sdi     disk        3TB ST3000DM001-1CH1
/0/100/3/0/0.4.0    /dev/sdj     disk        3TB ST3000DM001-1ER1
/0/100/11                        storage     FCH SATA Controller [AHCI mode]
/0/1                scsi0        storage
/0/1/0.0.0          /dev/sda     disk        7756MB DataTraveler SE9
/0/1/0.0.0/0        /dev/sda     disk        7756MB
/0/2                scsi1        storage
/0/2/0.0.0          /dev/sdb     disk        240GB Corsair Force LE
/0/3                scsi2        storage
/0/3/0.0.0          /dev/sdc     disk        3TB ST3000DM001-9YN1
/0/4                scsi4        storage
/0/4/0.0.0          /dev/sdd     disk        3TB WDC WD30EZRX-00S
/0/5                scsi5        storage
/0/5/0.0.0          /dev/sde     disk        3TB WDC WD30EZRX-00M

Array Devices:
    Disk0 sdg is a Parity drive named parity
    Disk1 sdl is a Data drive named disk1
    Disk2 sdc is a Data drive named disk2
    Disk3 sde is a Data drive named disk3
    Disk4 sdf is a Data drive named disk4
    Disk5 sdh is a Data drive named disk5
    Disk6 sdk is a Data drive named disk6
    Disk7 sdi is a Data drive named disk7
    Disk8 sdm is a Data drive named disk8
    Disk9 sdj is a Data drive named disk9
    Disk10 sdd is a Data drive named disk10

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       12182180     1699264     9952928      384640      529988     9827008
Low:       12182180     2229252     9952928
High:             0           0           0
Swap:             0           0           0

Starting the Normal Test

TL;DR: I don't believe the values being returned are entirely accurate. E.g., the baseline test done at the start of the script comes up with 61.8 MB/s, yet my average speed on a complete parity check is 98.6 MB/s (and checking Dynamix for progress during the first half hour or so always shows ~140 MB/s).

Link to comment

Server B results (upgraded it to 6.2 just for this; I'll have to revert to 6.1.9 since CA is required to run on both 6.2 and 6.1)

       unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)

        Tunables Report produced Fri Aug 26 07:22:39 EDT 2016

                         Run on server: Server_B

                   Short Automatic Parity Sync Test


Current Values:  md_num_stripes=1280, md_sync_window=384, md_sync_thresh=192
                 Global nr_requests=128
                    sdd nr_requests=128
                    sdf nr_requests=128
                    sdb nr_requests=128
                    sdc nr_requests=128
                    sde nr_requests=128
                    sdg nr_requests=128
                    sdl nr_requests=128
                    sdh nr_requests=128
                    sdi nr_requests=128
                    sdj nr_requests=128
                    sdk nr_requests=128
                    sdm nr_requests=128


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s 
-------------------------------------------------------
   1  |  68 |   1280  |   384  | 128  |   192  |  37.7 

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 60sec Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    |  94.6 MB/s 
  2   |     1536    |     768     |     128     |      384    |  54.3 MB/s 
  3   |     1536    |     768     |       8     |      767    |  96.4 MB/s 
  4   |     1536    |     768     |       8     |      384    |  53.4 MB/s 

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 96.4 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 4 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  40 |    768  |   384  |   8  |   383  |  80.7 |   192  |  35.7 
   2  |  68 |   1280  |   640  |   8  |   639  |  97.9 |   320  |  49.4 
   3  |  95 |   1792  |   896  |   8  |   895  |  89.6 |   448  |  57.8 
   4  | 122 |   2304  |  1152  |   8  |  1151  |  96.4 |   576  |  71.4 

--- FULLY AUTOMATIC TEST PASS 1c (Rough - 5 Sample Points @ 30sec Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  | 129 |   2432  |  1216  |   8  |  1215  | 105.7 |   608  |  75.0 
   2  | 156 |   2944  |  1472  |   8  |  1471  |  97.8 |   736  |  99.0 
   3  | 184 |   3456  |  1728  |   8  |  1727  |  99.6 |   864  |  98.4 
   4  | 211 |   3968  |  1984  |   8  |  1983  |  92.2 |   992  |  99.2 
   5  | 238 |   4480  |  2240  |   8  |  2239  |  97.4 |  1120  |  97.8 

--- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE NORMAL AUTO ---

If the speeds changed with different values you should run a NORMAL AUTO test.

Completed: 0 Hrs 14 Min 25 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


System Info:  Server_B
              unRAID version 6.2.0-rc4
                   md_num_stripes=1280
                   md_sync_window=384
                   md_sync_thresh=192
                   nr_requests=128 (Global Setting)
                   sbNumDisks=13
              CPU: AMD Sempron(tm) 3850 APU with Radeon(tm) R3
              RAM: 4GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path              Device     Class       Description
========================================================
/0/100/2.1/0          scsi11     storage     SAS1068E PCI-Express Fusion-MPT SAS
/0/100/2.1/0/0.4.0    /dev/sdj   disk        1TB ST31000333AS
/0/100/2.1/0/0.5.0    /dev/sdk   disk        2TB Hitachi HDS72202
/0/100/2.1/0/0.6.0    /dev/sdl   disk        2TB WDC WD20EFRX-68E
/0/100/2.1/0/0.7.0    /dev/sdm   disk        1TB ST31000528AS
/0/100/2.1/0/0.0.0    /dev/sdf   disk        2TB ST2000DM001-1CH1
/0/100/2.1/0/0.1.0    /dev/sdg   disk        2TB ST2000DM001-1CH1
/0/100/2.1/0/0.2.0    /dev/sdh   disk        1TB Hitachi HDT72101
/0/100/2.1/0/0.3.0    /dev/sdi   disk        1TB ST31000524AS
/0/100/2.2/0                     storage     88SE9123 PCIe SATA 6.0 Gb/s controller
/0/100/11                        storage     FCH SATA Controller [AHCI mode]
/0/1                  scsi0      storage     
/0/1/0.0.0            /dev/sda   disk        31GB USB Flash Drive
/0/1/0.0.0/0          /dev/sda   disk        31GB 
/0/2                  scsi1      storage     
/0/2/0.0.0            /dev/sdb   disk        2TB ST2000DM001-1CH1
/0/2/0.0.0/0          /dev/sdb   disk        2TB 
/0/3                  scsi2      storage     
/0/3/0.0.0            /dev/sdc   disk        3TB ST3000DM001-1CH1
/0/3/0.0.0/0          /dev/sdc   disk        3TB 
/0/4                  scsi3      storage     
/0/4/0.0.0            /dev/sdd   disk        3TB ST3000DM001-1CH1
/0/5                  scsi4      storage     
/0/5/0.0.0            /dev/sde   disk        2TB ST2000DL003-9VT1
/0/6                  scsi10     storage     

Array Devices:
    Disk0 sdd is a Parity drive named parity
    Disk1 sdf is a Data drive named disk1
    Disk2 sdb is a Data drive named disk2
    Disk3 sdc is a Data drive named disk3
    Disk4 sde is a Data drive named disk4
    Disk5 sdg is a Data drive named disk5
    Disk6 sdl is a Data drive named disk6
    Disk7 sdh is a Data drive named disk7
    Disk8 sdi is a Data drive named disk8
    Disk9 sdj is a Data drive named disk9
    Disk10 sdk is a Data drive named disk10
    Disk11 sdm is a Data drive named disk11

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:        3961340      146748     3415092      356832      399500     3254348
Low:        3961340      546248     3415092
High:             0           0           0
Swap:             0           0           0


                      *** END OF REPORT ***

One suggestion: do a todos (Unix-to-DOS line-ending conversion) on the resulting txt file, to make it easy for people to read.
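
Something along these lines would do it (a sketch only: the report filename is illustrative, and todos comes from the tofrodos package, which may not be on a stock unRAID install):

    # give the report DOS (CRLF) line endings so Windows Notepad renders it cleanly;
    # pick one of the two -- running both would double the carriage returns
    todos TunablesReport.txt                # if todos/tofrodos is available
    sed -i 's/$/\r/' TunablesReport.txt     # GNU sed fallback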

 

Also, not sure if it's been reported yet, but if you save the values, it's saving null values, which kinda messes things up:

 

      Current in disk.cfg   |      New Setting
    -----------------------------------------------
     md_num_stripes="1280"  | md_num_stripes="0"
     md_sync_window="384"   | md_sync_window="0"
     md_sync_thresh="192"   | md_sync_thresh=""
     nr_requests="128"      | nr_requests="8"

Link to comment

Random idea for this script... since you pretty much have to run it in screen (or on the console) maybe you could detect when it is not in screen and display a warning or even force it to load in screen (assuming screen is installed)?

 

I found some potentially useful code here:

  https://unix.stackexchange.com/questions/162133/run-script-in-a-screen
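
Along the lines of that link, a minimal sketch (assuming GNU screen is installed; the session name is arbitrary):

    # relaunch inside a screen session when not already in one; $STY is set by
    # GNU screen for processes it spawns, so an empty $STY means we're outside it
    if [ -z "$STY" ] && command -v screen >/dev/null 2>&1; then
        echo "Not inside screen -- relaunching so the test survives a disconnect."
        exec screen -S tunables "$0" "$@"
    fi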

 

Awesome ideas, thanks ljm42!

 

Quick question:  On unRAID v5, I used unMenu to handle installing my extras like Screen.  Now on 6.x, I don't use unMenu anymore.  I looked for a plugin to install screen for me, but didn't see one. 

 

Did I just miss it?  Any guidance?

 

Thanks,

Paul

Link to comment

Normal test completed.

 


       unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)

        Tunables Report produced Thu Aug 25 20:22:39 CDT 2016

                         Run on server: nas

                  Normal Automatic Parity Sync Test


Current Values:  md_num_stripes=768, md_sync_window=384, md_sync_thresh=50
                 Global nr_requests=128
                    sdo nr_requests=128
                    sdn nr_requests=128
                    sdm nr_requests=128
                    sdl nr_requests=128
                    sdh nr_requests=128
                    sdk nr_requests=128
                    sdj nr_requests=128
                    sdi nr_requests=128
                    sdg nr_requests=128
                    sdf nr_requests=128
                    sde nr_requests=128
                    sdd nr_requests=128
                    sdc nr_requests=128


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s 
-------------------------------------------------------
   1  |  43 |    768  |   384  | 128  |    50  | 129.7 

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 10min Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1536    |     768     |     128     |      767    | 138.6 MB/s 
  2   |     1536    |     768     |     128     |      384    | 130.5 MB/s 
  3   |     1536    |     768     |       8     |      767    |  90.1 MB/s 
  4   |     1536    |     768     |       8     |      384    | 118.4 MB/s 

Fastest vals were nr_reqs=128 and sync_thresh=99% of sync_window at 138.6 MB/s

This nr_requests value will be used for the next test.


--- FULLY AUTOMATIC TEST PASS 1a (Rough - 13 Sample Points @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  43 |    768  |   384  | 128  |   383  | 120.3 |   192  |  90.2 
   2  |  51 |    896  |   448  | 128  |   447  |  80.7 |   224  | 104.1 
   3  |  58 |   1024  |   512  | 128  |   511  | 127.2 |   256  | 144.5 
   4  |  65 |   1152  |   576  | 128  |   575  | 122.7 |   288  |  92.6 
   5  |  73 |   1280  |   640  | 128  |   639  |  87.5 |   320  |  78.1 
   6  |  80 |   1408  |   704  | 128  |   703  | 124.2 |   352  | 125.1 
   7  |  87 |   1536  |   768  | 128  |   767  | 101.1 |   384  |  88.0 
   8  |  95 |   1664  |   832  | 128  |   831  |  61.8 |   416  |  39.6 
   9  | 102 |   1792  |   896  | 128  |   895  | 100.2 |   448  |  84.8 
  10  | 109 |   1920  |   960  | 128  |   959  |  82.6 |   480  |  80.0 
  11  | 117 |   2048  |  1024  | 128  |  1023  |  78.6 |   512  |  81.2 
  12  | 124 |   2176  |  1088  | 128  |  1087  |  80.3 |   544  |  86.8 
  13  | 131 |   2304  |  1152  | 128  |  1151  |  83.4 |   576  |  85.5 

--- Targeting Fastest Result of md_sync_window 512 bytes for Final Pass ---


--- FULLY AUTOMATIC nr_requests TEST 2 (4 Sample Points @ 10min Duration)---

Test | num_stripes | sync_window | nr_requests | sync_thresh |   Speed 
---------------------------------------------------------------------------
  1   |     1024    |     512     |     128     |      511    |  83.2 MB/s 
  2   |     1024    |     512     |     128     |      256    | 121.3 MB/s 
  3   |     1024    |     512     |       8     |      511    | 101.7 MB/s 
  4   |     1024    |     512     |       8     |      256    | 123.3 MB/s 

Fastest vals were nr_reqs=8 and sync_thresh=50% of sync_window at 123.3 MB/s

This nr_requests value will be used for the next test.



--- FULLY AUTOMATIC TEST PASS 2 (Fine - 33 Sample Points @ 5min Duration)---

Test | RAM | stripes | window | reqs | thresh |  MB/s | thresh |  MB/s 
------------------------------------------------------------------------
   1  |  43 |    768  |   384  |   8  |   383  |  79.0 |   192  |  71.7 
   2  |  44 |    784  |   392  |   8  |   391  |  95.2 |   196  | 135.2 
   3  |  45 |    800  |   400  |   8  |   399  |  92.6 |   200  |  75.5 
   4  |  46 |    816  |   408  |   8  |   407  | 101.6 |   204  | 125.9 
   5  |  47 |    832  |   416  |   8  |   415  | 113.7 |   208  | 115.3 
   6  |  48 |    848  |   424  |   8  |   423  |  77.3 |   212  |  63.8 
   7  |  49 |    864  |   432  |   8  |   431  |  98.6 |   216  |  97.3 
   8  |  50 |    880  |   440  |   8  |   439  |  90.9 |   220  |  66.8 
   9  |  51 |    896  |   448  |   8  |   447  |  70.9 |   224  |  41.6 
  10  |  52 |    912  |   456  |   8  |   455  |  61.8 |   228  |  48.7 
  11  |  53 |    928  |   464  |   8  |   463  |  76.8 |   232  |  44.1 
  12  |  54 |    944  |   472  |   8  |   471  |  71.3 |   236  |  68.3 
  13  |  54 |    960  |   480  |   8  |   479  |  77.9 |   240  |  54.1 
  14  |  55 |    976  |   488  |   8  |   487  |  70.6 |   244  |  56.5 
  15  |  56 |    992  |   496  |   8  |   495  |  92.0 |   248  |  83.1 
  16  |  57 |   1008  |   504  |   8  |   503  |  88.0 |   252  |  58.2 
  17  |  58 |   1024  |   512  |   8  |   511  |  77.3 |   256  |  70.7 
  18  |  59 |   1040  |   520  |   8  |   519  |  98.8 |   260  | 139.4 
  19  |  60 |   1056  |   528  |   8  |   527  |  97.0 |   264  |  74.2 
  20  |  61 |   1072  |   536  |   8  |   535  |  81.2 |   268  | 114.5 
  21  |  62 |   1088  |   544  |   8  |   543  |  77.8 |   272  |  87.9 
  22  |  63 |   1104  |   552  |   8  |   551  |  69.7 |   276  |  68.3 
  23  |  64 |   1120  |   560  |   8  |   559  |  68.9 |   280  |  74.4 
  24  |  64 |   1136  |   568  |   8  |   567  |  72.9 |   284  |  70.1 
  25  |  65 |   1152  |   576  |   8  |   575  |  72.6 |   288  |  62.3 
  26  |  66 |   1168  |   584  |   8  |   583  |  73.4 |   292  |  67.3 
  27  |  67 |   1184  |   592  |   8  |   591  |  81.2 |   296  |  83.4 
  28  |  68 |   1200  |   600  |   8  |   599  |  81.1 |   300  |  79.1 
  29  |  69 |   1216  |   608  |   8  |   607  |  81.6 |   304  |  83.9 
  30  |  70 |   1232  |   616  |   8  |   615  |  81.2 |   308  |  81.5 
  31  |  71 |   1248  |   624  |   8  |   623  |  68.7 |   312  |  81.6 
  32  |  72 |   1264  |   632  |   8  |   631  |  78.5 |   316  |  77.9 
  33  |  73 |   1280  |   640  |   8  |   639  |  83.6 |   320  |  79.9 

The results below do NOT include the Baseline test of current values.

The Fastest Sync Speed tested was md_sync_window=520 at 139.4 MB/s
     Tunable (md_num_stripes): 1040
     Tunable (md_sync_window): 520
     Tunable (md_sync_thresh): 260
     Tunable (nr_requests): 8
This will consume 59 MB with md_num_stripes=1040, 2x md_sync_window.
This is 16MB more than your current utilization of 43MB.

The Thriftiest Sync Speed tested was md_sync_window=392 at 135.2 MB/s
     Tunable (md_num_stripes): 784
     Tunable (md_sync_window): 392
     Tunable (md_sync_thresh): 196
     Tunable (nr_requests): 8
This will consume 44 MB with md_num_stripes=784, 2x md_sync_window.
This is 1MB more than your current utilization of 43MB.

The Recommended Sync Speed is md_sync_window=520 at 139.4 MB/s
     Tunable (md_num_stripes): 1040
     Tunable (md_sync_window): 520
     Tunable (md_sync_thresh): 260
     Tunable (nr_requests): 8
This will consume 59 MB with md_num_stripes=1040, 2x md_sync_window.
This is 16MB more than your current utilization of 43MB.

NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 10 Hrs 8 Min 38 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.


System Info:  nas
              unRAID version 6.2.0-rc4
                   md_num_stripes=768
                   md_sync_window=384
                   md_sync_thresh=50
                   nr_requests=128 (Global Setting)
                   sbNumDisks=14
              CPU: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
  Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
              RAM: 32GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path             Device     Class      Description
======================================================
/0/100/7.1                      storage    82371AB/EB/MB PIIX4 IDE
/0/100/15/0          scsi3      storage    SAS1068 PCI-X Fusion-MPT SAS
/0/100/15/0/0.0.0    /dev/sda   disk       209MB Virtual disk
/0/100/16/0          scsi2      storage    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
/0/100/16/0/0.0.0    /dev/sdb   disk       512GB Crucial_CT512M55
/0/100/16/0/0.1.0    /dev/sdc   disk       4TB HGST HDN724040AL
/0/100/16/0/0.2.0    /dev/sdd   disk       4TB Hitachi HDS72404
/0/100/16/0/0.3.0    /dev/sde   disk       4TB HGST HDN724040AL
/0/100/16/0/0.4.0    /dev/sdf   disk       4TB Hitachi HDS72404
/0/100/16/0/0.5.0    /dev/sdg   disk       4TB HGST HDS724040AL
/0/100/17/0          scsi4      storage    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
/0/100/17/0/0.3.0    /dev/sdk   disk       4TB HGST HDS724040AL
/0/100/17/0/0.4.0    /dev/sdl   disk       4TB HGST HDS724040AL
/0/100/17/0/0.5.0    /dev/sdm   disk       4TB HGST HDS724040AL
/0/100/17/0/0.6.0    /dev/sdn   disk       4TB HGST HDS724040AL
/0/100/17/0/0.7.0    /dev/sdo   disk       4TB Hitachi HDS72404
/0/100/17/0/0.0.0    /dev/sdh   disk       4TB HGST HDS724040AL
/0/100/17/0/0.1.0    /dev/sdi   disk       4TB Hitachi HDS72404
/0/100/17/0/0.2.0    /dev/sdj   disk       4TB HGST HDS724040AL
/0/1                 scsi5      storage    
/0/1/0.0.0           /dev/sdp   disk       15GB Reader     SD/MS
/0/1/0.0.0/0         /dev/sdp   disk       15GB 
/0/1/0.0.1           /dev/sdq   disk       Reader  MicSD/M2
/0/1/0.0.1/0         /dev/sdq   disk       

Array Devices:
    Disk0 sdo is a Parity drive named parity
    Disk1 sdn is a Data drive named disk1
    Disk2 sdm is a Data drive named disk2
    Disk3 sdl is a Data drive named disk3
    Disk4 sdh is a Data drive named disk4
    Disk5 sdk is a Data drive named disk5
    Disk6 sdj is a Data drive named disk6
    Disk7 sdi is a Data drive named disk7
    Disk8 sdg is a Data drive named disk8
    Disk9 sdf is a Data drive named disk9
    Disk10 sde is a Data drive named disk10
    Disk11 sdd is a Data drive named disk11
    Disk12 sdc is a Data drive named disk12

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       32950372     4713036    25198628      415548     3038708    27065324
Low:       32950372     7751744    25198628
High:             0           0           0
Swap:             0           0           0


                      *** END OF REPORT ***


Link to comment
