unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



How much lowmem does your system report?

free -l

There's your limit.  But keep in mind that the system needs a good chunk of that lowmem for other things, so things will start to break much earlier than that limit.

 

Total lowmem?  867184, and that's after a fresh boot.  unRAID 5.0 + unMenu + screen + apcupsd + pciutils + dmidecode + bwm-ng

 

root@Tower:~# free -l
             total       used       free     shared    buffers     cached
Mem:       4043520    1132884    2910636          0     335060     247944
Low:        867184     858224       8960
High:      3176336     274660    2901676
-/+ buffers/cache:     549880    3493640
Swap:            0          0          0

 

Is total lowmem a value I should check, and should I use it to limit how far the tests go?  If so, how much spare lowmem should I leave, and would the formula be something like this?

 

    $SpareLowMem > ($TotLowMem - ( $md_num_stripes * 4 * $HighestDriveNum ))
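
In script terms, I'm picturing something like this (just a rough, untested sketch; the values assigned at the top are made-up examples, not real settings):

# Example values only, for illustration
md_num_stripes=2816
HighestDriveNum=18

# Total lowmem in KB, taken from the "Low:" row of free -l
TotLowMem=$(free -l | awk '/^Low:/ {print $2}')

# Approximate stripe allocation in KB: ~4KB per stripe per disk, +1 for parity
StripeMemKB=$(( md_num_stripes * 4 * (HighestDriveNum + 1) ))

SpareLowMem=$(( TotLowMem - StripeMemKB ))
echo "Projected spare lowmem: ${SpareLowMem} KB"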

 

Link to comment

Thanks for creating this, I've been following this thread with interest  ;)

 

Just in case it proves useful to you, here is my output after running the FULLAUTO option. All my drives are 3TB WD Reds, except 1 which is a Hitachi 3TB.  They are all connected to a M1015:

 

Thanks for sharing jack0w.  Looks like your server is also immune to influence from the md_sync_window.

 

I really need to add 384 back into the FULLAUTO routine so I can see if there's any improvement over stock.  I was skipping it based upon the assumption that higher values are faster, but so many results have proven that assumption wrong.

 

-Paul

 

I'll happily re-test once you put up the next version!  Thanks for creating and updating this for the good of the community, I appreciate it, and I'm sure plenty of others do too :)

Link to comment

Your total lowmem won't be any higher, even if you install 10 times more physical RAM. 

 

I understand the lowmem is static after boot, based upon the configuration you booted with.  That's not what I was asking.

 

What % of lowmem should I draw a line at for allocating memory to stripes?  Obviously if I allocated 100%, we'd have problems.  I could allocate up to 50% and leave a nice buffer, or I could be more aggressive.

 

I understand that there will be other processes competing for the same memory, so there's no hard and fast rule.  I'm just looking for general guidance like "don't allocate more than 80% of total lowmem to stripes".  This would primarily be to stop a FULLAUTO test at lower stripe values in case a particular server booted with significantly less lowmem (for whatever reason).

Link to comment

Your total lowmem won't be any higher, even if you install 10 times more physical RAM. 

 

I understand the lowmem is static after boot, based upon the configuration you booted with.  That's not what I was asking.

 

What % of lowmem should I draw a line at for allocating memory to stripes?  Obviously if I allocated 100%, we'd have problems.  I could allocate up to 50% and leave a nice buffer, or I could be more aggressive.

 

I understand that there will be other processes competing for the same memory, so there's no hard and fast rule.  I'm just looking for general guidance like "don't allocate more than 80% of total lowmem to stripes".  This would primarily be to stop a FULLAUTO test at lower stripe values in case a particular server booted with significantly less lowmem (for whatever reason).

 

Whatever you allocate leaves that much less for any addons.  And since a swap partition or swap file is not enabled by default, being too aggressive will cause people problems once they re-enable their addons.  Add one of those notes or disclaimers to the first or second post and let them decide.
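
If you did want a hard stop in FULLAUTO, it could be something as simple as this inside the test loop (very rough sketch; the 80% cut-off, the test points, and the variable names are only placeholders, not recommendations):

# Example values only
HighestDriveNum=18
MaxPct=80

TotLowMem=$(free -l | awk '/^Low:/ {print $2}')   # total lowmem in KB

for test_num_stripes in 1408 2176 3328 4736 6528; do   # example test points only
    # Approximate stripe allocation in KB: ~4KB per stripe per disk, +1 for parity
    StripeMemKB=$(( test_num_stripes * 4 * (HighestDriveNum + 1) ))
    if (( StripeMemKB * 100 > TotLowMem * MaxPct )); then
        echo "Stopping at ${test_num_stripes} stripes: over ${MaxPct}% of lowmem"
        break
    fi
    # ... run the timed test for this value here ...
done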

Link to comment

Stock Settings vs Automatic Mode "Best Bang" Settings (2 Servers)

60MB/s to 113MB/s

53MB/s to 83MB/s

 

I am a bit worried about low memory though, since the new values are much higher than my old settings. During parity sync I am showing this for free low memory:

Server 1: 172024 which was set to 4736/2176/2176

Server 2: 544440 which was set to 3328/1536/1536

 

I should be fine then? This is while running my only 4 plugins: SickBeard, SABnzbd, Cache Dirs, and the new SF-based WebGUI. Amazing results, better than expected.

Link to comment
Since unRAID with modern disks and interfaces can easily saturate 1Gbps when reading, the only thing that matters is write speed. Before performing the unraid-tunables test I tested Weebotech's settings found here: http://lime-technology.com/forum/index.php?topic=29009.msg260640#msg260640. On the new system I was using stock settings and getting ~40MB/s write speed over the network and locally. When I applied the new settings the write speed jumped to ~50MB/s. But the speed is fluctuating, sometimes going over 80MB/s and sometimes dropping below 30MB/s. I will do some more experimenting this evening and report back.

 

Still have a couple pages to read, but were your test writes to the share or to the disk directly? I find that writing to the share is both slower and more variable in speed.

 

-John

Link to comment

Still have a couple pages to read, but were your test writes to the share or to the disk directly? I find that writing to the share is both slower and more variable in speed.

Those figures were for writing to disk shares. Going through user shares will give you some performance penalty. For me it does not matter since I'm one of those users who always write through disk shares  and only use user shares for read-only "publishing" to the clients (Plex Media Centers and Plex Home Theaters in my case). This works very nicely if you are mostly filling disks one by one. As a bonus it prevents any accidental modification of the files by the clients.

Link to comment

Ran the script twice. First time was after a couple days of use, 2nd time was fresh after a reboot.

ETA: There's up to a 5 MB/s improvement after a reboot, and I did get a different end result.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  75.8 MB/s 
   2  |    1536     |     768     |     640     |  58.1 MB/s 
   3  |    1664     |     768     |     768     |  62.3 MB/s 
   4  |    1920     |     896     |     896     |  58.0 MB/s 
   5  |    2176     |    1024     |    1024     |  59.6 MB/s 
   6  |    2560     |    1152     |    1152     |  54.9 MB/s 
   7  |    2816     |    1280     |    1280     |  55.6 MB/s 
   8  |    3072     |    1408     |    1408     |  57.6 MB/s 
   9  |    3328     |    1536     |    1536     |  53.1 MB/s 
  10  |    3584     |    1664     |    1664     |  48.1 MB/s 
  11  |    3968     |    1792     |    1792     |  51.3 MB/s 
  12  |    4224     |    1920     |    1920     |  55.9 MB/s 
  13  |    4480     |    2048     |    2048     |  48.0 MB/s 
  14  |    4736     |    2176     |    2176     |  53.2 MB/s 
  15  |    5120     |    2304     |    2304     |  51.8 MB/s 
  16  |    5376     |    2432     |    2432     |  55.1 MB/s 
  17  |    5632     |    2560     |    2560     |  52.0 MB/s 
  18  |    5888     |    2688     |    2688     |  50.6 MB/s 
  19  |    6144     |    2816     |    2816     |  42.9 MB/s 
  20  |    6528     |    2944     |    2944     |  58.6 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Special Pass ---
--- FULLY AUTOMATIC TEST PASS 1b (Rough - 4 Sample Points @ 3min Duration)---
  21  |    896     |     768     |     128     |  83.2 MB/s 
  22  |    1024     |     768     |     256     |  78.6 MB/s 
  23  |    1280     |     768     |     384     |  78.1 MB/s 
  24  |    1408     |     768     |     512     |  71.8 MB/s 
--- Targeting Fastest Result of md_sync_window 128 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  25  |    856     |     768     |     8     |  72.8 MB/s 
  26  |    864     |     768     |     16     |  68.1 MB/s 
  27  |    880     |     768     |     24     |  68.7 MB/s 
  28  |    888     |     768     |     32     |  74.7 MB/s 
  29  |    896     |     768     |     40     |  74.9 MB/s 
  30  |    904     |     768     |     48     |  78.4 MB/s 
  31  |    912     |     768     |     56     |  78.9 MB/s 
  32  |    920     |     768     |     64     |  82.4 MB/s 
  33  |    928     |     768     |     72     |  80.4 MB/s 
  34  |    936     |     768     |     80     |  79.8 MB/s 
  35  |    944     |     768     |     88     |  82.0 MB/s 
  36  |    960     |     768     |     96     |  81.8 MB/s 
  37  |    968     |     768     |     104     |  81.3 MB/s 
  38  |    976     |     768     |     112     |  81.7 MB/s 
  39  |    984     |     768     |     120     |  80.5 MB/s 
  40  |    992     |     768     |     128     |  82.0 MB/s 

Completed: 2 Hrs 21 Min 56 Sec.

Best Bang for the Buck: Test 1 with a speed of 75.8 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 55MB of RAM on your hardware.


Unthrottled values for your server came from Test 32 with a speed of 82.4 MB/s

     Tunable (md_num_stripes): 920
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 64

These settings will consume 35MB of RAM on your hardware.
This is -85MB less than your current utilization of 120MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     |  75.6 MB/s 
   2  |    1536     |     768     |     640     |  66.2 MB/s 
   3  |    1664     |     768     |     768     |  60.4 MB/s 
   4  |    1920     |     896     |     896     |  59.0 MB/s 
   5  |    2176     |    1024     |    1024     |  56.7 MB/s 
   6  |    2560     |    1152     |    1152     |  53.6 MB/s 
   7  |    2816     |    1280     |    1280     |  56.1 MB/s 
   8  |    3072     |    1408     |    1408     |  59.4 MB/s 
   9  |    3328     |    1536     |    1536     |  54.4 MB/s 
  10  |    3584     |    1664     |    1664     |  54.3 MB/s 
  11  |    3968     |    1792     |    1792     |  51.5 MB/s 
  12  |    4224     |    1920     |    1920     |  58.8 MB/s 
  13  |    4480     |    2048     |    2048     |  55.8 MB/s 
  14  |    4736     |    2176     |    2176     |  53.2 MB/s 
  15  |    5120     |    2304     |    2304     |  60.7 MB/s 
  16  |    5376     |    2432     |    2432     |  46.2 MB/s 
  17  |    5632     |    2560     |    2560     |  54.1 MB/s 
  18  |    5888     |    2688     |    2688     |  53.9 MB/s 
  19  |    6144     |    2816     |    2816     |  52.9 MB/s 
  20  |    6528     |    2944     |    2944     |  67.3 MB/s 
--- Targeting Fastest Result of md_sync_window 512 bytes for Special Pass ---
--- FULLY AUTOMATIC TEST PASS 1b (Rough - 4 Sample Points @ 3min Duration)---
  21  |    896     |     768     |     128     |  81.2 MB/s 
  22  |    1024     |     768     |     256     |  83.3 MB/s 
  23  |    1280     |     768     |     384     |  83.3 MB/s 
  24  |    1408     |     768     |     512     |  78.4 MB/s 
--- Targeting Fastest Result of md_sync_window 256 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  25  |    1000     |     768     |     136     |  85.4 MB/s 
  26  |    1008     |     768     |     144     |  83.1 MB/s 
  27  |    1016     |     768     |     152     |  82.1 MB/s 
  28  |    1024     |     768     |     160     |  80.4 MB/s 
  29  |    1040     |     768     |     168     |  79.2 MB/s 
  30  |    1048     |     768     |     176     |  80.8 MB/s 
  31  |    1056     |     768     |     184     |  80.2 MB/s 
  32  |    1064     |     768     |     192     |  80.7 MB/s 
  33  |    1072     |     768     |     200     |  80.3 MB/s 
  34  |    1080     |     768     |     208     |  83.6 MB/s 
  35  |    1088     |     768     |     216     |  81.0 MB/s 
  36  |    1096     |     768     |     224     |  82.1 MB/s 
  37  |    1104     |     768     |     232     |  83.3 MB/s 
  38  |    1120     |     768     |     240     |  82.9 MB/s 
  39  |    1128     |     768     |     248     |  81.7 MB/s 
  40  |    1136     |     768     |     256     |  85.3 MB/s 

Completed: 2 Hrs 21 Min 42 Sec.

Best Bang for the Buck: Test 1 with a speed of 75.6 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 55MB of RAM on your hardware.


Unthrottled values for your server came from Test 25 with a speed of 85.4 MB/s

     Tunable (md_num_stripes): 1000
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 136

These settings will consume 39MB of RAM on your hardware.
This is -81MB less than your current utilization of 120MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Link to comment

How many drives do you have?

How many files do you have?

 

If you are using cache_dirs, then it could hit some limit.

 

Where I would see issues is using the locate package.

Locate would go through the whole system tree and capture every file.

 

Everything would be fine, until I ran that or did a massive rsync on my backup directory.

I had 17 drives, but millions of files.

I have 15 drives, with the last drive installed in position Disk18.

 

I haven't counted how many files I have in a long time, but estimate it to be less than a million.

 

I've never performed an rsync or a locate.  The majority of my data consists of Blu-ray and DVD ISOs, MP3s, and large backup files produced by backup programs.  So my total number of files is somewhat limited, and I have very little use for locate or rsync.

 

Our usage models are very different.

Link to comment

These 100MB/s+ speeds are pissing me off. I need to figure out what my bottleneck is!

My guess is your Areca card... try using the parity disk on the SAS2LP as well.

Why use this RAID card anyway?

 

Well, just for grins, I moved my parity to a 6Gbps on-board SATA port and I'm not very happy with the results.  I lost 20MB/s on my array write speeds.  I'm down to 40MB/s again.

 

I thought I would be installing my M1015 this weekend, but my existing cables are about an inch too short! Looks like it will be Friday before I'm able to migrate to the M1015s.  In the meantime, I've been able to pull out one 1.5TB drive and I'm working on the other two this week.  I replaced one with a 4TB drive and the other two are just being distributed to other drives in the array for now.

 

If the new config doesn't give me at least 60MB/s sustained array writes, I will be re-installing my Areca, or getting a new Areca that supports 6Gbps.

Link to comment

Stock Settings vs Automatic Mode "Best Bang" Settings (2 Servers)

60MB/s to 113MB/s

53MB/s to 83MB/s

 

I am a bit worried about low memory though, since the new values are much higher than my old settings. During parity sync I am showing this for free low memory:

Server 1: 172024 which was set to 4736/2176/2176

Server 2: 544440 which was set to 3328/1536/1536

 

I should be fine then? This is while running my only 4 plugins: SickBeard, SABnzbd, Cache Dirs, and the new SF-based WebGUI. Amazing results, better than expected.

 

Hey tyrindor, those are some great looking results.  I see you have six of the AOC-SAS2LP-MV8 controller cards, which respond extremely well to adjusting these settings.

 

Your free lowmem is actually much higher than what I typically see.  In my post here I showed only 8MB of free lowmem:  http://lime-technology.com/forum/index.php?topic=29009.msg260911#msg260911

 

In my experience, stock unRAID has been very stable when running short of lowmem, and it seems to swap/page into the highmem area effectively enough to keep my server from throwing errors, though it can get very slow if I push it too far.

 

I know my experience is not shared by all, and it seems that users who install certain 3rd party plug-ins and add-ons most commonly have lowmem issues.

 

I think we all need to be concerned about lowmem, especially when running add-ons.  Adjusting these md_* tunables increases your risk of having lowmem issues, and unfortunately I don't believe that there is any way I can predict issues or even gauge the likelihood of issues based upon your server build.

 

This is why I came up with the idea of analyzing the results to suggest 'Best Bang' values, trying to straddle that middle ground between ultimate performance and judicious memory use.

 

Probably the best advice I can give is to keep an eye on your syslog, especially after making adjustments, for any types of errors.

 

-Paul

Link to comment

Ran the script twice. First time was after a couple days of use, 2nd time was fresh after a reboot.

ETA: There's up to a 5 MB/s improvement after a reboot, and I did get a different end result.

 

Hey John,

 

Were both of these tests after your recent drive rebuild, when you upgraded your 32GB SSD?  I was curious if the upgrade was going to affect your server's preference for lower md_sync_window values on parity checks.

 

I find it interesting that your parity rebuilds increased in speed with higher md_sync_window values, and your parity checks increased in speed with lower md_sync_window values.  Your Bizarro server continues to baffle me.

 

You might even find it helpful to have two sets of values that you switch between:  one for everyday use and parity checks, and another for parity rebuilds.

 

I still see a bit of inconsistency in your results, and I'm not sure that this was absolutely a result of your reboot.  I expect that each time you run your tests you would get slightly different answers.  I too see slightly different results with my tests (and boy have I done a lot of them!), but the peaks always fall in the neighborhood of 2500-2900.

 

Maybe we should view these results as good enough to get you into the right ballpark, accepting that there is no single 'perfect value'.

 

-Paul

Link to comment

Well, just for grins, I moved my parity to a 6Gbps on-board SATA port and I'm not very happy with the results.  I lost 20MB/s on my array write speeds.  I'm down to 40MB/s again.

 

Sounds like your Areca is not the problem.  60 MB/s write speeds are phenomenal for an unRAID server.  Hopefully we can tune your write speeds soon for even better performance!

Link to comment

Well, just for grins, I moved my parity to a 6Gbps on-board SATA port and I'm not very happy with the results.  I lost 20MB/s on my array write speeds.  I'm down to 40MB/s again.

 

Sounds like your Areca is not the problem.  60 MB/s write speeds are phenomenal for an unRAID server.  Hopefully we can tune your write speeds soon for even better performance!

 

Yeah, not much sense in running your script right now. I'll wait until I get my "final" config.

 

All controllers will be 6Gbps, with no drives smaller than 2TB. The cache drive has been upgraded to a 180GB Intel SSD. Hopefully we will have the cache pool shortly; I have another 180GB drive ready to add to it for redundancy.

 

Once I get everything working how I want, I will then move it to ESXi.  I have two ESXi servers already, with a fibre channel SAN. It would be nice to have one more host for failovers.

Link to comment

I just wanted to give a quick progress update and also say thank you to everyone who has been posting their results.

 

Based upon the various results, I'm continuing to tweak the FULLAUTO routine in the attempt to make it better at finding the best options.  My server produces a smooth bell curve, so basing the logic primarily on my server has made the routine less than ideal for other servers.  Continue posting your test results, as they help me refine the FULLAUTO test logic to handle more scenarios.

 

The improved routine will take longer to run.  Unfortunately, I don't think there's any way around that, as shorter test intervals produce inaccurate results, and a smarter routine needs to check more test points.

 

I already have the new write test coded, and will be extending it soon to perform read tests as well.  Most of the work ahead of me is designing the menus and run-time options.

 

I hope to have version 3.0 out this week.

 

-Paul

Link to comment

Here's the output from my run:

 

Tunables Report from  unRAID Tunables Tester v2.2 by Pauven

NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with unRAID,
      especially if you have any add-ons or plug-ins installed.

Test | num_stripes | write_limit | sync_window |   Speed 
--- FULLY AUTOMATIC TEST PASS 1 (Rough - 20 Sample Points @ 3min Duration)---
   1  |    1408     |     768     |     512     | 112.8 MB/s 
   2  |    1536     |     768     |     640     | 113.0 MB/s 
   3  |    1664     |     768     |     768     | 113.3 MB/s 
   4  |    1920     |     896     |     896     | 113.4 MB/s 
   5  |    2176     |    1024     |    1024     | 113.7 MB/s 
   6  |    2560     |    1152     |    1152     | 114.0 MB/s 
   7  |    2816     |    1280     |    1280     | 114.0 MB/s 
   8  |    3072     |    1408     |    1408     | 114.0 MB/s 
   9  |    3328     |    1536     |    1536     | 114.1 MB/s 
  10  |    3584     |    1664     |    1664     | 114.0 MB/s 
  11  |    3968     |    1792     |    1792     | 114.1 MB/s 
  12  |    4224     |    1920     |    1920     | 114.1 MB/s 
  13  |    4480     |    2048     |    2048     | 114.2 MB/s 
  14  |    4736     |    2176     |    2176     | 114.2 MB/s 
  15  |    5120     |    2304     |    2304     | 114.2 MB/s 
  16  |    5376     |    2432     |    2432     | 114.2 MB/s 
  17  |    5632     |    2560     |    2560     | 114.2 MB/s 
  18  |    5888     |    2688     |    2688     | 114.1 MB/s 
  19  |    6144     |    2816     |    2816     | 114.1 MB/s 
  20  |    6528     |    2944     |    2944     | 114.2 MB/s 
--- Targeting Fastest Result of md_sync_window 2048 bytes for Final Pass ---
--- FULLY AUTOMATIC TEST PASS 2 (Final - 16 Sample Points @ 4min Duration)---
  21  |    4280     |    1928     |    1928     | 113.6 MB/s 
  22  |    4296     |    1936     |    1936     | 114.0 MB/s 
  23  |    4320     |    1944     |    1944     | 114.0 MB/s 
  24  |    4336     |    1952     |    1952     | 114.0 MB/s 
  25  |    4352     |    1960     |    1960     | 113.9 MB/s 
  26  |    4368     |    1968     |    1968     | 114.0 MB/s 
  27  |    4384     |    1976     |    1976     | 114.0 MB/s 
  28  |    4408     |    1984     |    1984     | 113.9 MB/s 
  29  |    4424     |    1992     |    1992     | 113.9 MB/s 
  30  |    4440     |    2000     |    2000     | 113.9 MB/s 
  31  |    4456     |    2008     |    2008     | 113.9 MB/s 
  32  |    4480     |    2016     |    2016     | 114.0 MB/s 
  33  |    4496     |    2024     |    2024     | 113.9 MB/s 
  34  |    4512     |    2032     |    2032     | 114.0 MB/s 
  35  |    4528     |    2040     |    2040     | 113.9 MB/s 
  36  |    4544     |    2048     |    2048     | 113.9 MB/s 

Completed: 2 Hrs 8 Min 26 Sec.

Best Bang for the Buck: Test 1 with a speed of 112.8 MB/s

     Tunable (md_num_stripes): 1408
     Tunable (md_write_limit): 768
     Tunable (md_sync_window): 512

These settings will consume 22MB of RAM on your hardware.


Unthrottled values for your server came from Test 22 with a speed of 114.0 MB/s

     Tunable (md_num_stripes): 4296
     Tunable (md_write_limit): 1936
     Tunable (md_sync_window): 1936

These settings will consume 67MB of RAM on your hardware.
This is -93MB less than your current utilization of 160MB.
NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Link to comment

In one server I have a Plus license, one parity disk, and two data disks in slots 2&3 (slot 1 is empty).

# grep allocating /var/log/syslog
Aug 24 11:22:53 Tower kernel: unraid: allocating 23780K for 1280 stripes (4 disks)

4 disks?  Probably highest slot number + parity.

Still, can't figure out how it comes up with that 23780K.

 

I just spent a few minutes working the math.  I had never looked at Tom's numbers before, and sure enough the numbers didn't add up.

 

I think you are right about the Highest Disk + 1 for the Parity, as I have 15 drives (highest in slot 18) and mine reports 19 disks.

 

Sep  2 14:57:57 Tower kernel: unraid: allocating 491584K for 6256 stripes (19 disks)

 

In your case, the math behaves as if there are 206 more stripes than reported (1486 instead of 1280).  In my case, 212 more (6468 instead of 6256).  Interesting that with the very different md_num_stripes values we have, the core variance is very similar, at about 209 stripes.

 

You have about 4.65K per stripe per disk of actual allocation, and I have about 4.14K.

 

Ah, I think I found it!  The math works if you use 4.6445 for the number of disks, and same for me if I use 19.6445 disks.  There's an extra 0.6445 disks in there.  So the revised formula is:

  • (Highest Disk + 1.6445) * 4096 * md_num_stripes

Tom sure has some wonky math going on...

Link to comment

So the revised formula is:

  • (Highest Disk + 1.6445) * 4096 * md_num_stripes

Tom sure has some wonky math going on...

 

Or, it may be simpler if we just think of it as adding an extra 2640 bytes per stripe:

  • (disks * 4096 * num_stripes) + (num_stripes * 2640)

... where:  disks = (highest slot used + the parity disk)

 

Or simpler:

  • (disks * 4096 + 2640) * num_stripes

( 4 * 4096 + 2640) * 1280 =  24350720 bytes =  23780K
(19 * 4096 + 2640) * 6256 = 503382784 bytes = 491584K

 

Wonky math indeed! :D

 

 

Link to comment

  • (disks * 4096 + 2640) * num_stripes

...

Wonky math indeed! :D

 

Good work Patilan, I think you figured it out!  8)

 

It appears Tom is allocating a smidgen of extra memory, for whatever reason only he knows... about 3MB with stock settings.  Probably thought we would never notice.    ;)

 

I like your new math much better; I can easily add that formula to my code.  Thanks!

Link to comment

Were both of these tests after your recent drive rebuild, when you upgraded your 32GB SSD?  I was curious if the upgrade was going to affect your server's preference for lower md_sync_window values on parity checks.

 

It was after rebuilding my array after swapping out the 32GB SSD with a 4TB Seagate.

 

-John

Link to comment

It appears Tom is allocating a smidgen of extra memory, for whatever reason only he knows... about 3MB with stock settings.  Probably thought we would never notice.    ;)

The "whatever reason" reason becomes clear by looking in "unraid.c":

 

A stripe is  (stripe_head + (num_disks * PAGE_SIZE))  bytes long.

 

In the current version, the size of structure  stripe_head  is 2640 bytes.

 

Memory used for all stripes [in KB] is:

memory = md_num_stripes * (sizeof(struct stripe_head) + (conf->num_disks * PAGE_SIZE)) / 1024;

...where:  conf->num_disks = sb->num_disks;

 

From Bash you could:

root@Tower:~# eval $(/root/mdcmd status | grep sbNumDisks)
root@Tower:~# echo $sbNumDisks
4

For me it's 4.  On your system, that will give 19.
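
So if you wanted to predict the syslog allocation from a script, something along these lines should get close (rough sketch only; the 2640 figure is just the current stripe_head size and could change in a future release, and md_num_stripes is assumed to already hold the current setting):

# Number of disks as the md driver counts them (highest slot used + parity)
eval $(/root/mdcmd status | grep sbNumDisks)

# Same calculation as unraid.c; md_num_stripes assumed to be set already (e.g. from disk.cfg)
STRIPE_HEAD=2640     # sizeof(struct stripe_head) in the current version
PAGE_SIZE=4096
MemKB=$(( md_num_stripes * (STRIPE_HEAD + sbNumDisks * PAGE_SIZE) / 1024 ))
echo "Predicted stripe allocation: ${MemKB}K"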

 


Link to comment

The "whatever reason" reason becomes clear by looking in "unraid.c":

 

A stripe is  (stripe_head + (num_disks * PAGE_SIZE))  bytes long.

 

In the current version, the size of structure  stripe_head  is 2640 bytes.

 

Memory used for all stripes [in KB] is:

memory = md_num_stripes * (sizeof(struct stripe_head) + (conf->num_disks * PAGE_SIZE)) / 1024;

...where:  conf->num_disks = sb->num_disks;

 

For the number of disks, from Bash you can:

root@Tower:~# eval $(/root/mdcmd status | grep sbNumDisks)
root@Tower:~# echo $sbNumDisks
4

For me it's 4.  On your system, that will give 19.

 

Works a charm!  Thanks!

Link to comment

Hey Patilan (or anyone else interested),

 

I have an easy challenge for you.

 

The following doesn't work correctly, as there is a carriage return in the file that survives the evaluation: 

eval $(cat /boot/config/disk.cfg | egrep "md_num_stripes")

 

To demonstrate, if I do this:

echo "md_sync_window=x"$md_sync_window"x"

 

I get this:

xD_sync_window=x2816

 

Instead of this:

md_sync_window=x2816x

 

Which is why I did this, which works but is rather convoluted:

md_sync_window=`cat /boot/config/disk.cfg | egrep "md_sync_window" | sed 's|.*"\([0-9]*\)".*|\1|'`

 

Any easy way to do an eval that also strips off the carriage return?

 

-Paul

Link to comment

A stripe is  (stripe_head + (num_disks * PAGE_SIZE))  bytes long.

 

In the current version, the size of structure  stripe_head  is 2640 bytes.

 

Is there a way in the system I can query the size of the structure stripe_head?  I'm not sure if you simply calculated that value or retrieved it from somewhere.

 

If retrievable, I would like to avoid hard coding the value in the utility.

Link to comment

Hey Patilan (or anyone else interested),

 

I have an easy challenge for you.

 

The following doesn't work correctly, as there is a carriage return in the file that survives the evaluation: 

eval $(cat /boot/config/disk.cfg | egrep "md_num_stripes")

 

To demonstrate, if I do this:

echo "md_sync_window=x"$md_sync_window"x"

 

I get this:

xD_sync_window=x2816

 

Instead of this:

md_sync_window=x2816x

 

Which is why I did this, which works but is rather convoluted:

md_sync_window=`cat /boot/config/disk.cfg | egrep "md_sync_window" | sed 's|.*"\([0-9]*\)".*|\1|'`

 

Any easy way to do an eval that also strips off the carriage return?

 

-Paul

 

Try this one.

 

fromdos < /boot/config/disk.cfg | egrep "md_num_stripes"

 

or

 

tr -d '\r' <  /boot/config/disk.cfg | egrep "md_num_stripes"
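
Or drop it straight into your eval, something like this (untested, but with the carriage return stripped before eval sees it, your earlier echo test should print md_sync_window=x2816x):

eval $(tr -d '\r' < /boot/config/disk.cfg | egrep "md_sync_window")
echo "md_sync_window=x${md_sync_window}x"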

 

Link to comment
