
JorgeB

Moderators
  • Posts: 62,356
  • Joined
  • Last visited
  • Days Won: 660

Everything posted by JorgeB

  1. Nice! One suggestion: if turbo write is disabled, enable it before starting; clearing can be up to 3 times faster. As for the progress info, there's an easy way for v6.2: if the script can check the current unRAID version, you can add status=progress to the dd command.
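A minimal sketch of the two suggestions above. Assumptions: unRAID's `mdcmd set md_write_method 1` is how turbo write is toggled, and dd comes from coreutils >= 8.24 (as shipped with unRAID v6.2), which adds status=progress. /dev/sdX stays a placeholder; /dev/null is used for TARGET so the sketch is harmless to run.

```shell
# Hypothetical clearing sketch -- the real preclear writes to the disk being
# cleared (e.g. /dev/sdX); /dev/null keeps this safe to run anywhere.
TARGET=/dev/null

# Turbo write on before the clear (only when mdcmd exists on this system):
if command -v mdcmd >/dev/null; then mdcmd set md_write_method 1; fi

# status=progress prints bytes written, elapsed time and current speed:
dd if=/dev/zero of="$TARGET" bs=1M count=64 status=progress

# Restore the default write method afterwards:
if command -v mdcmd >/dev/null; then mdcmd set md_write_method 0; fi
```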
  2. Thanks bonienl, looks cool, and certainly more useful than the reads/writes numbers that don't mean much. I also see some heavily fluctuating speeds, from ~170MB/s to over 400MB/s on my test server; maybe a higher sample time, if possible, would provide more accurate readings. Any chance you could add another toggle to show the total bytes read/written (reset when pressing clear statistics)? This could also be useful. P.S.: it doesn't work on my cache disk, is this expected?
  3. I see your point; I think it's better for the script to take a few hours more but make sure it finds the optimal values.
  4. 16 hours is not that long for a test that most people only need to run once, but I wonder if running the nr_requests test before or after test 2 wouldn't give similar results in less time, i.e., after test 1, test nr_requests 8/16/128 with the best result from test 1 and use the better result from then on; or, as an alternative, run it after test 2, so the last test would be done with a single nr_requests value.
  5. How about using the first test to find only sync_window and sync_thresh? It looks to me like with nr_requests at its default there's a better chance of finding the optimal sync_thresh. It also looks like the best sync_thresh is the same (or, in the case of your last test, practically the same) across the various nr_requests values, so after finding the optimal window and thresh values you could do a test on those changing only nr_requests. I believe this would be faster and provide better results than trying to find optimal values for all 3 settings at the same time.
  6. Agree, these settings may be optimal for some servers. While I didn't have much time this week and intend to do more testing with different controllers later, all results point to an optimal setting that is usually a little below sync_window; I'm just not sure if there is a set value, like -60, that is optimal for everything. Ideally the script would run a short test in the beginning to try and find it. I retested a single LSI 9211 with larger (and faster) SSDs in the hope of seeing better defined results, and while they are, the optimal thresh value changed from the previous tests (previous tests were done with 2 controllers at the same time, so maybe that's why the difference, but I don't have 16 of the largest SSDs to test with both again, and using only one controller with the smallest SSDs won't help either because results would be limited by their max speed in almost all tests).

Sync_window=2048

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |     128 |   2047 | 289.7MB/s
   4096 |   2048 |     128 |   2040 | 321.7MB/s
   4096 |   2048 |     128 |   2036 | 335.2MB/s
   4096 |   2048 |     128 |   2032 | 337.0MB/s
   4096 |   2048 |     128 |   2028 | 340.5MB/s
   4096 |   2048 |     128 |   2024 | 333.5MB/s
   4096 |   2048 |     128 |   2016 | 330.0MB/s
   4096 |   2048 |     128 |   1984 | 330.0MB/s
   4096 |   2048 |     128 |   1960 | 330.0MB/s
   4096 |   2048 |     128 |   1952 | 330.0MB/s
   4096 |   2048 |     128 |   1920 | 330.0MB/s
   4096 |   2048 |     128 |   1856 | 325.0MB/s
   4096 |   2048 |     128 |   1792 | 326.6MB/s
   4096 |   2048 |     128 |   1536 | 323.3MB/s
   4096 |   2048 |     128 |   1280 | 320.1MB/s
   4096 |   2048 |     128 |   1024 | 314.4MB/s

Same sync_window but nr_requests=8 for the 4 fastest results (like before, it looks like it doesn't make a big difference with LSI controllers):

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |       8 |   2036 | 337.0MB/s
   4096 |   2048 |       8 |   2032 | 340.5MB/s
   4096 |   2048 |       8 |   2028 | 340.5MB/s
   4096 |   2048 |       8 |   2024 | 335.2MB/s
Sync_window=1024 and nr_requests back to default:

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   2048 |   1024 |     128 |   1023 | 293.7MB/s
   2048 |   1024 |     128 |   1016 | 328.3MB/s
   2048 |   1024 |     128 |   1012 | 331.7MB/s
   2048 |   1024 |     128 |   1008 | 333.5MB/s
   2048 |   1024 |     128 |   1004 | 337.0MB/s
   2048 |   1024 |     128 |   1000 | 325.0MB/s
   2048 |   1024 |     128 |    996 | 316.9MB/s

Sync_window=3072

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   6144 |   3072 |     128 |   3071 | 295.0MB/s
   6144 |   3072 |     128 |   3064 | 321.7MB/s
   6144 |   3072 |     128 |   3056 | 335.2MB/s
   6144 |   3072 |     128 |   3052 | 337.0MB/s
   6144 |   3072 |     128 |   3048 | 333.5MB/s
   6144 |   3072 |     128 |   3040 | 333.5MB/s
   6144 |   3072 |     128 |   3032 | 331.7MB/s
   6144 |   3072 |     128 |   3024 | 331.7MB/s
   6144 |   3072 |     128 |   3016 | 326.6MB/s

Best results were always with thresh=sync_window-20; in the previous tests with 2 controllers the best setting for thresh was sync_window-60.
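One timed run with a window/thresh pair from the tables above can be applied with the unRAID mdcmd interface used throughout this thread, following the window-20 rule of thumb. A sketch; the mdcmd lines are printed rather than executed so it is safe to run outside unRAID (on a live server, drop the echo):

```shell
# Print the three tunable commands for one window/thresh combination.
# num_stripes follows the usual 2 x sync_window convention.
window=2048
echo "mdcmd set md_num_stripes $(( window * 2 ))"
echo "mdcmd set md_sync_window $window"
echo "mdcmd set md_sync_thresh $(( window - 20 ))"
```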
  7. I don't know what the upper limit is, but I tried up to 131072 and it works; I didn't go any higher. I doubt a higher number will help unRAID though, but only testing can confirm.
  8. Interesting results, looking forward to the normal test results. P.S.: are you sure nr_requests=1 works? You can check the current value after setting it to 1; for me it never goes lower than 4:

cat /sys/block/sdX/queue/nr_requests
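The set-then-verify check above can be wrapped in a small helper. This is a sketch: the queue file is passed as a parameter so the helper can be exercised on any path; for a real disk it is /sys/block/sdX/queue/nr_requests (sdX a placeholder). The read-back matters because the kernel silently clamps values below its minimum (4 in the tests above).

```shell
# Write the requested nr_requests value, then print what the kernel
# actually kept (which may differ if the value was clamped).
set_nr_requests() {   # usage: set_nr_requests <queue-file> <value>
    echo "$2" > "$1" || return 1
    cat "$1"          # effective value, after any kernel clamping
}

# e.g.  set_nr_requests /sys/block/sdX/queue/nr_requests 1
```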
  9. Requested values tests:

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |     128 |   2047 | 72.4MB/s
   4096 |   2048 |     128 |   2040 | 76.8MB/s
   4096 |   2048 |     128 |   2032 | 78.3MB/s
   4096 |   2048 |     128 |   2024 | 78.9MB/s
   4096 |   2048 |     128 |   2016 | 80.0MB/s
   4096 |   2048 |     128 |   1984 | 80.0MB/s
   4096 |   2048 |     128 |   1960 | 79.8MB/s
   4096 |   2048 |     128 |   1952 | 80.0MB/s
   4096 |   2048 |     128 |   1920 | 79.8MB/s
   4096 |   2048 |     128 |   1856 | 79.8MB/s
   4096 |   2048 |     128 |   1792 | 79.8MB/s
   4096 |   2048 |     128 |   1728 | 79.8MB/s
   4096 |   2048 |     128 |   1664 | 78.5MB/s
   4096 |   2048 |     128 |   1536 | 77.7MB/s
   4096 |   2048 |     128 |   1280 | 77.5MB/s
   4096 |   2048 |     128 |   1024 | 77.1MB/s

78.8 to 80.0MB/s is a single second difference in total time.

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |     128 |   1024 | 77.1MB/s
   4096 |   2048 |      64 |   1024 | 77.5MB/s
   4096 |   2048 |      32 |   1024 | 77.3MB/s
   4096 |   2048 |      16 |   1024 | 79.6MB/s
   4096 |   2048 |       8 |   1024 | 79.8MB/s
   4096 |   2048 |       4 |   1024 | 80.0MB/s
   4096 |   2048 |       1 |   1024 | ? MB/s

Although it can be set to 1 in unRAID, it will remain at 4; I believe that is the minimum possible setting.
  10. The script is not patched; don't forget you need to patch the one located in "/boot/config/plugins/preclear.disk".
  11. All my previous tests were done using 2 LSI 9211s (flashed H310s); I now did some tests using a SASLP. Since it's bandwidth challenged and a parity check will take more time, the differences should be more noticeable; it also responds differently to the tunable changes. Only thresh was changed to find the optimal values:

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |     128 |   2047 | 72.4MB/s
   4096 |   2048 |     128 |   2016 | 80.0MB/s
   4096 |   2048 |     128 |   1984 | 80.0MB/s
   4096 |   2048 |     128 |   1952 | 80.0MB/s
   4096 |   2048 |     128 |   1920 | 79.8MB/s
   4096 |   2048 |     128 |   1856 | 79.8MB/s
   4096 |   2048 |     128 |   1792 | 79.8MB/s
   4096 |   2048 |     128 |   1728 | 79.8MB/s
   4096 |   2048 |     128 |   1664 | 78.5MB/s
   4096 |   2048 |     128 |   1536 | 77.7MB/s
   4096 |   2048 |     128 |   1280 | 77.5MB/s
   4096 |   2048 |     128 |   1024 | 77.1MB/s

With a sync_window of 2048 there's a big range where it works very well, from ~1728 to ~2016, with an apparent sweet spot from ~1950 to ~2000, and like the LSI, neither sync_window-1 nor sync_window/2 provides the best results. Note also that with nr_requests=8 this controller always performs at optimal speed, making the thresh setting practically irrelevant. Of course, if this controller is used together with one that responds differently, the trick is to find the best values with both together. Using nr_requests=8 with the 2 slowest thresh values:

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |       8 |   2047 | 79.8MB/s
   4096 |   2048 |       8 |   1024 | 79.8MB/s

Next I'm going to test the SAS2LP, same chipset as your controller, but since I don't have a spare I'll have to use one from a server, so I'll do it as soon as I can; IIRC the results were similar to the SASLP but with much bigger differences.
  12. Exactly; although consistent, this test server is only good for a rough idea, a longer parity check would be better for fine tuning.
  13. stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2240 |       8 |   1960 | 206.6MB/s

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |       8 |   1984 | 207.9MB/s
   4096 |   2048 |       8 |   1968 | 207.9MB/s
   4096 |   2048 |       8 |   1952 | 209.3MB/s
   4096 |   2048 |       8 |   1920 | 207.9MB/s

Note that a check with the same settings is sometimes a second shorter or longer; because it's a very small array this makes a difference of a few MB/s, so when the results are very close they can practically be considered the same, e.g.:

Duration: 2 minutes, 32 seconds. Average speed: 210.6 MB/s
Duration: 2 minutes, 33 seconds. Average speed: 209.3 MB/s
Duration: 2 minutes, 34 seconds. Average speed: 207.9 MB/s

So with sync_window=2048, a sync_thresh from ~1900 to ~1990 gives very similar results.
  14. Here you go:

Test | stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   1 |    6400 |   1960 |       8 |   1960 | 177.9MB/s
   2 |    6400 |   1968 |       8 |   1960 | 205.2MB/s
   3 |    6400 |   2240 |       8 |   1960 | 206.6MB/s
   4 |    6400 |   2614 |       8 |   1960 | 203.9MB/s
   5 |    6400 |   3920 |       8 |   1960 | 203.9MB/s
   6 |    6400 |   5880 |       8 |   1960 | 202.6MB/s
  15. num_stripes=4096: 210.6
num_stripes=2096: 209.3

You're right, the difference represents only one second of total time, so we can consider the speeds the same.
  16. After some more testing it looks like the important setting is md_sync_thresh; if it's set to the optimal value, md_num_stripes can be left at 2 x md_sync_window. Is it possible for the script to test a few values for md_sync_thresh between md_sync_window-1 and md_sync_window/2?
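A safe sketch of the candidate list being suggested: a handful of md_sync_thresh values spread between md_sync_window-1 and md_sync_window/2. It is pure arithmetic, so it runs anywhere; a real test harness would then apply each value with `mdcmd set md_sync_thresh <n>` and time a partial parity check.

```shell
# Build a small list of thresh candidates between window-1 and window/2,
# using offsets of window/8, window/16, window/32 and window/64.
window=2048
candidates="$(( window - 1 ))"
for div in 8 16 32 64; do
    candidates="$candidates $(( window - window / div ))"
done
candidates="$candidates $(( window / 2 ))"
echo "$candidates"   # 2047 1792 1920 1984 2016 1024
```

The offsets are an assumption of mine; the point is only that a short sweep over a few intermediate values would catch the sweet spot the two-point test misses.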
  17. The script was not finding the best values for some of my servers, so I've been doing some testing on my test server, since I don't remember for sure how I arrived at the settings I've been using. Some interesting findings; not sure if this helps or makes it more difficult, but after all there was a good reason why I chose my go-to values: because the script only tests 2 sync_thresh settings and md_num_stripes is always 2 x md_sync_window, it can't find the best settings. P.S.: the test server has only LSI controllers, I think that's the reason why nr_requests doesn't make much difference. I can't run the normal test on this server since the parity check finishes in 2.5 minutes.
  18. Check finished. Original settings:

Duration: 5 hours, 32 minutes, 38 seconds. Average speed: 150.3 MB/s

New settings, using ~90% less RAM:

Duration: 5 hours, 33 minutes, 11 seconds. Average speed: 150.1 MB/s

As for the difference in speed reported, see this server as an example: the script reports a max speed at parity check start of ~185MB/s, while unRAID reports ~200MB/s during the first 5% of the check:
  19. Overnight I did a normal test for Tower7; the previous short test report for this server is here.

unRAID Tunables Tester v4.0b3 by Pauven (for unRAID v6.2)
Tunables Report produced Fri Aug 26 00:11:11 BST 2016
Run on server: Tower7
Normal Automatic Parity Sync Test

Current Values: md_num_stripes=4096, md_sync_window=2048, md_sync_thresh=2000
Global nr_requests=8
sdc nr_requests=8
sdd nr_requests=8
sde nr_requests=8
sdf nr_requests=8
sdg nr_requests=8

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 5min Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s
-------------------------------------------------------
   1 | 106 |    4096 |   2048 |    8 |   2000 | 182.4

--- FULLY AUTOMATIC nr_requests TEST 1 (4 Sample Points @ 10min Duration)---
Test | num_stripes | sync_window | nr_requests | sync_thresh | Speed
---------------------------------------------------------------------------
   1 |        1536 |         768 |         128 |         767 | 185.9 MB/s
   2 |        1536 |         768 |         128 |         384 | 184.6 MB/s
   3 |        1536 |         768 |           8 |         767 | 184.7 MB/s
   4 |        1536 |         768 |           8 |         384 | 184.7 MB/s

Fastest vals were nr_reqs=128 and sync_thresh=99% of sync_window at 185.9 MB/s
This nr_requests value will be used for the next test.
--- FULLY AUTOMATIC TEST PASS 1a (Rough - 13 Sample Points @ 5min Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s  | thresh | MB/s
------------------------------------------------------------------------
   1 |  19 |     768 |    384 |  128 |    383 | 184.2 |    192 | 187.6
   2 |  23 |     896 |    448 |  128 |    447 | 184.6 |    224 | 183.1
   3 |  26 |    1024 |    512 |  128 |    511 | 183.7 |    256 | 184.6
   4 |  29 |    1152 |    576 |  128 |    575 | 181.3 |    288 | 184.7
   5 |  33 |    1280 |    640 |  128 |    639 | 187.4 |    320 | 184.7
   6 |  36 |    1408 |    704 |  128 |    703 | 184.4 |    352 | 185.0
   7 |  39 |    1536 |    768 |  128 |    767 | 183.0 |    384 | 181.8
   8 |  43 |    1664 |    832 |  128 |    831 | 183.2 |    416 | 184.5
   9 |  46 |    1792 |    896 |  128 |    895 | 187.5 |    448 | 181.2
  10 |  49 |    1920 |    960 |  128 |    959 | 182.9 |    480 | 184.6
  11 |  53 |    2048 |   1024 |  128 |   1023 | 183.8 |    512 | 182.0
  12 |  56 |    2176 |   1088 |  128 |   1087 | 181.9 |    544 | 184.7
  13 |  59 |    2304 |   1152 |  128 |   1151 | 184.7 |    576 | 182.5

--- FULLY AUTOMATIC TEST PASS 1b (Rough - 5 Sample Points @ 5min Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s  | thresh | MB/s
------------------------------------------------------------------------
   1 |   3 |     128 |     64 |  128 |     63 | 184.3 |     32 | 159.9
   2 |   6 |     256 |    128 |  128 |    127 | 184.5 |     64 | 168.0
   3 |   9 |     384 |    192 |  128 |    191 | 184.4 |     96 | 181.0
   4 |  13 |     512 |    256 |  128 |    255 | 184.5 |    128 | 189.5
   5 |  16 |     640 |    320 |  128 |    319 | 184.5 |    160 | 184.6

--- Targeting Fastest Result of md_sync_window 256 bytes for Final Pass ---

--- FULLY AUTOMATIC nr_requests TEST 2 (4 Sample Points @ 10min Duration)---
Test | num_stripes | sync_window | nr_requests | sync_thresh | Speed
---------------------------------------------------------------------------
   1 |         512 |         256 |         128 |         255 | 184.4 MB/s
   2 |         512 |         256 |         128 |         128 | 184.3 MB/s
   3 |         512 |         256 |           8 |         255 | 184.7 MB/s
   4 |         512 |         256 |           8 |         128 | 184.1 MB/s

Fastest vals were nr_reqs=8 and sync_thresh=99% of sync_window at 184.7 MB/s
This nr_requests value will be used for the next test.
--- FULLY AUTOMATIC TEST PASS 2 (Fine - 33 Sample Points @ 5min Duration)---
Test | RAM | stripes | window | reqs | thresh | MB/s  | thresh | MB/s
------------------------------------------------------------------------
   1 |   6 |     256 |    128 |    8 |    127 | 184.2 |     64 | 166.5
   2 |   7 |     272 |    136 |    8 |    135 | 184.4 |     68 | 167.9
   3 |   7 |     288 |    144 |    8 |    143 | 184.4 |     72 | 166.7
   4 |   7 |     304 |    152 |    8 |    151 | 182.9 |     76 | 175.3
   5 |   8 |     320 |    160 |    8 |    159 | 184.9 |     80 | 171.5
   6 |   8 |     336 |    168 |    8 |    167 | 185.0 |     84 | 172.0
   7 |   9 |     352 |    176 |    8 |    175 | 184.4 |     88 | 171.2
   8 |   9 |     368 |    184 |    8 |    183 | 184.4 |     92 | 173.7
   9 |   9 |     384 |    192 |    8 |    191 | 184.4 |     96 | 175.1
  10 |  10 |     400 |    200 |    8 |    199 | 186.8 |    100 | 180.9
  11 |  10 |     416 |    208 |    8 |    207 | 187.4 |    104 | 182.7
  12 |  11 |     432 |    216 |    8 |    215 | 186.2 |    108 | 182.6
  13 |  11 |     448 |    224 |    8 |    223 | 184.4 |    112 | 184.4
  14 |  12 |     464 |    232 |    8 |    231 | 184.5 |    116 | 187.8
  15 |  12 |     480 |    240 |    8 |    239 | 184.5 |    120 | 184.8
  16 |  12 |     496 |    248 |    8 |    247 | 183.5 |    124 | 184.6
  17 |  13 |     512 |    256 |    8 |    255 | 183.8 |    128 | 183.3
  18 |  13 |     528 |    264 |    8 |    263 | 184.5 |    132 | 187.3
  19 |  14 |     544 |    272 |    8 |    271 | 188.1 |    136 | 184.8
  20 |  14 |     560 |    280 |    8 |    279 | 184.8 |    140 | 184.4
  21 |  14 |     576 |    288 |    8 |    287 | 183.9 |    144 | 184.5
  22 |  15 |     592 |    296 |    8 |    295 | 184.5 |    148 | 187.5
  23 |  15 |     608 |    304 |    8 |    303 | 184.7 |    152 | 184.5
  24 |  16 |     624 |    312 |    8 |    311 | 184.4 |    156 | 184.7
  25 |  16 |     640 |    320 |    8 |    319 | 185.9 |    160 | 182.1
  26 |  17 |     656 |    328 |    8 |    327 | 187.4 |    164 | 184.5
  27 |  17 |     672 |    336 |    8 |    335 | 187.6 |    168 | 184.5
  28 |  17 |     688 |    344 |    8 |    343 | 184.4 |    172 | 184.6
  29 |  18 |     704 |    352 |    8 |    351 | 184.5 |    176 | 184.7
  30 |  18 |     720 |    360 |    8 |    359 | 184.6 |    180 | 187.4
  31 |  19 |     736 |    368 |    8 |    367 | 186.5 |    184 | 184.6
  32 |  19 |     752 |    376 |    8 |    375 | 186.1 |    188 | 184.5
  33 |  19 |     768 |    384 |    8 |    383 | 184.5 |    192 | 184.7

The results below do NOT include the Basline test of current values.
The Fastest Sync Speed tested was md_sync_window=272 at 188.1 MB/s
     Tunable (md_num_stripes): 544
     Tunable (md_sync_window): 272
     Tunable (md_sync_thresh): 271
     Tunable (nr_requests): 8
This will consume 14 MB with md_num_stripes=544, 2x md_sync_window.
This is 92MB less than your current utilization of 106MB.

The Thriftiest Sync Speed tested was md_sync_window=64 at 184.3 MB/s
     Tunable (md_num_stripes): 128
     Tunable (md_sync_window): 64
     Tunable (md_sync_thresh): 63
     Tunable (nr_requests): 8
This will consume 3 MB with md_num_stripes=128, 2x md_sync_window.
This is 103MB less than your current utilization of 106MB.

The Recommended Sync Speed is md_sync_window=208 at 187.4 MB/s
     Tunable (md_num_stripes): 416
     Tunable (md_sync_window): 208
     Tunable (md_sync_thresh): 207
     Tunable (nr_requests): 8
This will consume 10 MB with md_num_stripes=416, 2x md_sync_window.
This is 96MB less than your current utilization of 106MB.

NOTE: Adding additional drives will increase memory consumption.

In unRAID, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 10 Hrs 17 Min 54 Sec.

NOTE: Use the smallest set of values that produce good results. Larger values
increase server memory use, and may cause stability issues with unRAID,
especially if you have any add-ons or plug-ins installed.
System Info: Tower7
unRAID version 6.2.0-rc4
md_num_stripes=4096
md_sync_window=2048
md_sync_thresh=2000
nr_requests=8 (Global Setting)
sbNumDisks=6
CPU: Intel(R) Xeon(R) CPU E31220 @ 3.10GHz
RAM: 32GiB System Memory

Outputting lshw information for Drives and Controllers:

H/W path            Device     Class    Description
=======================================================
/0/100/6/0          scsi1      storage  ASC-1405 Unified Serial HBA
/0/100/6/0/0.1.0    /dev/sdk   disk     120GB KINGSTON SV300S3
/0/100/6/0/0.2.0    /dev/sdl   disk     120GB KINGSTON SV300S3
/0/100/6/0/0.3.0    /dev/sdm   disk     120GB KINGSTON SV300S3
/0/100/6/0/0.0.0    /dev/sdj   disk     120GB KINGSTON SV300S3
/0/100/1c/0                    storage  ASM1062 Serial ATA Controller
/0/100/1f.2                    storage  6 Series/C200 Series Chipset Family SATA AHCI Controller
/0/1                scsi0      storage
/0/1/0.0.0          /dev/sda   disk     7864MB DataTraveler 2.0
/0/1/0.0.0/0        /dev/sda   disk     7864MB
/0/2                scsi2      storage
/0/2/0.0.0          /dev/sdb   disk     512GB TS512GSSD370S
/0/3                scsi3      storage
/0/3/0.0.0          /dev/sdc   disk     3TB TOSHIBA DT01ACA3
/0/8                scsi4      storage
/0/8/0.0.0          /dev/sdd   disk     3TB TOSHIBA DT01ACA3
/0/9                scsi5      storage
/0/9/0.0.0          /dev/sde   disk     3TB TOSHIBA DT01ACA3
/0/a                scsi6      storage
/0/a/0.0.0          /dev/sdf   disk     3TB TOSHIBA DT01ACA3
/0/b                scsi7      storage
/0/b/0.0.0          /dev/sdg   disk     3TB TOSHIBA DT01ACA3
/0/c                scsi8      storage
/0/c/0.0.0          /dev/sdh   disk     180GB INTEL SSDSC2CT18
/0/d                scsi9      storage
/0/d/0.0.0          /dev/sdi   disk     500GB TOSHIBA MK5055GS

Array Devices:
Disk0 sdc is a Parity drive named parity
Disk1 sdd is a Data drive named disk1
Disk2 sde is a Data drive named disk2
Disk3 sdf is a Data drive named disk3
Disk4 sdg is a Data drive named disk4

Outputting free low memory information...
              total        used        free      shared  buff/cache   available
Mem:       32991160     5420312    25549684      430264     2021164    26694448
Low:       32991160     7441476    25549684
High:             0           0           0
Swap:             0           0           0

*** END OF REPORT ***

Note the much more consistent results. I'm now running a parity check with the fastest values found; I don't expect much speed improvement as this is a very simple server, but if it remains similar to before it means I had exaggeratedly high values and was wasting a lot of RAM.

Suggestion: since the short test is now much faster, how about doubling (or even tripling) each sample time? I believe this would make the short test much more accurate, helping each user decide if it's worth doing the 10 hour normal test.

P.S.: could there be a difference in how speeds are reported between the script and unRAID, like one using MiB/s and the other MB/s? I noticed this with the original script: the reported unRAID speed is usually about 8% higher than the script's. Not that it really matters, since the point is finding the best reported speed, but it can make the script results look slower when in fact they're not.
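[Editorial note on the units hypothesis above: quick arithmetic shows that the same speed expressed in MB/s reads about 4.9% higher than in MiB/s (1 MiB = 2^20 bytes vs 1 MB = 10^6 bytes), so a MiB-vs-MB mismatch would account for part, though not all, of the ~8% gap observed.]

```shell
# Ratio between one MiB and one MB; anything above 1.0 is the percentage
# by which an MB/s figure overstates the same speed given in MiB/s.
awk 'BEGIN { printf "%.4f\n", 2^20 / 10^6 }'   # prints 1.0486
```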
  20. Power consumption is also important to me. Most of my servers are storage only; I turn them on once a week to move data. The smallest server, Tower7, is my VM and docker server and the only one that's always on. The SSD server also sees more usage since it's where I store ongoing TV seasons; when a season is complete it's archived to one of the other servers, after checksums and par2s are created. These are approximate numbers since it's been a while since I measured, with all disks spun up:

Tower1 and 6 (22 HDDs): 180W
Tower2 and 3 (14 HDDs): 130W
Tower4 (8 HDDs) and Tower5 (30 SSDs; the LSI controllers and the expander are the big users, ~10W each): 90W
Tower7 (6 HDDs + 6 SSDs): 90W (~60W during normal use with all or all but one disk spun down)

I only have 2 900VA/540W UPSes for all the servers plus my desktop; with everything on they get close to 500W load each. Besides the very low runtime, all the servers are in a smallish office-type room, so it gets pretty hot quickly; it's nice in the winter.
  21. I'm not home now and the server is off, but I'll check when I get home, though I'm not sure what to check...
  22. IIRC, 25% was very slow. Most of my controllers worked better with sync_thresh close to md_sync_window, except the SASLP and the SAS2LP if nr_requests was set at default; as using nr_requests=8 "fixed" the SAS2LP, I could then set a high sync_thresh, and so I found that these values were almost universally good and became my go-to defaults. I don't remember why some have a higher num_stripes; I believe results were similar with 4400 or 4096. My usual default is 4096/2048/2000 with nr_requests=8, but on a server without any Marvell based controller nr_requests can be left at default. I believe my servers are running close to maximum speed, as they are disk limited (CPU limited for the SSD server); I wanted to participate more to help refine the script so more people can benefit, though I still wouldn't mind getting a few more MB/s out of them.
  23. Basically I found those settings worked great on all my servers, independent of the controllers used. At the time I used your old script, and because it didn't test for md_sync_thresh I manually entered round values, e.g., I would do 4 runs with sync_thresh manually set at 500, 1000, 1500 and 2000 and picked the value with the best result. I also found that in most cases there was only a noticeable difference when the value approached half sync_window, e.g., with sync_window set at 2048, performance was very similar with sync_thresh set at 1500, 2000 or 2047.