
JorgeB

Moderators
  • Posts: 64,060
  • Joined
  • Last visited
  • Days Won: 676

Everything posted by JorgeB

  1. Thanks, I was waiting for this to update my server. I'm getting a new refresh button where the IO toggle used to be; is this because of UD?
  2. Doesn't work on v6.2 stable. On purpose or by mistake? 6.2 stable is not lower than rc5.

     plugin: installing: https://raw.githubusercontent.com/bergware/dynamix/master/unRAIDv6/dynamix.bleeding.edge.plg
     plugin: downloading https://raw.githubusercontent.com/bergware/dynamix/master/unRAIDv6/dynamix.bleeding.edge.plg
     plugin: downloading: https://raw.githubusercontent.com/bergware/dynamix/master/unRAIDv6/dynamix.bleeding.edge.plg ... done
     plugin: installed unRAID version is too low, require at least version 6.2.0-rc5
  3. Probably because it's a SAS disk, not because there's a problem, but I'll let someone with SAS disk experience pitch in.
  4. Regarding the new notifications, I really like them, but if the icons were white/grey when there are no notifications and changed to green/yellow/red when there is one, they would be much more noticeable. As it is, I'm afraid it could be some time before I notice a new one, though for warnings and alerts I also get an email. As an alternative, you could have only one icon whose color changes according to the most serious notification, while the number indicates the total amount, e.g.:

     white/grey - no notifications
     green - notices only
     yellow - at least one of the notifications is a warning
     red - at least one of the notifications is an alert

     Don't know if it would be an easy change, but you asked for suggestions.
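The single-icon idea above boils down to a "most serious severity wins" rule. A minimal sketch of that rule as a shell function; the severity names (notice/warning/alert) and colors are illustrative, not unRAID's actual internals:

```shell
# Map a list of pending notification severities to one icon color.
# Precedence: alert (red) > warning (yellow) > notice (green) > none (grey).
most_serious_color() {   # args: zero or more of notice|warning|alert
  local color="grey" level
  for level in "$@"; do
    case $level in
      alert)   color="red"; break ;;              # alert always wins, stop early
      warning) color="yellow" ;;                   # overrides green/grey
      notice)  [ "$color" = "grey" ] && color="green" ;;
    esac
  done
  echo "$color"
}

most_serious_color notice warning notice   # prints "yellow"
```

The notification count would simply be the number of arguments; only the color needs the precedence logic.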
  5. AFAIK it's not possible at the moment. I would also like to request this, as any time I do longer writes to my cache SSD it goes above 50C, making all fans speed up unnecessarily.
  6. Thanks for the script, very useful. One suggestion: since vdisks are sparse by default, use --sparse with rsync (or use cp instead) to make the backups. Say you have a 120GB vdisk but only 20GB are allocated: the way it is now the backup will use 120GB, while with --sparse or cp it would only use 20GB.
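A quick self-contained demo of the sparse-copy difference (filenames are illustrative; assumes GNU coreutils, and the rsync step is skipped if rsync isn't installed):

```shell
workdir=$(mktemp -d) && cd "$workdir"

truncate -s 100M vdisk.img                  # 100MB apparent size, ~0 bytes allocated
printf 'data' | dd of=vdisk.img conv=notrunc 2>/dev/null   # allocate a few bytes

cp --sparse=always vdisk.img backup-cp.img  # cp keeps the holes

# rsync needs --sparse explicitly; without it the copy allocates the full 100MB
if command -v rsync >/dev/null; then
  rsync --sparse vdisk.img backup-rsync.img
fi

du -h vdisk.img backup-cp.img               # allocated sizes stay tiny
```

`du` reports allocated blocks while `ls -l`/`stat` report the apparent size, which is how you can verify a backup stayed sparse.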
  7. It doesn't, though you can achieve better performance with a managed switch using VLANs, see here: http://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html The main issue with this mode, as you mentioned, is that it only works with other Linux computers.
  8. Only mode 4 requires a smart or managed switch; all the others work with any switch.
  9. unRAID FAQ - Can I change my cache pool to RAID0 or other modes?
  10. I've run 5+ preclears at once without issues.
  11. Nice! One suggestion: if turbo write is disabled, enable it before starting; clearing can be up to 3 times faster. As for the progress info, there's an easy way for v6.2: if the script can check the current unRAID version, you can add status=progress to the dd command.
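A hedged sketch of how the script could add status=progress only when supported (status=progress needs coreutils >= 8.24, which ships with unRAID 6.2). Feature-detecting dd's help text is an assumption on my part; parsing the unRAID version would work just as well:

```shell
# Add status=progress to dd only when this coreutils build supports it.
DD_STATUS=""
if dd --help 2>&1 | grep -q 'progress'; then
  DD_STATUS="status=progress"
fi

# Example clear-style invocation (progress, if enabled, goes to stderr).
dd if=/dev/zero of=/dev/null bs=1M count=16 $DD_STATUS
```

Leaving $DD_STATUS empty on older versions means the same dd line works everywhere.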
  12. Thanks bonienl, looks cool, and certainly more useful than the reads/writes numbers that don't mean much. I also see some heavily fluctuating speeds, from ~170MB/s to over 400MB/s on my test server; maybe a higher sample time, if possible, would provide more accurate readings. Any way you could add another toggle to show the total bytes read/written (reset when pressing clear statistics)? This could also be useful. P.S.: it doesn't work on my cache disk, is this expected?
  13. I see your point; I think it's better for the script to take a few hours more but make sure it finds the optimal values.
  14. 16 hours is not that long for a test most people only need to run once, but I wonder if running the nr_requests test before or after test 2 wouldn't give similar results in less time, i.e., after test 1, test nr_requests at 8/16/128 with the best result from test 1 and use the better value from then on; or, as an alternative, run it after test 2, so the last test would be done with a single nr_requests value.
  15. How about using the first test to find only sync_window and sync_thresh? It looks to me like, with nr_requests at its default, there's a better chance of finding the optimal sync_thresh. It also looks like the best sync_thresh is the same (or, in the case of your last test, practically the same) across the various nr_requests values, so after finding the optimal window and thresh values you could run a test on those changing only nr_requests. I believe this would be faster and provide better results than trying to find optimal values for all 3 settings at the same time.
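The two-stage search suggested above can be sketched in shell. Here measure_speed is a hypothetical stand-in for timing a short parity-check sample at the given values; the mock below just peaks near thresh = window - 20 so the loop structure can be demonstrated, and the candidate thresh/nr_requests lists are illustrative:

```shell
# Stand-in for a real timed parity-check sample; replace with a real measurement.
measure_speed() {   # args: window thresh nr_reqs -> prints MB/s as an integer
  local window=$1 thresh=$2
  local diff=$(( window - 20 - thresh ))
  [ "$diff" -lt 0 ] && diff=$(( -diff ))
  echo $(( 340 - diff / 4 ))
}

window=2048
best_thresh=0 best_speed=0

# Stage 1: sweep sync_thresh with nr_requests left at its default (128).
for thresh in 1024 1536 1792 1984 2016 2028 2040 2047; do
  speed=$(measure_speed "$window" "$thresh" 128)
  if [ "$speed" -gt "$best_speed" ]; then
    best_speed=$speed; best_thresh=$thresh
  fi
done
echo "best thresh: $best_thresh (${best_speed}MB/s)"

# Stage 2: only now vary nr_requests, holding the winning thresh fixed.
for reqs in 8 16 128; do
  echo "nr_requests=$reqs -> $(measure_speed "$window" "$best_thresh" "$reqs")MB/s"
done
```

Searching one dimension at a time like this is only valid if the settings are roughly independent, which is exactly what the results so far suggest.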
  16. Agree, these settings may be optimal for some servers. I didn't have much time this week and intend to do more testing with different controllers later, but all results point to an optimal setting that is usually a little below sync_window; I'm just not sure if there is a set value, like -60, that is optimal for everything. Ideally the script would run a short test in the beginning to try and find it. I retested a single LSI9211 with larger (and faster) SSDs in the hope of seeing better defined results, and while they are, the optimal thresh value changed from the previous tests (previous tests were done with 2 controllers at the same time, which may explain the difference, but I don't have 16 of the largest SSDs to test with both again, and using only one controller with the smallest SSDs won't help either because results would be limited by their max speed in almost all tests).

     Sync_window=2048:

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        4096 |   2048 |     128 |   2047 | 289.7MB/s
        4096 |   2048 |     128 |   2040 | 321.7MB/s
        4096 |   2048 |     128 |   2036 | 335.2MB/s
        4096 |   2048 |     128 |   2032 | 337.0MB/s
        4096 |   2048 |     128 |   2028 | 340.5MB/s
        4096 |   2048 |     128 |   2024 | 333.5MB/s
        4096 |   2048 |     128 |   2016 | 330.0MB/s
        4096 |   2048 |     128 |   1984 | 330.0MB/s
        4096 |   2048 |     128 |   1960 | 330.0MB/s
        4096 |   2048 |     128 |   1952 | 330.0MB/s
        4096 |   2048 |     128 |   1920 | 330.0MB/s
        4096 |   2048 |     128 |   1856 | 325.0MB/s
        4096 |   2048 |     128 |   1792 | 326.6MB/s
        4096 |   2048 |     128 |   1536 | 323.3MB/s
        4096 |   2048 |     128 |   1280 | 320.1MB/s
        4096 |   2048 |     128 |   1024 | 314.4MB/s

     Same sync_window but nr_requests=8 for the 4 fastest results (like before, it looks like it doesn't make a big difference with LSI controllers):

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        4096 |   2048 |       8 |   2036 | 337.0MB/s
        4096 |   2048 |       8 |   2032 | 340.5MB/s
        4096 |   2048 |       8 |   2028 | 340.5MB/s
        4096 |   2048 |       8 |   2024 | 335.2MB/s

     Sync_window=1024 and nr_requests back to default:

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        2048 |   1024 |     128 |   1023 | 293.7MB/s
        2048 |   1024 |     128 |   1016 | 328.3MB/s
        2048 |   1024 |     128 |   1012 | 331.7MB/s
        2048 |   1024 |     128 |   1008 | 333.5MB/s
        2048 |   1024 |     128 |   1004 | 337.0MB/s
        2048 |   1024 |     128 |   1000 | 325.0MB/s
        2048 |   1024 |     128 |    996 | 316.9MB/s

     Sync_window=3072:

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        6144 |   3072 |     128 |   3071 | 295.0MB/s
        6144 |   3072 |     128 |   3064 | 321.7MB/s
        6144 |   3072 |     128 |   3056 | 335.2MB/s
        6144 |   3072 |     128 |   3052 | 337.0MB/s
        6144 |   3072 |     128 |   3048 | 333.5MB/s
        6144 |   3072 |     128 |   3040 | 333.5MB/s
        6144 |   3072 |     128 |   3032 | 331.7MB/s
        6144 |   3072 |     128 |   3024 | 331.7MB/s
        6144 |   3072 |     128 |   3016 | 326.6MB/s

     Best results were always with thresh = sync_window - 20; in the previous tests with 2 controllers the best setting for thresh was sync_window - 60.
  17. I don't know what the upper limit is, but I tried up to 131072 and it works. I didn't go any higher; I doubt a higher number will help unRAID, but only testing can confirm.
  18. Interesting results, looking forward to the normal test results. PS: are you sure nr_requests=1 works? You can check the current value after setting it to 1; for me it never goes lower than 4:

     cat /sys/block/sdX/queue/nr_requests
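To illustrate the read-back check: the kernel silently clamps nr_requests to its minimum, so a write can "succeed" without taking effect. A small read-only sketch (sdX above is a placeholder; this demo just picks the first device that exposes the queue file):

```shell
# Print the effective nr_requests of the first block device found.
for q in /sys/block/*/queue/nr_requests; do
  [ -r "$q" ] || continue
  echo "$q = $(cat "$q")"
  break
done

# To verify the clamp (needs root):
#   echo 1 > /sys/block/sdX/queue/nr_requests
#   cat /sys/block/sdX/queue/nr_requests    # reads back the minimum, not 1
```

This is why any tuning script should read the value back after setting it rather than trusting the write.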
  19. Requested values tests:

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        4096 |   2048 |     128 |   2047 | 72.4MB/s
        4096 |   2048 |     128 |   2040 | 76.8MB/s
        4096 |   2048 |     128 |   2032 | 78.3MB/s
        4096 |   2048 |     128 |   2024 | 78.9MB/s
        4096 |   2048 |     128 |   2016 | 80.0MB/s
        4096 |   2048 |     128 |   1984 | 80.0MB/s
        4096 |   2048 |     128 |   1960 | 79.8MB/s
        4096 |   2048 |     128 |   1952 | 80.0MB/s
        4096 |   2048 |     128 |   1920 | 79.8MB/s
        4096 |   2048 |     128 |   1856 | 79.8MB/s
        4096 |   2048 |     128 |   1792 | 79.8MB/s
        4096 |   2048 |     128 |   1728 | 79.8MB/s
        4096 |   2048 |     128 |   1664 | 78.5MB/s
        4096 |   2048 |     128 |   1536 | 77.7MB/s
        4096 |   2048 |     128 |   1280 | 77.5MB/s
        4096 |   2048 |     128 |   1024 | 77.1MB/s

     78.8 to 80.0MB/s is a single second difference in total time.

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        4096 |   2048 |     128 |   1024 | 77.1MB/s
        4096 |   2048 |      64 |   1024 | 77.5MB/s
        4096 |   2048 |      32 |   1024 | 77.3MB/s
        4096 |   2048 |      16 |   1024 | 79.6MB/s
        4096 |   2048 |       8 |   1024 | 79.8MB/s
        4096 |   2048 |       4 |   1024 | 80.0MB/s
        4096 |   2048 |       1 |   1024 | ? MB/s

     Although it can be set to 1 in unRAID, it will remain at 4; I believe that is the minimum possible setting.
  20. Script is not patched, don't forget you need to patch the one located in "/boot/config/plugins/preclear.disk".
  21. All my previous tests were done using 2 LSI 9211s (flashed H310s). I now did some tests using a SASLP; since it's bandwidth challenged and a parity check takes more time, the differences should be more noticeable, and it also responds differently to the tunable changes. Only thresh was changed to find the optimal values:

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        4096 |   2048 |     128 |   2047 | 72.4MB/s
        4096 |   2048 |     128 |   2016 | 80.0MB/s
        4096 |   2048 |     128 |   1984 | 80.0MB/s
        4096 |   2048 |     128 |   1952 | 80.0MB/s
        4096 |   2048 |     128 |   1920 | 79.8MB/s
        4096 |   2048 |     128 |   1856 | 79.8MB/s
        4096 |   2048 |     128 |   1792 | 79.8MB/s
        4096 |   2048 |     128 |   1728 | 79.8MB/s
        4096 |   2048 |     128 |   1664 | 78.5MB/s
        4096 |   2048 |     128 |   1536 | 77.7MB/s
        4096 |   2048 |     128 |   1280 | 77.5MB/s
        4096 |   2048 |     128 |   1024 | 77.1MB/s

     With a sync_window of 2048 there's a big range where it works very well, from ~1728 to ~2016, with an apparent sweet spot from ~1950 to ~2000, and like the LSI neither sync_window-1 nor sync_window/2 provides the best results. Note also that with nr_requests=8 this controller always performs at optimal speed, making the thresh setting practically irrelevant. Of course, if this controller is used together with one that responds differently, the trick is to find the best values with both together. Using nr_requests=8 with the 2 slowest thresh values:

     stripes | window | nr_reqs | thresh | Speed
     -----------------------------------------------
        4096 |   2048 |       8 |   2047 | 79.8MB/s
        4096 |   2048 |       8 |   1024 | 79.8MB/s

     Next I'm going to test the SAS2LP, same chipset as your controller, but since I don't have a spare I'll have to use one from a server, so I'll do it as soon as I can. IIRC the results were similar to the SASLP but with much bigger differences.