
JorgeB

Moderators
  • Posts

    63,741
  • Joined

  • Last visited

  • Days Won

    674

Everything posted by JorgeB

  1. Did you see my post above about editing VMs? Is that just happening to me?
  2. The only current way to get more than gigabit speeds on a single transfer from Windows to unRAID is 10GbE; it may become possible in the near future as Samba improves its SMB multichannel support.
  3. I have an issue with the latest 2016.09.18c: I can't edit any VM, with either edit or edit XML; nothing happens after pressing update, even if nothing is changed. Uninstalling and rebooting fixed it.
  4. Thanks, that was it, I somehow disabled that by mistake.
  5. Thanks, I was waiting for this to update my server. I'm getting a new refresh button in the place where the IO toggle used to be; is this because of UD?
  6. Doesn't work on v6.2 stable, on purpose or by mistake? 6.2 stable is not lower than rc5.

     plugin: installing: https://raw.githubusercontent.com/bergware/dynamix/master/unRAIDv6/dynamix.bleeding.edge.plg
     plugin: downloading https://raw.githubusercontent.com/bergware/dynamix/master/unRAIDv6/dynamix.bleeding.edge.plg
     plugin: downloading: https://raw.githubusercontent.com/bergware/dynamix/master/unRAIDv6/dynamix.bleeding.edge.plg ... done
     plugin: installed
     unRAID version is too low, require at least version 6.2.0-rc5
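One plausible cause (an assumption on my part, not confirmed from the plugin source): if the .plg compares version strings lexicographically, "6.2.0" sorts before "6.2.0-rc5" simply because it's a prefix of it, so stable is wrongly flagged as older than the rc:

```shell
#!/bin/bash
# Naive lexicographic comparison treats the shorter prefix string as
# "lower", so 6.2.0 stable looks older than 6.2.0-rc5.
if [[ "6.2.0" < "6.2.0-rc5" ]]; then
  echo "6.2.0 treated as lower than 6.2.0-rc5"
fi
```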
  7. Probably because it's a SAS disk, not because there's a problem, but I'll let someone with SAS disk experience pitch in.
  8. Regarding the new notifications, I really like them, but if the icons were white/grey when there are no notifications and changed to green/yellow/red when there is one, they would be much more noticeable; as is, I'm afraid it could be some time before I notice a new one, though for warnings and alerts I also get an email. As an alternative you could have only one icon whose color changes according to the most serious notification, while the number indicates the total amount, e.g.:

     white/grey - no notifications
     green - notices only
     yellow - at least one of the notifications is a warning
     red - at least one of the notifications is an alert

     Don't know if it would be an easy change, but you asked for suggestions
  9. AFAIK it's not possible at the moment. I would also like to request this, as any time I do longer writes to my cache SSD it goes above 50C, making all the fans speed up unnecessarily.
  10. Thanks for the script, very useful. One suggestion: since vdisks are sparse by default, use --sparse with rsync (or use cp instead) to make the backups. Say you have a 120GB vdisk but only 20GB are allocated: the way it is now the backup will use 120GB, while with --sparse or cp it would only use 20GB.
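A minimal sketch of the suggestion (the vdisk here is a stand-in file, not a real VM image):

```shell
#!/bin/bash
set -e
# Create a stand-in sparse "vdisk": large virtual size, almost nothing allocated.
truncate -s 100M vdisk1.img

# cp preserves holes by default (--sparse=auto), so the copy stays sparse:
cp --sparse=auto vdisk1.img vdisk1.cp.img

# rsync needs --sparse explicitly, otherwise the destination is fully allocated:
if command -v rsync >/dev/null; then
  rsync --sparse vdisk1.img vdisk1.rsync.img
fi

# Virtual size (ls) vs actual allocated size (du) of the backup:
ls -lh vdisk1.cp.img
du -h  vdisk1.cp.img
```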
  11. It doesn't, though you can achieve better performance with a managed switch using VLANs, see here: http://louwrentius.com/achieving-450-mbs-network-file-transfers-using-linux-bonding.html The main issue with this mode, like you mentioned, is that it only works with other Linux computers.
  12. Only mode 4 requires a smart or managed switch; all the others work with any switch.
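For reference, the Linux bonding driver's numeric modes map to these names (mode 4, 802.3ad/LACP, is the one that needs a managed switch); a quick sketch printing the mapping:

```shell
#!/bin/bash
# Linux bonding driver modes, as exposed (name plus number) in
# /sys/class/net/bondX/bonding/mode.
modes=(
  "0 balance-rr"
  "1 active-backup"
  "2 balance-xor"
  "3 broadcast"
  "4 802.3ad"       # LACP: requires a managed/smart switch
  "5 balance-tlb"
  "6 balance-alb"
)
printf '%s\n' "${modes[@]}"

# On a live bond you can check the active mode with:
# cat /sys/class/net/bond0/bonding/mode
```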
  13. unRAID FAQ - Can I change my cache pool to RAID0 or other modes?
  14. I've run 5+ preclears at once without issues.
  15. Nice! One suggestion: if disabled, enable turbo write before starting; clearing can be up to 3 times faster. As for the progress info, there's an easy way for v6.2: if the script can check the current unRAID version, you can add status=progress to the dd command.
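A sketch of the dd change; the target here is a stand-in file so it can be run safely (status=progress needs GNU coreutils 8.24+, which I believe v6.2 ships, hence the version check):

```shell
#!/bin/bash
# Zero a target while printing live transfer progress. On a real clear
# the target would be the disk itself (e.g. /dev/sdX, no count= limit);
# here it's a scratch file so the example is safe to run.
TARGET=./fake-disk.img   # stand-in for /dev/sdX

dd if=/dev/zero of="$TARGET" bs=1M count=8 status=progress
```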
  16. Thanks bonienl, looks cool, and certainly more useful than the reads/writes numbers that don't mean much. I also see some heavily fluctuating speeds, from ~170MB/s to over 400MB/s on my test server; maybe a higher sample time, if possible, would provide more accurate readings. Any way you could add another toggle to show the total bytes read/written (reset when pressing clear statistics)? This could also be useful. P.S.: it doesn't work on my cache disk, is this expected?
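The fluctuation is what you'd expect when the speed is derived from short /proc/diskstats samples; a sketch of the arithmetic (sectors-read is field 6 of a diskstats line, 1 sector = 512 bytes), showing that a longer interval averages more I/O per reading:

```shell
#!/bin/bash
# Approximate throughput from two sector counters taken some seconds
# apart, the way a stats page would sample /proc/diskstats.
mb_per_sec() {
  local sectors_before=$1 sectors_after=$2 interval=$3
  echo $(( (sectors_after - sectors_before) * 512 / interval / 1000000 ))
}

# Made-up counter values 1 second apart: 660000 sectors read.
mb_per_sec 1000000 1660000 1   # (660000 * 512 B) / 1 s = 337 MB/s
```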
  17. I see your point, I think it's better for the script to take a few hours more but make sure it finds the optimal values.
  18. 16 hours is not that long for a test most people only need to run once, but I wonder if running the nr_requests test before or after test 2 wouldn't give similar results in less time, i.e., after test 1, test nr_requests at 8/16/128 with the best result from test 1 and use the better result from then on; or, as an alternative, run it after test 2. This way the last test would be done with a single nr_requests value.
  19. How about using the first test to find only sync_window and sync_thresh? Looks to me like with nr_requests at its default there's a better chance of finding the optimal sync_thresh. It also looks like the best sync_thresh is the same (or, in the case of your last test, practically the same) across the various nr_requests values, so after finding the optimal window and thresh values you could do a test on those changing only nr_requests. I believe this would be faster and provide better results than trying to find optimal values for all 3 settings at the same time.
  20. Agree, these settings may be optimal for some servers, and while I didn't have much time this week and intend to do more testing with different controllers later, all results point to an optimal setting that is usually a little below sync_window; I'm just not sure if there is a set value, like -60, that is optimal for everything. Ideally the script would run a short test in the beginning to try and find it. I retested a single LSI 9211 with larger (and faster) SSDs in the hope of seeing better defined results, and while they are, the optimal thresh value changed from the previous tests (previous tests were done with 2 controllers at the same time, which may explain the difference, but I don't have 16 of the largest SSDs to test with both again, and using only one controller with the smallest SSDs won't help either because results would be limited by their max speed in almost all tests).

Sync_window=2048

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |     128 |   2047 | 289.7MB/s
   4096 |   2048 |     128 |   2040 | 321.7MB/s
   4096 |   2048 |     128 |   2036 | 335.2MB/s
   4096 |   2048 |     128 |   2032 | 337.0MB/s
   4096 |   2048 |     128 |   2028 | 340.5MB/s
   4096 |   2048 |     128 |   2024 | 333.5MB/s
   4096 |   2048 |     128 |   2016 | 330.0MB/s
   4096 |   2048 |     128 |   1984 | 330.0MB/s
   4096 |   2048 |     128 |   1960 | 330.0MB/s
   4096 |   2048 |     128 |   1952 | 330.0MB/s
   4096 |   2048 |     128 |   1920 | 330.0MB/s
   4096 |   2048 |     128 |   1856 | 325.0MB/s
   4096 |   2048 |     128 |   1792 | 326.6MB/s
   4096 |   2048 |     128 |   1536 | 323.3MB/s
   4096 |   2048 |     128 |   1280 | 320.1MB/s
   4096 |   2048 |     128 |   1024 | 314.4MB/s

Same sync_window but nr_requests=8 for the 4 fastest results (like before, it looks like it doesn't make a big difference with LSI controllers):

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   4096 |   2048 |       8 |   2036 | 337.0MB/s
   4096 |   2048 |       8 |   2032 | 340.5MB/s
   4096 |   2048 |       8 |   2028 | 340.5MB/s
   4096 |   2048 |       8 |   2024 | 335.2MB/s
Sync_window=1024 and nr_requests back to default:

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   2048 |   1024 |     128 |   1023 | 293.7MB/s
   2048 |   1024 |     128 |   1016 | 328.3MB/s
   2048 |   1024 |     128 |   1012 | 331.7MB/s
   2048 |   1024 |     128 |   1008 | 333.5MB/s
   2048 |   1024 |     128 |   1004 | 337.0MB/s
   2048 |   1024 |     128 |   1000 | 325.0MB/s
   2048 |   1024 |     128 |    996 | 316.9MB/s

Sync_window=3072

stripes | window | nr_reqs | thresh | Speed
-----------------------------------------------------------
   6144 |   3072 |     128 |   3071 | 295.0MB/s
   6144 |   3072 |     128 |   3064 | 321.7MB/s
   6144 |   3072 |     128 |   3056 | 335.2MB/s
   6144 |   3072 |     128 |   3052 | 337.0MB/s
   6144 |   3072 |     128 |   3048 | 333.5MB/s
   6144 |   3072 |     128 |   3040 | 333.5MB/s
   6144 |   3072 |     128 |   3032 | 331.7MB/s
   6144 |   3072 |     128 |   3024 | 331.7MB/s
   6144 |   3072 |     128 |   3016 | 326.6MB/s

Best results were always with thresh = sync_window - 20; in the previous tests with 2 controllers the best setting for thresh was sync_window - 60.
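The pattern above (the optimal thresh sitting a fixed offset below sync_window) could be probed with a short sweep at the start of the script; a sketch generating the candidate values, with the actual tunable change left commented since it must run on an unRAID server:

```shell
#!/bin/bash
# Sweep md_sync_thresh candidates at fixed offsets below sync_window,
# mirroring the tables above; the offsets bracket the observed sweet
# spot (roughly 12-60 below the window).
SYNC_WINDOW=2048
for offset in 1 8 12 16 20 24 32 64; do
  thresh=$(( SYNC_WINDOW - offset ))
  echo "testing thresh=$thresh"
  # On the server you would apply the value and time a partial check:
  # mdcmd set md_sync_thresh $thresh
done
```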
  21. I don't know what the upper limit is, but I tried up to 131072 and it works; I didn't go any higher. I doubt a higher number will help unRAID though, but only testing can confirm.
  22. Interesting results, looking forward to the normal test results. PS: are you sure nr_requests=1 works? You can check the current value after setting it; for me it never goes lower than 4:

      cat /sys/block/sdX/queue/nr_requests