
Everything posted by JorgeB

  1. +1 On v6.1 you could change md_write_limit manually in disk.cfg, and I did several tests at the time and never noticed any difference in write speed; maybe the setting was needless and that's why it was removed. AFAIK Tom never explained it, nor how stripes vs. window works. I find that I get the best results with md_num_stripes set to approximately twice the md_sync_window value. nr_requests is not an unRAID tunable, it's a Linux setting; it was a workaround found to fix the SAS2LP issue before the md_sync_thresh tunable was added. It's possible that changing it affects write performance, though I believe nobody ever noticed any issue, and it can still be useful, e.g., if a server has both a SAS2LP and an LSI controller, setting nr_requests to 8 and md_sync_thresh close to md_sync_window gives the best performance (see the nr_requests sketch after this list).
  2. Great to see you working on the script again. A lot has changed since 5.0, the biggest difference being the addition of a new tunable: md_sync_thresh; you can read Tom's explanation here. This tunable was added to fix the parity check performance issues some users were having with the SAS2LP. IIRC, your controller uses the same chipset; basically, the SASLP and SAS2LP work faster if this value is approximately half the md_sync_window value, while other controllers, like most LSI-based ones, work faster with an md_sync_thresh just a little lower than md_sync_window. Hope you can include testing for this new tunable in your plugin.
  3. Working great, many thanks! Would it be easy to have the sleep repeat in case of drive activity, i.e., call the sleep script again and wait another xx seconds, until there's no activity, and then shut down?
  4. Disks are always spinning when I execute the script, because it's just a few minutes after turning the server on and copying a few GBs to the cache, usually once or twice a week.
  5. Thanks! No rush, I'll try this one tomorrow.
  6. If you're doing the other one I would still prefer it, as it avoids the server being on unnecessarily for an extra 15 minutes, the shortest spin-down delay you can set.
  7. Thanks, the easy way would work for me, but like I said I'm totally clueless with scripts, so how do I add a line to check if they're all spun down? This is my current script:
     /usr/local/sbin/mover
     /sbin/fstrim -v /mnt/cache
     /usr/bin/sleep 180
     /usr/local/sbin/powerdown
     I'd like to replace the sleep line with the check for spun-down disks.
  8. I'm trying to make a very basic script for one of my servers, and wonder if anyone can help as I'm clueless with them. This is the script I need:
     1 - run mover
     2 - trim cache
     3 - check for array inactivity (monitor reads/writes?); if inactive for, say, 60 seconds, go to 4
     4 - powerdown
     1, 2 and 4 are easy, any ideas how I can do 3? (see the inactivity-check sketch after this list)
  9. I was comparing your findings on one of my servers and it appears the preclear beta uses much less RAM than on yours; I have the array started doing a dual disk upgrade and preclearing two disks at the same time, and this server has 8GB of RAM. EDIT: Just thought of a reason that could explain the difference: since these are previously used disks I'm skipping the pre and post reads, and the pre-read may have higher memory utilization than the zeroing.
  10. Never tested, as I only have 1 Intel expander, but I would expect these speeds: using a PCIe 2.0 HBA the bottleneck is the PCIe bus, max speed ~110/125MB/s; using a PCIe 3.0 HBA the bottleneck is the SAS2 links, 2200 * 2 / 24 ≈ 185MB/s. This configuration is not possible, you can't connect the same expander to more than one controller.
  11. Well, since v6.1 allows trading slots I assumed it would also allow changing the slots used; guess I was wrong. In that case you'll have to do a new config. Don't forget to check "Parity is already valid".
  12. Yes, if you're running v6.1 there's no need to do a new config, though I never tried adding a new disk at the same time, so sort out the new order first, then add the precleared disk. There's also no problem if you're running v6.2 with single parity, but you do have to do a new config and trust parity; if you have dual parity you can't change the order without invalidating parity2. A parity check afterwards is not required but always a good idea.
  13. Recently (starting with 6.2beta?) if array auto start is set to yes, the array starts even when there's a missing disk. IMO this can be dangerous when upgrading a disk: I'm used to upgrading a disk while leaving auto start ON, unRAID would detect the missing disk and wouldn't start the array, and I'd just assign the new disk to begin the rebuild. Now, if, say, while upgrading a disk I bump a cable to another disk in a server with dual parity, it will start the array with 2 missing disks, so besides the upgrade I'll have to rebuild another disk. If you want to keep this behavior then please consider adding another option for enable auto start:
      Always
      Yes, if there aren't missing disks
      No
  14. Should be OK, I recently connected one of my UD disks to a different controller and it was detected as usual.
  15. No, I haven't had time yet. But when I do it will just be low, medium, high for all fans. If you need help with testing feel free to ask, I have the X9SCM-F and X9SCL-F.
  16. It works on a gradient, based on the low and high temps selected; I understand the temp used is the highest HDD one. Try installing Dynamix System Temp, I believe the sensors come from that.
  17. No, I haven't had time yet. But when I do it will just be low, medium, high for all fans. I'm not sure if I posted this earlier or not, but Supermicro confirmed to me that it's not possible to adjust individual fans or fan groups to a specific duty cycle. Dynamix auto fan is working for me, but I can only set one of the 2 groups, fan A or fans 1 to 4.
  18. Sorry, I read the entire thread but I'm still not sure if fan control is already working on the Supermicro X9 series, can you please confirm?
  19. Plugins -> Preclear Disks Beta (click icon)
  20. No need to install 7 first. johnnie, is that 1. "There's no need to install 7 first" or 2. "No, you need to install 7 first"? With the latest Win10 ISO there's no need to install 7 (or 8) first, it will accept your old key.
  21. Can I change my btrfs pool to RAID0 or other modes? Yes, for now it can only be changed manually; the new config will stick after a reboot, but note that changing the pool using the WebGUI, e.g., adding a device, will return the cache pool to the default RAID1 mode (note: starting with unRAID v6.3.3 the cache pool profile in use will be maintained when a new device is added using the WebGUI, except when another device is added to a single-device cache, in which case it will create a raid1 pool). You can add, replace or remove a device and maintain the profile in use by following the appropriate procedure in the FAQ (remove only if it doesn't go below the minimum number of devices required for that specific profile). It's normal to get a "Cache pool BTRFS too many profiles" warning during the conversion, just acknowledge it.
      These are the available modes (enter these commands in the balance window on the cache page and click balance**; note that if a command doesn't work, type it instead of copy/pasting from the forum, as sometimes extra characters are pasted and the balance won't work).
      ** Since v6.8.3 you can choose the profile you want from the drop-down window and it's not possible to type a custom command; all the commands below can still be used on the console.
      Single: requires 1 device only, and it's also the only way of using all the space from different-size devices, btrfs's way of doing a JBOD spanned volume; no performance gains vs. a single disk or RAID1.
      btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
      RAID0: requires 2 devices, best performance, no redundancy; if used with different-size devices only 2 x the capacity of the smallest device will be available, even if the reported space is larger.
      btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
      RAID1: default, requires at least 2 devices; to use the full capacity of a 2-device pool both devices need to be the same size.
      btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
      RAID10: requires at least 4 devices; to use the full capacity of a 4-device pool all devices need to be the same size.
      btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
      RAID5/6 still has some issues and should be used with care, though most serious issues have been fixed in the current kernel as of this edit (4.14.x).
      RAID5: requires at least 3 devices.
      btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/cache
      RAID6: requires at least 4 devices.
      btrfs balance start -dconvert=raid6 -mconvert=raid1 /mnt/cache
      Note about raid6**: because metadata is raid1 it can only handle 1 missing device, but it can still help with a URE on a second disk during a replace, since metadata uses a very small portion of the drive. You can use raid5/6 for metadata but it's currently not recommended because of the write hole issue; it can, for example, blow up the entire filesystem after an unclean shutdown.
      ** Starting with Unraid v6.9-beta1 btrfs includes support for raid1 with 3 and 4 copies, raid1c3 and raid1c4, so you can use raid1c3 for metadata to have the same redundancy as raid6 for data (but note that the pool won't mount if you downgrade to an earlier release before converting back to a profile supported by the older kernel):
      btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache
      Note: -d refers to the data, -m to the metadata; metadata should be left redundant, i.e., you can have a RAID0 pool with RAID1 metadata, since metadata takes up very little space and the added protection can be valuable.
      When changing pool mode, confirm that once the balance is done the data is all in the newly selected mode: check "btrfs filesystem df" on the cache page; this is how a RAID10 pool should look (see the illustrative output below). If there is more than one data mode displayed, run the balance again with the mode you want; for some unRAID releases and the included btrfs-tools, e.g., v6.1 and v6.2, it's normal to need to run the balance twice.
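Illustrative output for the "btrfs filesystem df" check mentioned in post 21, since the original screenshot isn't reproduced here. The sizes are made up; the point is that the Data, System and Metadata lines all show the target profile (RAID10 in this example), while GlobalReserve always shows as single and is normal:

Data, RAID10: total=928.00GiB, used=512.33GiB
System, RAID10: total=64.00MiB, used=112.00KiB
Metadata, RAID10: total=2.00GiB, used=1.15GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

If a second line such as "Data, RAID1: ..." still appears next to the RAID10 one, the conversion isn't complete and the balance should be repeated.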
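For the powerdown script discussed in posts 7 and 8, here is a minimal bash sketch of the missing "check for array inactivity" step. It assumes the array devices show up as md1, md2, ... in /proc/diskstats and reuses the mover/fstrim/powerdown paths from the script in post 7; adjust the device pattern and the 60-second interval as needed.

#!/bin/bash
# Sketch: run mover, trim the cache, wait until the array shows no reads/writes
# for a full 60-second interval, then power down.

/usr/local/sbin/mover
/sbin/fstrim -v /mnt/cache

# Snapshot the reads-completed and writes-completed counters for the md devices;
# if they are unchanged after 60 seconds there was no array I/O in that window.
previous=""
while true; do
    current=$(awk '$3 ~ /^md[0-9]+$/ {print $3, $4, $8}' /proc/diskstats)
    if [ -n "$previous" ] && [ "$current" = "$previous" ]; then
        break
    fi
    previous="$current"
    sleep 60
done

/usr/local/sbin/powerdown

For post 7's variant (wait until the disks are actually spun down rather than merely idle), the same loop could instead poll each array disk with hdparm -C and continue only when they all report "standby"; that is an alternative approach, not what the original script did.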
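On the nr_requests remark in post 1: it isn't exposed on the unRAID tunables page because it's a standard Linux block-layer setting, set per drive through sysfs. A rough sketch of how that workaround is typically applied (sda is just an example device, 8 is the value mentioned in the post; the change needs root and does not survive a reboot, so it would usually be added to a startup script such as the go file):

# Show the current value for one drive (the Linux default is typically 128)
cat /sys/block/sda/queue/nr_requests

# Set nr_requests to 8 for every sd* drive; narrow the pattern if you only
# want to touch the drives on the SAS2LP controller.
for q in /sys/block/sd*/queue/nr_requests; do
    echo 8 > "$q"
done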