
JorgeB

Everything posted by JorgeB

  1. If you're doing the other one I would still prefer it; it avoids the server being on for an extra 15 minutes unnecessarily, which is the shortest spin-down delay you can set.
  2. Thanks, the easy way would work for me, but like I said I'm totally clueless with scripts, so how do I add a line to check if they're all spun down? This is my current script:
/usr/local/sbin/mover
/sbin/fstrim -v /mnt/cache
/usr/bin/sleep 180
/usr/local/sbin/powerdown
I'd like to replace the sleep line with the check for spun-down disks.
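A minimal sketch of that check, assuming the array disks are the /dev/sd? devices and that hdparm is available on the unRAID console (SSDs and the flash drive are skipped via the rotational flag, since they may never report standby); it is only an illustration, not a tested replacement for the script above:

#!/bin/bash
/usr/local/sbin/mover
/sbin/fstrim -v /mnt/cache

# return 0 only when every rotational disk reports "standby" to hdparm -C
all_spun_down() {
    local d
    for d in /dev/sd[a-z]; do
        # skip non-rotational devices (SSDs); unknown devices are ignored below
        [ "$(cat /sys/block/${d##*/}/queue/rotational 2>/dev/null)" = "1" ] || continue
        hdparm -C "$d" 2>/dev/null | grep -q "active/idle" && return 1
    done
    return 0
}

# check once a minute until all disks are spun down, then power off
until all_spun_down; do
    sleep 60
done

/usr/local/sbin/powerdown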
  3. I'm trying to make a very basic script for one of my servers, and wonder if anyone can help as I'm clueless with them. This is the script I need:
1 - run mover
2 - trim cache
3 - check for array inactivity (monitor reads/writes?), if inactive for say 60 seconds go to 4
4 - powerdown
1, 2 and 4 are easy, any ideas how I can do 3?
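For step 3, a minimal sketch under the assumption that "inactive" can be approximated by the read/write counters in /proc/diskstats staying unchanged for 60 seconds across all sd* devices:

#!/bin/bash
# snapshot the reads-completed and writes-completed counters for every sd* device
snapshot() {
    awk '$3 ~ /^sd[a-z]+$/ {print $3, $4, $8}' /proc/diskstats
}

before=$(snapshot)
sleep 60
after=$(snapshot)

# keep waiting until a full 60 second window passes with no counter changes
while [ "$before" != "$after" ]; do
    before=$after
    sleep 60
    after=$(snapshot)
done

/usr/local/sbin/powerdown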
  4. I was comparing your findings on one of my servers and it appears the preclear beta uses much less RAM than on yours; I have the array started doing a dual disk upgrade and preclearing two disks at the same time, and this server has 8GB of RAM. EDIT: Just thought of a reason that can explain the difference: since these are previously used disks I'm skipping the pre and post reads, so the pre-read can have higher memory utilization than the zeroing.
  5. Never tested as I only have 1 Intel expander, but I would expect these speeds:
using a PCIe 2.0 HBA the bottleneck is the PCIe bus, max speed ~110/125MB/s
using a PCIe 3.0 HBA the bottleneck is the SAS2 links, 2200 * 2 / 24 = ~185MB/s
This configuration is not possible, you can't connect the same expander to more than one controller.
  6. Well, since v6.1 allows trading slots I assumed it would also allow changing the slots used, guess I was wrong; in that case you'll have to do a new config. Don't forget to check "Parity is already valid".
  7. Yes, if you're running v6.1 there's no need to do a new config, though I've never tried adding a new disk at the same time, so sort out the new order first, then add the precleared disk. There's also no problem if you're running v6.2 with single parity, but you do have to do a new config and trust parity; if you have dual parity you can't change the order without invalidating parity2. A parity check afterwards is not required but always a good idea.
  8. Recently (starting with 6.2beta?) if array auto start is set to yes the array starts even when there's a missing disk. IMO this can be dangerous when upgrading a disk: I'm used to upgrading a disk while leaving auto start ON, unRAID would detect a missing disk and wouldn't start the array, and I'd just assign the new disk to begin the rebuild. Now, if say while upgrading a disk I bump a cable to another disk in a server with dual parity, it will start the array with 2 missing disks, so besides the upgrade I'll have to rebuild another disk. If you want to keep this behavior then please consider adding another option for array auto start:
Always
Yes, if there aren't missing disks
No
  9. Should be OK, I recently connected one of my UD disks to a different controller and it was detected as usual.
  10. No, I haven't had time yet. But when I do it will just be low, medium, high for all fans. If you need help with testing feel free to ask, I have the X9SCM-F and X9SCL-F.
  11. It works on a gradient, based on the low and high temps selected; I understand the temp used is the highest HDD one. Try installing Dynamix System Temp, I believe the sensors come from that.
  12. No, I haven't had time yet. But when I do it will just be low, medium, high for all fans. I'm not sure if I posted this earlier or not, but Supermicro confirmed to me that it's not possible to adjust the fans individually or fan groups to a certain duty cycle. Dynamix auto fan is working for me, but I can only set one of the 2 groups, fan A or fans 1 to 4.
  13. Sorry, I read the entire thread but I'm still not sure if fan control is already working on the Supermicro X9 series, can you please confirm?
  14. Plugins -> Preclear Disks Beta (click icon)
  15. No need to install 7 first. johnnie, is that 1. "There's no need to install 7 first" or 2. "No, you need to install 7 first"? With the latest Win10 ISO there's no need to install 7 (or 8) first, it will accept your old key.
  16. Can I change my btrfs pool to RAID0 or other modes? Yes, for now it can only be changed manually; the new config will stick after a reboot, but note that changing the pool using the WebGUI, e.g., adding a device, will return the cache pool to the default RAID1 mode (note: starting with unRAID v6.3.3 the cache pool profile in use will be maintained when a new device is added using the WebGUI, except when another device is added to a single device cache, in that case it will create a raid1 pool). You can add, replace or remove a device and maintain the profile in use by following the appropriate procedure on the FAQ (remove only if it does not go below the minimum number of devices required for that specific profile). It's normal to get a "Cache pool BTRFS too many profiles" warning during the conversion, just acknowledge it.
These are the available modes (enter these commands on the cache page balance window and click balance**; note that if the command doesn't work, type it instead of using copy/paste from the forum, sometimes extra characters are pasted and the balance won't work).
** Since v6.8.3 you can choose the profile you want from the drop-down window and it's not possible to type a custom command; all the commands below can still be used on the console:
Single: requires 1 device only, it's also the only way of using all space from different size devices, btrfs's way of doing a JBOD spanned volume, no performance gains vs single disk or RAID1
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
RAID0: requires 2 devices, best performance, no redundancy, if used with different size devices only 2 x capacity of the smallest device will be available, even if reported space is larger.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
RAID1: default, requires at least 2 devices, to use full capacity of a 2 device pool they all need to be the same size.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
RAID10: requires at least 4 devices, to use full capacity of a 4 device pool they all need to be the same size.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
RAID5/6 still has some issues and should be used with care, though the most serious issues have been fixed on the current kernel as of this edit (4.14.x).
RAID5: requires at least 3 devices.
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/cache
RAID6: requires at least 4 devices.
btrfs balance start -dconvert=raid6 -mconvert=raid1 /mnt/cache
Note about raid6**: because metadata is raid1 it can only handle 1 missing device, but it can still help with a URE on a second disk during a replace, since metadata uses a very small portion of the drive. You can use raid5/6 for metadata but it's currently not recommended because of the write hole issue, it can for example blow up the entire filesystem after an unclean shutdown.
** Starting with Unraid v6.9-beta1 btrfs includes support for raid1 with 3 and 4 copies, raid1c3 and raid1c4, so you can use raid1c3 for metadata to have the same redundancy as raid6 for data (but note that the pool won't mount if you downgrade to an earlier release before converting back to a profile supported by the older kernel):
btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache
Note: -d refers to the data, -m to the metadata; metadata should be left redundant, i.e., you can have a RAID0 pool with RAID1 metadata, metadata takes up very little space and the added protection can be valuable.
When changing pool mode, confirm that once the balance is done the data is all in the newly selected mode, check "btrfs filesystem df" on the cache page (this is how a RAID10 pool should look). If there is more than one data mode displayed, run the balance again with the mode you want; for some unRAID releases and the included btrfs-tools, e.g., v6.1 and v6.2, it's normal to need to run the balance twice.
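The same check can be run from the console; a minimal sketch assuming the pool is mounted at /mnt/cache (adjust the path if your pool has a different name):

btrfs filesystem df /mnt/cache
# A fully converted RAID10 pool should list a single data profile, e.g.:
#   Data, RAID10: total=..., used=...
#   Metadata, RAID10: total=..., used=...
# If a second "Data, ..." line with the old profile is still shown, run the balance again.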
  17. I have two different size cache devices, why is the reported space incorrect? Old Unraid bug when using different size devices (fixed on v6.9-beta30): the usable size in the default 2 device RAID1 pool config is always equal to the smallest device. Although free pool space is incorrectly reported the cache floor setting should still work normally (at least for unRAID v6.2 and above), i.e., set it according to the real usable space. To see the usable space with 3 or more different size devices in any profile use the calculator below: http://carfax.org.uk/btrfs-usage/
  18. How do I replace/upgrade a pool disk?
BTRFS
A few notes:
-unRAID v6.4.1 or above required, but it didn't work correctly with some releases, so make sure you are using v6.10.3 or later.
-Always a good idea to backup anything important on the current pool in case something unexpected happens.
-On a multi device pool you can only replace/upgrade one device at a time.
-You cannot directly replace/upgrade a device from a non redundant multi device btrfs pool, e.g., from a raid0 pool.
-You cannot directly replace/upgrade a single device btrfs pool, this procedure can only be used to replace a device from a redundant multi device btrfs pool.
-You cannot directly replace an existing device with a smaller one, only one of the same or larger size; you can add one or more smaller devices to a pool and after it's done balancing stop the array and remove the larger device(s) (one at a time if more than one), obviously only possible if the data still fits on the resulting smaller pool.
Procedure:
stop the array
if both devices are connected together: on the main page click on the pool device you want to replace/upgrade and select the new one from the drop down list (any data on the new device will be deleted)
if you can only connect the new device after removing the old one: shutdown the server, disconnect the old device, connect the new one, turn the server on, on the main page the old device will show as missing, click on it and select the new device from the drop down list (any data on the new device will be deleted)
start the array
a btrfs device replace will begin, wait for pool activity to stop, the stop array button will be inhibited during the operation, this can take some time depending on how much data is on the pool and how fast your devices are.
when the pool activity stops or the stop array button is available the replacement is done.
if the new device is larger than the one being replaced you might need to stop/re-start the array once the replacement is done for the new capacity to be available.
ZFS
A few notes:
-if the pool was not created using the GUI, just imported, make sure the device assignments correspond to the zpool status order, or it will cause issues during a device replacement.
-unRAID v6.12-rc2 or above required.
-Always a good idea to backup anything important on the current pool in case something unexpected happens.
-Currently on a multi device pool you can only replace/upgrade one device at a time, even if the pool redundancy allows more.
-when upgrading devices, only when all the devices from that vdev have been replaced will you see the extra capacity available.
-You cannot directly replace/upgrade a device from a non redundant multi device zfs pool, e.g., from a raid0 pool.
-You cannot directly replace/upgrade a single device zfs pool, this procedure can only be used to replace a device from a redundant multi device zfs pool.
-You cannot directly replace an existing device with a smaller one, only one of the same or larger size.
Procedure if you can have both the old and new devices connected at the same time:
stop the array
on the main page click on the pool device you want to replace/upgrade and select the new one from the drop down list (any data on the new device will be deleted)
start the array
a 'zpool replace' will begin and the new device will be resilvered
progress can be seen by clicking on the first pool device and scrolling down to "pool status"
when done again check the "pool status" page to confirm everything looks good
Procedure if you cannot have both the old and new devices connected at the same time:
stop the array
on the main page click on the pool device you want to replace/upgrade and unassign it, select 'no device'
start the array, that device will be offlined from the pool
shutdown the server
remove the old device, install the new device
turn on the server, if array auto-start is enabled stop the array
on the main page assign the new pool device
start the array
a 'zpool replace' will begin and the new device will be resilvered
progress can be seen by clicking on the first pool device and scrolling down to "pool status"
when done again check the "pool status" page to confirm everything looks good
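For reference, progress can also be checked from the console; a minimal sketch assuming the btrfs pool is mounted at /mnt/cache and the zfs pool is named cache (adjust to your pool name):

# btrfs: shows how far along an ongoing device replace is
btrfs replace status /mnt/cache

# zfs: shows resilver progress and overall pool health
zpool status cache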
  19. How do I remove a pool device?
BTRFS
A few notes:
-unRAID v6.4.1 or above required.
-Always a good idea to backup anything important on the current pool in case something unexpected happens.
-You can only remove devices from redundant pools (raid1, raid5/6, raid10, etc) but make sure to only remove one device at a time, i.e., you cannot remove 2 devices at the same time from any kind of pool, you can remove them one at a time after waiting for each balance to finish (as long as there's enough free space on the remaining devices).
-You cannot remove devices past the minimum number required for the profile in use, e.g., 3 devices for raid1c3/raid5, 4 devices for raid6/raid10, etc; the exception is removing a device from a two device raid1 pool, in this case Unraid converts the pool to the single profile.
Procedure:
stop the array
unassign the pool disk to remove
start the array (after checking the "Yes, I want to do this" box next to the start array button)
a balance and/or a device delete will begin depending on the profile used and the number of pool members remaining, wait for pool activity to stop, the stop array button will be inhibited during the operation, this can take some time depending on how much data is on the pool and how fast your devices are.
when the pool activity stops or the stop array button is available the removal is done.
ZFS
A few notes:
-unRAID v6.12-rc2 or above required.
-Always a good idea to backup anything important on the current pool in case something unexpected happens.
-Currently you can only remove devices from 3 or 4-way mirrored pools (raid1) but make sure to only remove one device at a time, i.e., if you start with a 4-way mirror, you can remove two devices making it a 2-way mirror, but they must be removed one at a time.
-Currently removing a complete vdev from a mirrored pool is not supported; removing a mirror from a 2-way mirror will work but leave the mirror degraded, i.e., a new replacement device should then be added.
Procedure:
stop the array
unassign the pool disk to remove
start the array (after checking the "Yes, I want to do this" box next to the start array button)
the removed device will be detached at array start.
check the "pool status" page to confirm everything looks good
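To double check the result from the console, a minimal sketch assuming the btrfs pool is mounted at /mnt/cache and the zfs pool is named cache:

# btrfs: "Total devices" should match the new number of pool members
btrfs filesystem show /mnt/cache

# zfs: the removed device should no longer be listed
zpool status cache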
  20. How do I add a device to create a redundant pool?
BTRFS
A few notes:
-unRAID v6.4.1 or above required.
-Current pool disk filesystem must be BTRFS, you cannot create a multi device pool from an XFS or ReiserFS single device pool.
-Always a good idea to backup anything important on the current pool in case something unexpected happens.
-When first creating a pool with 2 or more devices the initial profile used will be raid1, this can be changed later; since v6.12 you can choose the initial profile.
-When a device is added to an existing pool the profile currently in use is maintained, e.g., if you have a two disk raid0 pool and add a third device it will become a three device raid0 pool in the end.
Procedure:
stop the array
change pool slots to 2 or more
assign the new device(s) to the pool
start the array - a balance will begin, the stop array button will be inhibited during the operation, this can take some time depending on how much data is on the pool and how fast your devices are, progress can also be seen by clicking on the first pool device and scrolling down to "btrfs balance status"
when the balance is done or the stop array button is available the operation is done, "btrfs balance status" should show "No balance found on '/mnt/<pool name>'", check also that the "btrfs filesystem show" total devices are correct (see the console sketch below).
ZFS
A few notes:
-unRAID v6.12-rc2 or above required.
-Current pool disk filesystem must already be ZFS.
-Always a good idea to backup anything important on the current pool in case something unexpected happens.
-if the current pool is single device, in addition to adding just one device, you can also add two or three to create a 3-way or 4-way mirror.
-if the current pool is for example a 2-way mirror, you can add a single device to make a 3-way mirror and later add a 4th to make a 4-way mirror; if you add two devices to a 2-way mirror it will expand the pool by creating an additional vdev, becoming a two vdev 2-way mirror.
-to expand an existing pool you need to add another vdev of the same width, e.g.:
- two device 2-way mirror: you can add 2 more devices, once that's done you can add 2 more, and so on
- four device raidz1 pool: you can add 4 more devices, once that's done you can add 4 more, and so on
Procedure:
stop the array
change pool slots to 2 (or more)
assign the new device(s) to the pool
start the array - a resilver will begin, this can take some time depending on how much data is on the pool and how fast your devices are, progress can be seen by clicking on the first pool device and scrolling down to "pool status"
when the resilver is done confirm everything looks good by checking the "pool status" page.
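The checks mentioned above can also be run from the console; a minimal sketch assuming the pool is mounted at /mnt/cache and, for zfs, that the pool is named cache:

# should report "No balance found on '/mnt/cache'" once the balance has finished
btrfs balance status /mnt/cache

# "Total devices" should now include the newly added device(s)
btrfs filesystem show /mnt/cache

# zfs equivalent: confirm the new mirror/vdev layout and pool health
zpool status cache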
  21. Possibly the Marvell Virtualization issue.
  22. Love the concept of the file integrity plugin from Dynamix. I installed it and it's running, but there is a whole section called "disk verification tasks" with rows and columns of check boxes, and no instructions on what should be done, how to use it or how to define a 'task'. Is there documentation or a sub-forum somewhere? Turn the HELP on.