Everything posted by JorgeB

  1. Can I change my btrfs pool to RAID0 or other modes? Yes. For now it can only be changed manually and the new config will stick after a reboot, but note that changing the pool using the WebGUI, e.g., adding a device, will return the cache pool to the default RAID1 mode (note: starting with unRAID v6.3.3 the cache pool profile in use is maintained when a new device is added using the WebGUI, except when another device is added to a single device cache, in which case it will create a raid1 pool). You can add, replace or remove a device and maintain the profile in use by following the appropriate procedure in this FAQ (remove only if it does not go below the minimum number of devices required for that specific profile). It's normal to get a "Cache pool BTRFS too many profiles" warning during the conversion, just acknowledge it.
These are the available modes (enter these commands in the balance window on the cache page and click balance**; if a command doesn't work, type it instead of copy/pasting it from the forum, sometimes extra characters are pasted and the balance won't run).
** Since v6.8.3 you choose the profile you want from the drop-down window and it's not possible to type a custom command; all the commands below can still be used on the console.
Single: requires 1 device only, it's also the only way of using all the space from different size devices, btrfs's way of doing a JBOD spanned volume, no performance gain vs a single disk or RAID1.
btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
RAID0: requires 2 devices, best performance, no redundancy; if used with different size devices only 2 x the capacity of the smallest device will be available, even if the reported space is larger.
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
RAID1: default, requires at least 2 devices; to use the full capacity of a 2 device pool they need to be the same size.
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache
RAID10: requires at least 4 devices; to use the full capacity of a 4 device pool they all need to be the same size.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache
RAID5/6 still have some issues and should be used with care, though most serious issues have been fixed in the kernel current at the time of this edit (4.14.x).
RAID5: requires at least 3 devices.
btrfs balance start -dconvert=raid5 -mconvert=raid1 /mnt/cache
RAID6: requires at least 4 devices.
btrfs balance start -dconvert=raid6 -mconvert=raid1 /mnt/cache
Note about raid6**: because metadata is raid1 it can only handle 1 missing device, but it can still help with a URE on a second disk during a replace, since metadata uses a very small portion of the drive. You can use raid5/6 for metadata, but it's currently not recommended because of the write hole issue; it can, for example, blow up the entire filesystem after an unclean shutdown.
** Starting with Unraid v6.9-beta1 btrfs includes support for raid1 with 3 and 4 copies, raid1c3 and raid1c4, so you can use raid1c3 for metadata to get the same redundancy as raid6 for data (but note that the pool won't mount if you downgrade to an earlier release before converting back to a profile supported by the older kernel):
btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache
Obs: -d refers to the data, -m to the metadata; metadata should be left redundant, i.e., you can have a RAID0 pool with RAID1 metadata, since metadata takes up very little space and the added protection can be valuable.
When changing the pool mode, confirm that once the balance is done all data is in the newly selected mode by checking "btrfs filesystem df" on the cache page (see the console example below for how a RAID10 pool should look). If more than one data mode is displayed, run the balance again with the mode you want; with some unRAID releases and the included btrfs-tools, e.g., v6.1 and v6.2, it's normal to need to run the balance twice.
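For reference, a rough sketch of the same check from the console after converting to RAID10; the command and output layout are standard btrfs-progs, but the sizes shown are only illustrative:
btrfs filesystem df /mnt/cache
# Every Data/Metadata/System line should show the target profile, e.g.:
# Data, RAID10: total=200.00GiB, used=150.00GiB
# System, RAID10: total=64.00MiB, used=16.00KiB
# Metadata, RAID10: total=2.00GiB, used=1.20GiB
# GlobalReserve, single: total=512.00MiB, used=0.00B
# (GlobalReserve is always "single" and can be ignored.)
# If a second Data line with another profile is still listed, e.g. "Data, RAID1", run the balance again.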
  2. I have two different size cache devices, why is the reported space incorrect? Old Unraid bug when using different size devices (fixed in v6.9-beta30): the usable size in the default 2 device RAID1 pool config is always equal to the smallest device, e.g., a 500GB and a 1TB device in RAID1 give 500GB of usable space. Although the free pool space is incorrectly reported, the cache floor setting should still work normally (at least for unRAID v6.2 and above), i.e., set it according to the real usable space. To see the usable space with 3 or more different size devices in any profile use the calculator below: http://carfax.org.uk/btrfs-usage/
  3. How do I replace/upgrade a pool disk?
BTRFS
A few notes:
-unRAID v6.4.1 or above required, but it didn't work correctly with some releases, so make sure you are using v6.10.3 or later.
-It is always a good idea to back up anything important on the current pool in case something unexpected happens.
-On a multi device pool you can only replace/upgrade one device at a time.
-You cannot directly replace/upgrade a device from a non redundant multi device btrfs pool, e.g., from a raid0 pool.
-You cannot directly replace/upgrade a single device btrfs pool; this procedure can only be used to replace a device from a redundant multi device btrfs pool.
-You cannot directly replace an existing device with a smaller one, only one of the same or larger size. You can add one or more smaller devices to a pool and, after it's done balancing, stop the array and remove the larger device(s) (one at a time if more than one), obviously only possible if the data still fits on the resulting smaller pool.
Procedure:
stop the array
if both devices are connected together: on the main page click on the pool device you want to replace/upgrade and select the new one from the drop down list (any data on the new device will be deleted)
if you can only connect the new device after removing the old one: shut down the server, disconnect the old device, connect the new one, and turn the server on; on the Main page the old device will show as missing, click on it and select the new device from the drop down list (any data on the new device will be deleted)
start the array
a btrfs device replace will begin; wait for pool activity to stop. The stop array button will be inhibited during the operation, and this can take some time depending on how much data is on the pool and how fast your devices are.
when the pool activity stops or the stop array button is available the replacement is done.
if the new device is larger than the one being replaced you might need to stop/re-start the array once the replacement is done for the new capacity to be available.
ZFS
A few notes:
-If the pool was not created using the GUI, just imported, make sure the device assignments correspond to the zpool status order, or it will cause issues during a device replacement.
-unRAID v6.12-rc2 or above required.
-It is always a good idea to back up anything important on the current pool in case something unexpected happens.
-Currently on a multi device pool you can only replace/upgrade one device at a time, even if the pool redundancy allows more.
-When upgrading devices, only when all the devices from that vdev have been replaced will you see the extra capacity available.
-You cannot directly replace/upgrade a device from a non redundant multi device zfs pool, e.g., from a raid0 pool.
-You cannot directly replace/upgrade a single device zfs pool; this procedure can only be used to replace a device from a redundant multi device zfs pool.
-You cannot directly replace an existing device with a smaller one, only one of the same or larger size.
Procedure if you can have both the old and new devices connected at the same time:
stop the array
on the main page click on the pool device you want to replace/upgrade and select the new one from the drop down list (any data on the new device will be deleted)
start the array
a 'zpool replace' will begin and the new device will be resilvered; progress can be seen by clicking on the first pool device and scrolling down to "pool status"
when done, check the "pool status" page again to confirm everything looks good
Procedure if you cannot have both the old and new devices connected at the same time:
stop the array
on the main page click on the pool device you want to replace/upgrade and unassign it, i.e., select 'no device'
start the array; that device will be offlined from the pool
shut down the server
remove the old device, install the new device
turn on the server; if array auto-start is enabled, stop the array
on the main page assign the new pool device
start the array
a 'zpool replace' will begin and the new device will be resilvered; progress can be seen by clicking on the first pool device and scrolling down to "pool status"
when done, check the "pool status" page again to confirm everything looks good (rough console equivalents are sketched below for reference)
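For reference only, these are roughly the console equivalents of the replace operations described above; the device names (/dev/sdX1 old, /dev/sdY1 new) and the pool name 'cache' are placeholders, and the GUI procedure is the recommended way:
btrfs replace start /dev/sdX1 /dev/sdY1 /mnt/cache   # btrfs: replace the old device with the new one
btrfs replace status /mnt/cache                      # btrfs: check replace progress
zpool replace cache /dev/sdX1 /dev/sdY1              # zfs: resilver the pool onto the new device
zpool status cache                                   # zfs: watch resilver progress and the final pool state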
  4. How do I remove a pool device?
BTRFS
A few notes:
-unRAID v6.4.1 or above required.
-It is always a good idea to back up anything important on the current pool in case something unexpected happens.
-You can only remove devices from redundant pools (raid1, raid5/6, raid10, etc.), and make sure to only remove one device at a time, i.e., you cannot remove 2 devices at the same time from any kind of pool; you can remove them one at a time after waiting for each balance to finish (as long as there's enough free space on the remaining devices).
-You cannot remove devices past the minimum number required for the profile in use, e.g., 3 devices for raid1c3/raid5, 4 devices for raid6/raid10, etc. The exception is removing a device from a two device raid1 pool; in this case Unraid converts the pool to the single profile.
Procedure:
stop the array
unassign the pool disk to remove
start the array (after checking the "Yes, I want to do this" box next to the start array button)
a balance and/or a device delete will begin depending on the profile used and the number of pool members remaining; wait for pool activity to stop. The stop array button will be inhibited during the operation, and this can take some time depending on how much data is on the pool and how fast your devices are.
when the pool activity stops or the stop array button is available the removal is done.
ZFS
A few notes:
-unRAID v6.12-rc2 or above required.
-It is always a good idea to back up anything important on the current pool in case something unexpected happens.
-Currently you can only remove devices from 3 or 4-way mirrored pools (raid1), and make sure to only remove one device at a time, i.e., if you start with a 4-way mirror you can remove two devices, making it a 2-way mirror, but they must be removed one at a time.
-Currently removing a complete vdev from a mirrored pool is not supported; removing a mirror from a 2-way mirror will work but leave the mirror degraded, i.e., a new replacement device should then be added.
Procedure:
stop the array
unassign the pool disk to remove
start the array (after checking the "Yes, I want to do this" box next to the start array button)
the removed device will be detached at array start
check the "pool status" page to confirm everything looks good (rough console equivalents are sketched below for reference)
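For reference only, rough console equivalents of the remove operations described above; the device name (/dev/sdX1) and the pool name 'cache' are placeholders, and the GUI procedure is the recommended way:
btrfs device remove /dev/sdX1 /mnt/cache   # btrfs: remove the device, data is rebalanced onto the remaining devices
btrfs filesystem show /mnt/cache           # btrfs: confirm the total devices count afterwards
zpool detach cache /dev/sdX1               # zfs: detach one device from a mirrored vdev
zpool status cache                         # zfs: confirm the pool looks good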
  5. How do I add a device to create a redundant pool?
BTRFS
A few notes:
-unRAID v6.4.1 or above required.
-The current pool disk filesystem must be BTRFS; you cannot create a multi device pool from an XFS or ReiserFS single device pool.
-It is always a good idea to back up anything important on the current pool in case something unexpected happens.
-When first creating a pool with 2 or more devices the initial profile used will be raid1; this can be changed later, and since v6.12 you can choose the initial profile.
-When a device is added to an existing pool the profile currently in use is maintained, e.g., if you have a two disk raid0 pool and add a third device it will become a three device raid0 pool in the end.
Procedure:
stop the array
change pool slots to 2 or more
assign the new device(s) to the pool
start the array - a balance will begin. The stop array button will be inhibited during the operation, and this can take some time depending on how much data is on the pool and how fast your devices are; progress can also be seen by clicking on the first pool device and scrolling down to "btrfs balance status"
when the balance is done or the stop array button is available the operation is complete; "btrfs balance status" should show "No balance found on '/mnt/<pool name>'", and check also that "btrfs filesystem show" lists the correct number of total devices.
ZFS
A few notes:
-unRAID v6.12-rc2 or above required.
-The current pool disk filesystem must already be ZFS.
-It is always a good idea to back up anything important on the current pool in case something unexpected happens.
-If the current pool is single device, in addition to adding just one device, you can also add two or three to create a 3-way or 4-way mirror.
-If the current pool is for example a 2-way mirror, you can add a single device to make a 3-way mirror and later add a 4th to make a 4-way mirror; if you add two devices to a 2-way mirror it will expand the pool by creating an additional vdev, becoming a two vdev 2-way mirror.
-To expand an existing pool you need to add another vdev of the same width, e.g.:
- two device 2-way mirror: you can add 2 more devices, and once that's done you can add 2 more, and so on
- four device raidz1 pool: you can add 4 more devices, and once that's done you can add 4 more, and so on
Procedure:
stop the array
change pool slots to 2 (or more)
assign the new device(s) to the pool
start the array - a resilver will begin; this can take some time depending on how much data is on the pool and how fast your devices are, and progress can be seen by clicking on the first pool device and scrolling down to "pool status"
when the resilver is done confirm everything looks good by checking the "pool status" page (rough console equivalents are sketched below for reference)
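For reference only, rough console equivalents of adding redundancy as described above; the device names and the pool name 'cache' are placeholders, and the GUI procedure is the recommended way:
btrfs device add /dev/sdY1 /mnt/cache                            # btrfs: add the new device to the pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache   # btrfs: then balance to the desired profile, raid1 here
btrfs filesystem show /mnt/cache                                 # btrfs: total devices should now match
zpool attach cache /dev/sdX1 /dev/sdY1                           # zfs: attach /dev/sdY1 as a mirror of the existing device /dev/sdX1
zpool add cache mirror /dev/sdZ1 /dev/sdW1                       # zfs: or expand the pool with an additional 2-way mirror vdev
zpool status cache                                               # zfs: watch the resilver and the final pool layout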
  6. Possibly the Marvell Virtualization issue.
  7. Love the concept of the file integrity plugin from Dynamix. I installed it and it's running, but there is a whole section called "disk verification tasks" with rows and columns of check boxes, and no instructions on what should be done, how to use it, or how to define a 'task'. Is there documentation or a sub-forum somewhere? Turn the HELP on.
  8. Are you using 6.2-beta? If yes use the new preclear beta plugin or patch the script.
  9. Low priority but when possible please adjust display so there's no need to scroll right to see all disks.
  10. I knew TRIM didn't work for me with an LSI controller, but didn't know it works with some models, thanks for the link.
  11. Looking at your motherboard manual, this is the problem: SAS3 is an onboard LSI controller. When we say onboard controller we mean the PCH ports; connect the SSD to one of those and TRIM should work.
  12. No, default BTRFS supports TRIM, you usually get that error when the controller doesn't support it; it looks like you're virtualizing unRAID, so maybe that's the cause (if you're using the mb controller). You're right, I do virtualize unRAID, and in that hardware the controllers are passed through completely to unRAID. All (to me) known/used hardware features (esp. the sleep function of HDDs) work properly, even all operations which change the boot USB stick, so no tricks used/necessary here. The SSD is on the same controller (SAS3 from the MB) as the other HDDs. Not a TRIM expert here, but my reading showed that the fstrim command is run on the mountpoints of filesystems and not on devices directly (a minimal example is shown below); well, I tried that too, but same result. Is the SAS3 controller an Intel PCH one or something else, like LSI? Some LSI controllers don't support TRIM, e.g., the 9211.
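For reference, a minimal example of running TRIM against a mountpoint rather than a device, assuming the pool is mounted at /mnt/cache:
fstrim -v /mnt/cache    # -v prints how many bytes were trimmed; it errors out if the controller/device doesn't support TRIM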
  13. No, default BTRFS supports TRIM, you usually get that error when the controller doesn't support it, looks like you're virtualizing unRAID, so maybe that's the cause (if you're using the mb controller).
  14. Correct, the beta plugin is stand-alone.
  15. It's confusing because of the many versions and because the original plugin is just a front-end for the script while the beta plugin is stand-alone. Working with v6.2-beta:
Edited original Preclear script (command line only)
Edited unofficial faster Preclear script (command line only)
Preclear plugin with either of the edited(*) scripts
Stand-alone beta Preclear plugin
* When editing the script don't forget that the plugin uses the script located in the /boot/config/plugins/preclear.disk/ directory.
  16. I could add my v6.2-beta servers, both auto and manual working. Request: Option to sort servers (or auto sort alphabetically).
  17. V1: ST8000AS0002 V2: ST8000AS0022 Differences: https://lime-technology.com/forum/index.php?topic=47509.0
  18. Possibly the Marvell virtualization issue: http://lime-technology.com/forum/index.php?topic=40683.0
  19. If you mean the Seagates, speeds are normal in a dual parity setup, same as with single parity.
  20. I have better results with speedtest-cli:
Ping - DL - UL
Browser: 3ms - 213.13 - 210.75
Plugin: 7ms - 208.58 - 50.99
stest-cli: 7ms - 208.30 - 203.80
stest-linux: 12ms - 205.12 - 170.15
  21. That's normal and OK, it can even change between reboots with same hardware, unRAID tracks disks by serial number.
  22. Same characteristics and chipset, they should perform identically.
  23. Similar: http://lime-technology.com/forum/index.php?topic=43026.0