Report Comments posted by jpowell8672

  1. Version 6.12.0-rc3 2023-04-14

    (This is a consolidated change log vs. Unraid OS 6.11)

    Upgrade notes

    If you created any zpools using 6.12.0-beta5, please erase those pools and recreate them.

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool. In addition, you may format any data device in the unRAID array with a single-device ZFS file system.

     

    We are splitting full ZFS implementation across two Unraid OS releases. Initial support in this release includes:

    Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4-way mirror in a mirror vdev. Multiple vdev groups.

    Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear the partition table.

    Support replacing a single missing device with a new device of the same or larger size.

    Support scheduled trimming of ZFS pools.

    Support pool rename.

    Pool names must begin with a lowercase letter and may only contain lowercase letters, digits, underscores and dashes. Pool names must not end with a digit.

    Non-root vdevs cannot be configured in this release; however, they can be imported. Note: imported hybrid pools may not be expanded in this release.

    Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.

     

    A ZFS pool has three variables:

    profile - the root data organization: raid0, mirror (up to 4-way), raidz1, raidz2, raidz3

    width - the number of devices per root vdev

    groups - the number of root vdevs in the pool

     

    At the time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.
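
    As an illustrative sketch only (the webGUI handles this for you; the pool name 'tank' and the device names are hypothetical), a 6-device pool configured as profile=raidz1, width=3, groups=2 corresponds to a zpool layout like:

      zpool create tank \
        raidz1 sdb sdc sdd \
        raidz1 sde sdf sdg
      # profile = raidz1, width = 3 devices per root vdev, groups = 2 root vdevs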

    Special treatment for root single-vdev mirrors:

    A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.

    A 2-device mirror can be increased to a 3-device mirror by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device.
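
    For reference only (not the GUI workflow; the pool and device names are hypothetical), growing a mirror this way is conceptually what ZFS does with 'zpool attach':

      zpool attach cache sdb sdc   # mirror the existing device sdb with the new device sdc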

     

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.
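
    Conceptually (again a hypothetical sketch; the webGUI issues the equivalent operation), extending the 'tank' example above with a third raidz1 group of 3 devices would look like:

      zpool add tank raidz1 sdh sdi sdj   # adds one more width-3 raidz1 root vdev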

     

    Pools created with the steini84 plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).
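
    If you want to see what ZFS itself detects before assigning devices, running the following from a console lists pools that are available for import along with their vdev layout (it only lists; it does not import anything):

      zpool import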

     

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     

    Autotrim can be configured as on or off (except for single-device ZFS volumes in the unRAID array).
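
    For reference, these map to the standard ZFS controls (pool name hypothetical): the autotrim pool property for continuous trimming, and 'zpool trim' for a one-off or scheduled trim:

      zpool set autotrim=on tank   # trim freed blocks continuously
      zpool trim tank              # run a manual trim, suitable for scheduling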

     

    Compression can be configured as on or off, where on selects lz4. A future update will permit specifying other algorithms/levels.
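
    Under the hood this corresponds to the standard ZFS compression property (pool name hypothetical), with on mapping to lz4:

      zfs set compression=lz4 tank   # affects data written after the property is set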

     

    When creating a new ZFS pool you may choose zfs - encrypted, which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via the webGUI, including auto-adjustment according to memory pressure, e.g., VM start/stop.
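
    As an example (the 8 GiB cap is purely illustrative), a custom override placed on the flash drive could contain:

      # config/modprobe.d/zfs.conf -- cap the ZFS ARC at 8 GiB
      options zfs zfs_arc_max=8589934592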

     

    Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories.
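
    You can confirm this from a console (pool and share names hypothetical); top-level shares appear as datasets rather than plain directories:

      zfs list -r tank   # e.g. tank, tank/appdata, tank/domains listed as datasets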

  2. LimeTech - Unraid has been hard at work on the difficult challenge of fixing, adding, updating & upgrading everything people reported, plus what ranked high on our lists. What a great bunch of people & community this company is; let's hope they get it all worked out soon so they can take a well-earned, relaxed break to enjoy the upcoming Holidays with their Family & Friends with peace, love and no worries. Thank You and Happy Holidays!

  3. 58 minutes ago, Marshalleq said:

    Multiple hard drives across both onboard and PCIe-based controllers, two completely dead SSDs and an additional brand new SSD on top of that with read errors, mostly occurring post reboot. It may or may not be related to this bug; however, it seems to me it is related to this version. No idea why. Yeah, it could be hardware, but again it 'coincidentally' arrived with this version.

    I have been having the same problem, so add me to the list.