• [6.12-rc3] zfs pool won't mount after upgrade


    skoj
    • Closed Minor

    Pool created on rc1 and working fine on rc2; after upgrading to rc3 I get this upon array start:

    (screenshot attached: error shown at array start, 2023-04-14 19:33)

     

    Reverted back to rc2 and the pool mounted without trouble.

    bender-diagnostics-20230414-1937.zip





    Recommended Comments

    Version 6.12.0-rc3 2023-04-14

    (This is consolidated change log vs. Unraid OS 6.11)

    Upgrade notes

    If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate them.

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool. In addition you may format any data device in the unRAID array with a single-device ZFS file system.

     

    We are splitting full ZFS implementation across two Unraid OS releases. Initial support in this release includes:

    Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4-way mirror in a mirror vdev. Multiple vdev groups.

    Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear its partition table (see the sketch after this list).

    Support replacing a single missing device with a new device of the same or larger size.

    Support scheduled trimming of ZFS pools.

    Support pool rename.

    Pool names must begin with a lowercase letter and contain only lowercase letters, digits, underscores, and dashes. Pool names must not end with a digit.

    Non-root vdevs cannot be configured in this release; however, they can be imported. Note: imported hybrid pools may not be expanded in this release.

    Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.
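
    For reference, the device-removal, trim, and rename operations listed above are driven from the webGUI, but the roughly equivalent console commands look like the sketch below (the pool name 'tank' and the device name are placeholders, not taken from this report):

        # Clear the partition table of a removed device that is still present in the server
        wipefs -a /dev/sdx1

        # Trim a pool manually (the webGUI schedules this for you)
        zpool trim tank

        # Rename a pool by exporting it and importing it under a new name
        zpool export tank
        zpool import tank newtank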

     

    A ZFS pool has three variables:

    profile - the root data organization: raid0, mirror (up to 4-way), raidz1, raidz2, raidz3

    width - the number of devices per root vdev

    groups - the number of root vdevs in the pool
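
    As a rough illustration (pool and device names below are made up), a pool with profile raidz1, width 3, and groups 2 corresponds to a zpool laid out like this:

        # profile=raidz1, width=3, groups=2 -> two 3-device raidz1 root vdevs
        zpool create tank \
            raidz1 /dev/sdb1 /dev/sdc1 /dev/sdd1 \
            raidz1 /dev/sde1 /dev/sdf1 /dev/sdg1

    The webGUI builds the equivalent layout for you; the command is shown only to make the three variables concrete.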

     

    At the time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.

    Special treatment for root single-vdev mirrors:

    A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.

    A 2-device mirror can be increased to a 3-device mirror by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device.
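
    On the console the same growth is performed with 'zpool attach', one new device per call; the webGUI simply batches the equivalent steps into one operation. A minimal sketch, assuming a pool named 'tank' and placeholder device names:

        # Convert a single-device pool into a 2-way mirror
        zpool attach tank /dev/sdb1 /dev/sdc1

        # Grow the 2-way mirror to a 3-way mirror by attaching another device
        zpool attach tank /dev/sdb1 /dev/sdd1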

     

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.
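
    In zpool terms, adding a root vdev with the same profile and width looks roughly like this (hypothetical pool and device names):

        # Existing pool uses 3-wide raidz1 vdevs; add another matching root vdev
        zpool add tank raidz1 /dev/sdh1 /dev/sdi1 /dev/sdj1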

     

    Pools created with the steini84 plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

     

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     

    Autotrim can be configured as on or off (except for single-device ZFS volumes in the unRAID array).
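
    The webGUI toggle maps to the standard ZFS pool property; for reference (the pool name is a placeholder):

        # Enable or disable automatic trim on a pool
        zpool set autotrim=on tank
        zpool get autotrim tank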

     

    Compression can be configured as on or off, where on selects lz4. A future update will permit specifying other algorithms/levels.
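
    Again this maps to the usual dataset property; a minimal sketch with a placeholder pool name:

        # 'on' corresponds to lz4 in this release
        zfs set compression=lz4 tank
        zfs get compression tank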

     

    When creating a new ZFS pool you may choose zfs - encrypted, which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.
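
    Unraid sets up the encryption itself, but conceptually the result is ZFS layered on top of a LUKS mapping, roughly like this sketch (device and mapper names are placeholders):

        # Device-level encryption: create a LUKS container, then build the pool on the mapped device
        cryptsetup luksFormat /dev/sdb1
        cryptsetup open /dev/sdb1 encrypted_sdb
        zpool create tank /dev/mapper/encrypted_sdb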

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via the webGUI, including auto-adjustment according to memory pressure, e.g., VM start/stop.
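
    For example, on a system with 32 GiB of RAM the auto-generated file would contain something like the line below (the 4 GiB figure is simply 1/8 of that assumed total, expressed in bytes); a custom 'config/modprobe.d/zfs.conf' on the flash drive overrides it:

        # /etc/modprobe.d/zfs.conf -- cap the ZFS ARC at 4 GiB
        options zfs zfs_arc_max=4294967296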

     

    Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories.
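
    In other words, creating a share on a ZFS pool amounts to creating a child dataset rather than a plain directory. For a hypothetical share 'appdata' on a pool named 'tank', the equivalent would be:

        # The share exists as a dataset, not a directory
        zfs create tank/appdata
        zfs list -r tank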


    That pool is not importing because it has a faulted device:


     

       pool: zfsone
         id: 10368389039804548151
      state: DEGRADED
    status: One or more devices contains corrupted data.
     action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
     config:
    
        zfsone                   DEGRADED
          mirror-0               DEGRADED
            sdp1                 ONLINE
            2956466455825155263  FAULTED  corrupted data
            sdq1                 ONLINE

     

    It should import if you change the pool slots from 2 to 3, but you should fix the pool by detaching or replacing the faulted device.
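
    Once the pool is imported, the faulted member can be detached or replaced from the console. A hedged sketch using the GUID from the status output above (the replacement device name is a placeholder):

        # Drop the faulted member from the mirror...
        zpool detach zfsone 2956466455825155263

        # ...or replace it with a new device of the same or larger size
        zpool replace zfsone 2956466455825155263 /dev/sdr1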

     

     


    Thanks JorgeB.

     

    Definitely self-inflicted. I assumed it was a bug with rc3 since the pool mounts in rc2.

     

    For background, I created the pool with 2 drives and did some drive-failure testing: remove a drive, add it back in. There was never a third drive involved. I must have done something that caused the Unraid and ZFS pool configurations to go out of sync.

     

    I detached the failed drive from the pool but got the same error with rc3. Will delete pool & recreate.

     

    (screenshots attached: 2023-04-15 08:52 and 08:53)

    16 hours ago, skoj said:

    I detached the failed drive from the pool but got the same error with rc3. Will delete pool & recreate.

    It's still being detected as a 3-way mirror. If you want, you can just re-import the pool, no need to destroy it: unassign both pool members, start the array, stop the array, re-assign both pool members, and it should be re-imported correctly.


    Had a similar issue and tried all sorts of zpool import commands and more with no success.

    All my array and pool drives are zfs formatted.

    After many tries from the console and several reboots, the two drives that each had an SSD with split cache and log partitions were the ones available for import using "zpool import". They still would not mount when starting the array, and the log showed that the file system was not recognized. I'm not sure whether having cache and/or log partitions on the SSDs caused the issues.
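
    For anyone retracing these steps, the usual console sequence is to list what ZFS considers importable and then import by name; a minimal sketch (the pool name is a placeholder):

        # List pools that are available for import
        zpool import

        # Search specific device paths if nothing shows up
        zpool import -d /dev/disk/by-id

        # Import a pool by name (-f only if it was not cleanly exported)
        zpool import -f mypool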

     

    I found a temporary workaround, however.

    I went into Tools->New Config and selected all drives.

    I did a complete shutdown, started the host, and then unassigned the pool drives.

    Note: I'm not sure if it was necessary, but I initially went into each drive and changed the file system from "Auto" to "zfs" before starting the following process:

    To start the array, I had to check the box that deletes the cache.

    I started the array and all my array ZFS drives worked.

    I stopped the array and reassigned the pool ZFS drives, and they also mounted properly.

     

    I am on 6.12 RC8.

     

    7 hours ago, alejoh90 said:

    Had a similar issue and tried all sorts of zpool import commands and more with no success.

    If you still have the diagnostics, please post them so we can see what the problem was.




