
How to format only drives for a specific new pool



I'm working with 6.12-rc3.  I have a large, encrypted raidz2 ZFS pool, and the UI doesn't know how to handle it after I added mirrored NVMe ZIL and special vdevs, along with a striped L2ARC.  Yes, I know how they work and when (and when not) to use them with ZFS, so I'm hoping this stays on topic about the formatting question.  Anyway, the ZFS pool works exactly as I planned, but Unraid has no idea how to handle it when starting the array, so you have to import it manually each time you start the array.  After that it works fine, but the UI complains that the disks in the pool are unformatted.  Annoying, but not a show stopper until I tried to add another pool.  The problem now is that I can't format the disks in the new pool, because the format option wants to format ALL of the disks that it THINKS are unformatted, including the disks in my main ZFS pool.  Is there a way to manually format just the disks that make up a specific pool and leave the rest untouched?

 

Alternatively, has anyone had success fixing the UI after adding ZIL, special, and L2ARC to ZFS manually?


FWIW, and to help anyone else who gets into a weird state like this: I did find a workaround that allowed me to create the second pool I needed.  I temporarily renamed the pool configuration file under /boot/config/pools to [poolname].cfg.bak and rebooted.  This way, the disks that were previously part of that pool showed up as unused, and the UI let me format only the disks associated with the new pool.  Once they were formatted and everything was in good order, I stopped the array, put the pool cfg file back, and rebooted.  I still have the issue with the zpool requiring a manual import every time I reboot, but the newly added pool works fine.
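In case the exact console side of that workaround helps someone, it was roughly the following (mainpool is my pool name, so the config file is mainpool.cfg; adjust the filename for your pool):

   mv /boot/config/pools/mainpool.cfg /boot/config/pools/mainpool.cfg.bak
   reboot
   # start the array, format only the new pool's disks from the UI, then stop the array
   mv /boot/config/pools/mainpool.cfg.bak /boot/config/pools/mainpool.cfg
   reboot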


Attached is the diagnostic report after starting the array.  At the point where this was captured, the ZFS pool mainpool shows all 4 of the 20TB drives as unformatted, even though they are unlocked by LUKS.  From this point, if I go to the CLI and run zpool import mainpool, it brings up and mounts the zpool without issues.  From there, I just need to disable and re-enable VMs and everything works fine, but the UI never shows the pool correctly.
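For reference, the manual recovery after array start is just:

   zpool import mainpool     # import and mount the pool the UI failed to bring up
   zpool status mainpool     # confirm all vdevs show ONLINE

followed by toggling VMs off and back on in the UI.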

 

Here is the topology of the zfs mainpool after import from the command line:

 

  pool: mainpool
 state: ONLINE
  scan: scrub repaired 0B in 00:22:50 with 0 errors on Sun Apr 23 23:38:41 2023
config:

        NAME           STATE     READ WRITE CKSUM
        mainpool       ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            sde1       ONLINE       0     0     0
            sdf1       ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            sdg1       ONLINE       0     0     0
            sdh1       ONLINE       0     0     0
        special
          mirror-3     ONLINE       0     0     0
            nvme4n1p2  ONLINE       0     0     0
            nvme5n1p2  ONLINE       0     0     0
            nvme6n1p2  ONLINE       0     0     0
        logs
          mirror-2     ONLINE       0     0     0
            nvme4n1p1  ONLINE       0     0     0
            nvme5n1p1  ONLINE       0     0     0
            nvme6n1p1  ONLINE       0     0     0
        cache
          nvme4n1p3    ONLINE       0     0     0
          nvme5n1p3    ONLINE       0     0     0
          nvme6n1p3    ONLINE       0     0     0

errors: No known data errors

 

 


I'm only seeing the 4 mirror disks assigned to the pool; all of the devices need to be assigned, but you cannot just add them now.  The pool needs to be re-imported:

 

- export the pool first (see the command sketch after these steps)
- stop the array
- unassign the 4 pool devices
- start the array, then stop it again to reset the pool config
- assign all devices to the pool, including all the other vdevs
- start the array
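Assuming the pool is currently imported from your earlier manual import, the export step from the CLI would be something like:

   zpool export mainpool     # release the pool so Unraid can import it itself on array start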


I followed the steps mentioned and got pretty much the same results, except that I now have the 3 NVMe devices listed in the pool along with the 4 hard drives.  I can still manually import the pool after starting the array and it works, but starting the array with all 7 devices assigned doesn't import it during array startup.  I've attached the updated diagnostic report.

 

If necessary, I have the space to vacate the pool and rebuild it if there is a different process I need to follow.  Please note that the SLOG, SPECIAL, and CACHE vdevs point at partitions on the same 3 NVMe devices.  ZFS doesn't care, but I wanted to point it out in case it might have been overlooked.  Because these vdevs were added from the CLI after the pool was created from the web UI, they aren't encrypted with LUKS.
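For completeness, the vdevs were added from the CLI roughly like this (device and partition names match the zpool status output I posted above; treat this as a sketch rather than an exact transcript of my history):

   # 3-way mirror SLOG on partition 1 of each NVMe device
   zpool add mainpool log mirror nvme4n1p1 nvme5n1p1 nvme6n1p1
   # 3-way mirror special vdev on partition 2
   zpool add mainpool special mirror nvme4n1p2 nvme5n1p2 nvme6n1p2
   # striped L2ARC on partition 3
   zpool add mainpool cache nvme4n1p3 nvme5n1p3 nvme6n1p3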

 

If I need to rebuild the pool, I can vacate the data and do that.  I would just need to know what steps to follow to make the UI happy with the resulting pool.

 

 

 


Meaning *ONLY* on partition 1?  The three NVMe devices being used each have three partitions: partition 1 on each is combined into a 3-way mirror vdev for SLOG, partition 2 on each is part of a 3-way mirror vdev for SPECIAL, and partition 3 on each is joined as a stripe vdev for persistent L2ARC.  This is definitely something that should be considered for support in future releases; the fault-tolerance recommendations for SLOG and SPECIAL, along with the extremely high IOPS and low latency of PCIe 4.0 x4 NVMe, make this model of using partitions instead of entire physical devices a lot more practical.
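For anyone curious about the partitioning itself, something along these lines with sgdisk produces the three partitions on each NVMe device (the sizes below are placeholders, not the ones I actually used):

   # repeat for nvme5n1 and nvme6n1; partition sizes are illustrative only
   sgdisk -n 1:0:+16G  -c 1:slog    /dev/nvme4n1    # partition 1: SLOG mirror member
   sgdisk -n 2:0:+200G -c 2:special /dev/nvme4n1    # partition 2: special vdev mirror member
   sgdisk -n 3:0:0     -c 3:l2arc   /dev/nvme4n1    # partition 3: L2ARC member, remaining space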

 

I'm curious: how exactly is the zpool import being called on the back end such that this part of it fails?  I can understand the UI getting confused about how to present it, but array start doesn't successfully import the pool.

 

 

4 minutes ago, Keliaxx said:

Meaning *ONLY* on partition 1?

Correct.

 

4 minutes ago, Keliaxx said:

I'm curious: how exactly is the zpool import being called on the back end such that this part of it fails?  I can understand the UI getting confused about how to present it, but array start doesn't successfully import the pool.

I can't answer that, but Unraid cannot just do a generic zpool import.  I know that in the future ZFS on partition #2 is expected to be supported, to allow importing pools created on TrueNAS; I don't know about this case.  You can always create a feature request.

