How to import existing ZFS pools?



Previously, I ran ZFS with the ZFS companion plugin on Unraid 6.11, and it worked perfectly. I mounted my ZFS pool at `/Io` and made symlinks from `/mnt/user/Io` to the actual ZFS paths to share it.
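
For reference, the symlinks were created roughly like this, one per top-level folder (paths are from my setup, shown in the listing further down):

ln -s /Io/data/appdata /mnt/user/Io/appdata
ln -s /Io/data/domains /mnt/user/Io/domains
ln -s /Io/data/Camera /mnt/user/Io/Camera
ln -s /Io/data/Syncthing /mnt/user/Io/Syncthing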

 

Now with Unraid 6.12, I am not able to access it. I can import the ZFS pool with `zpool import`:

 

root@Unraid-HomeLab:/# zpool import
   pool: Io
     id: 2096410902201209442
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Io          ONLINE
          raidz2-0  ONLINE
            sde     ONLINE
            sdd     ONLINE
            sdi     ONLINE
            sdf     ONLINE
            sdc     ONLINE
            sdj     ONLINE
            sdg     ONLINE
        spares
          sdh

 

 

and then `zpool import Io`:

 

root@Unraid-HomeLab:/mnt/user# zpool import Io
root@Unraid-HomeLab:/mnt/user# zpool status
  pool: Io
 state: ONLINE
  scan: resilvered 28.3M in 00:00:23 with 0 errors on Fri Jun 16 15:03:29 2023
config:

        NAME        STATE     READ WRITE CKSUM
        Io          ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
        spares
          sdh       AVAIL

errors: No known data errors

 

The symlinks work as expected, but the share does not:

 

root@Unraid-HomeLab:/mnt/user/Io# ls -al
total 0
drwxrwxrwx 1 nobody users 133 Jun 21 00:38 ./
drwxrwxrwx 1 nobody users  30 Jun 21 00:37 ../
lrwxrwxrwx 1 root   root   15 Jun 21 00:38 Camera -> /Io/data/Camera/
lrwxrwxrwx 1 root   root   20 Jun 21 00:38 Collections -> /Io/data/Collections/
lrwxrwxrwx 1 root   root   16 Jun 21 00:38 Scripts -> /Io/data/Scripts/
lrwxrwxrwx 1 root   root   18 Jun 21 00:38 Syncthing -> /Io/data/Syncthing/
lrwxrwxrwx 1 root   root   21 Jun 21 00:38 Transmission -> /Io/data/Transmission/
lrwxrwxrwx 1 root   root   16 Jun 21 00:38 appdata -> /Io/data/appdata/
lrwxrwxrwx 1 root   root   16 Jun 21 00:38 domains -> /Io/data/domains/
lrwxrwxrwx 1 root   root   13 Jun 21 00:38 isos -> /Io/data/isos/

 

And most importantly, the `zpool import` is not permanent: after a reboot there are no pools at all again, so I have to import the pool manually every time.
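
In principle I could work around it by re-importing the pool from the go file on every boot, something like this untested sketch (assuming the zfs module is already available at that point), but I would much rather have Unraid manage the pool properly:

# appended to /boot/config/go (runs at boot on Unraid) -- untested workaround sketch
modprobe zfs                          # make sure the zfs module is loaded (may already be)
zpool import -d /dev/disk/by-id Io    # re-import the pool by name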

 

I do not want to destroy the data in my ZFS pool, so any help would be appreciated.

Link to comment
2 minutes ago, itimpi said:

I wonder if that is because you tried to put the pool under /mnt/user and not directly under /mnt which is where pools normally get mounted?

I don't quite believe that is the case...

 

To make it clear, my array consists of only one 16 GB Optane SSD, which holds the system files. The ZFS pool contains 8 HDDs and is mounted at /Io. I made several symlinks under /mnt/user/Io, as posted above, so that I could share the data in the ZFS pool.

 

Honestly, I assumed that since the new version of Unraid supports ZFS natively, I would not need to do anything for the migration as long as the pool setup stayed the same. I would be happy to keep the pool alongside the array and use it like I did before, but now I am totally confused. Does Unraid 6.12 support command-line ZFS operations? And what should I do now?
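
The command-line tools themselves are clearly there, e.g. (the exact version string on 6.12 may differ from this):

root@Unraid-HomeLab:~# zfs version
zfs-2.1.11-1
zfs-kmod-2.1.11-1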

Link to comment
8 minutes ago, JorgeB said:

You should create a new pool using the GUI, assign all pool members, including the spare, make sure the pool is exported, and then start the array and let Unraid import and mount the pool.

 

Here's what I have tried:

 

First I exported the pool named Io:

 

root@Unraid-HomeLab:~# zpool export Io
root@Unraid-HomeLab:~# zpool status
no pools available

 

Then in the Unraid GUI I stopped the array and added all 8 drives to a new pool with the following config:

 

[screenshot: pool device assignments in the Unraid GUI]

 

After I started the array, the ZFS pool was not recognized:

 

[screenshot: the pool is not recognized after starting the array]

 

and if I create the pool with only the 7 drives, excluding the hot spare, the result is the same.

Link to comment

Thanks for the clarification @JorgeB, this time the import was successful.

 

There is still something I cannot quite understand about how Unraid handles ZFS pools. I imported the pool as "zpool-io" with a single dataset "data", and found that the pool was mounted at `/mnt/zpool-io` and that Unraid automatically created a share named "data", which seemed to contain all of the files inside my zpool-io/data dataset. But it is confusing that when I browsed the settings for the data share:

 

[screenshot: settings for the data share]

 

the primary storage for the data share was the array, not my zpool-io pool.
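
For reference, the layout on the ZFS side looks roughly like this from the CLI (output trimmed to the relevant columns):

root@Unraid-HomeLab:~# zfs list -r -o name,mountpoint zpool-io
NAME           MOUNTPOINT
zpool-io       /mnt/zpool-io
zpool-io/data  /mnt/zpool-io/data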

 

Since I want to store everything except the system data on the pool, should I keep using the data share, or continue my previous practice of symlinking to the folders under the pool's dataset?

Link to comment

It suddenly occurred to me that since Unraid supports ZFS now, I could just use the whole ZFS pool as my only array device and get rid of my poor 16 GB Optane in the array holding the symlinks. I will give it a shot.

Link to comment
49 minutes ago, wang1zhen said:

It suddenly occurred to me that since Unraid supports ZFS now, I could just use the whole ZFS pool as my only array device and get rid of my poor 16 GB Optane in the array holding the symlinks. I will give it a shot.

It is still a requirement that you have at least one drive in the array, so leave that alone. You also want to change the share settings so that the ZFS pool is the primary storage with no secondary storage if you want things to work correctly, and have no reference to the share (not even a symlink) on the array.
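
For example, something along these lines should show any leftover symlinks still sitting on the array disk (adjust disk1 to whichever array disk holds them):

find /mnt/disk1 -type l -ls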

Link to comment
42 minutes ago, itimpi said:

It is still a requirement that you have at least one drive in the array, so leave that alone.

Ugh... that sounds sad to me. It is quite weird that Unraid automatically mounts the ZFS dataset while setting its primary storage to the array, isn't it?

 

Gotta clear my mind and try to figure out the best layout for me. Thanks for the help!

Link to comment
30 minutes ago, wang1zhen said:

Ugh... that sounds sad to me. It is quite weird that Unraid automatically mounts the ZFS dataset while setting its primary storage to the array, isn't it?

Unraid finds files that might be part of a share, for read purposes, regardless of whether they are located on the main array or on a pool. The primary storage setting only controls where NEW files are placed.
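
In other words, with a top-level folder named data on both the pool and an array disk (paths illustrative), both of these locations get merged into the same user share:

ls -d /mnt/zpool-io/data /mnt/disk1/data    # both contribute to /mnt/user/data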

Link to comment
  • 9 months later...

Hello,

I am migrating from TrueNAS to Unraid and am trying to import a TrueNAS ZFS pool. I have added a pool with all 12 disks via the GUI (I had initially set the filesystem to zfs with 3 groups x 4 disks, but after reading the above I switched it to auto):

[screenshot: pool configuration with all 12 disks]

 

On the CLI, `zpool status` reports "no pools available", and starting the array does not import/mount the pool.

Importing the pool via CLI shows:

[screenshot: output of zpool import]

 

What am I missing?

Thank you.

Link to comment
  • 2 weeks later...
On 3/29/2024 at 7:49 PM, G3orgios said:

I am migrating from TrueNAS to Unraid and am trying to import a TrueNAS ZFS pool. [...] On the CLI, `zpool status` reports "no pools available", and starting the array does not import/mount the pool. [...] What am I missing?

In this 3 x 4-disk raidz1 configuration, in theory I can remove one disk from each raidz1 vdev and the overall ZFS pool will still be healthy (just without redundancy). I need to remove the sdd, sdh, and sdl disks to start migrating from ZFS to the array (btrfs). HOW can I remove (offline?) these three disks and then add them to the array?

Link to comment
12 hours ago, G3orgios said:

HOW can I remove (offline?) these three disks and then add them to the array?

v6.12 does not support multiple devices missing from a ZFS pool, even if the pool has enough redundancy; v6.13 will support that. You can still do it with v6.12, but it has to be done manually, including mounting the degraded pool.

 

 

Link to comment

@JorgeB so, how may I degrade the ZFS pool by removing one (1) disk from each raidz1 group, making the pool 3 groups x 3 disks? When you say "v6.12 does not support multiple devices missing from a zfs pool", do you refer to the web-UI? Could this be done via CLI, either by offlining the three (3) disks, or by removing the whole ZFS pool and then re-adding it with only 3 groups x 3 disks? In theory, this raidz1 pool remains active even after losing up to one (1) disk from each group.

 

I even considered the hard(ware) approach of physically removing the 3 disks (one from each group), BUT I would prefer the soft(ware) approach (e.g. CLI).

 

Any proposals on HOW this can be done? Thank you.

 

Link to comment
3 hours ago, G3orgios said:

how may I degrade the ZFS pool by removing one (1) disk from each raidz1 group, making the pool 3 groups x 3 disks?

 

You can use

zpool offline pool_name device

 

 

3 hours ago, G3orgios said:

Could this be done via CLI

With 6.12 this can only be done with the CLI: unassign the pool from Unraid, then import it manually with

zpool import pool_name

then offline the devices. You can then assign those devices to a new Unraid pool; leave the other pool imported, and you can access it with the array started under /mnt/pool_name.
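
Putting it together, assuming for example the pool is called tank (substitute your actual pool name) and using the disks you mentioned:

zpool import tank
zpool status tank        # note the device names exactly as they appear here
zpool offline tank sdd   # one disk per raidz1 vdev
zpool offline tank sdh
zpool offline tank sdl
zpool status tank        # each raidz1 vdev now shows DEGRADED, but the pool stays usable
ls /mnt/tank             # data remains accessible at the pool's mountpoint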

Link to comment

@JorgeB As you said, via the web-UI removing just one (1) disk kept the ZFS pool intact and accessible; removing more disks (one per raidz1 vdev, of course!) didn't work.

 

CLI worked fine! (zpool command reference)

  • removed the zfs pool from the web-UI
  • imported it via CLI: zpool import [poolname]
  • deactivated one (1) disk per vdev: zpool offline [poolname] [disk]
  • accessing the pool fine at /mnt/[poolname]

The ZFS pool disks show as Unassigned Devices in the web-UI, but the pool operates fine, and I have used the offlined disks to start building the array and migrating the data (100TB).

Thank you, and I hope the next Unraid version will have a more flexible web-UI in this respect.

Link to comment
