
How to non-irreversibly migrate from TrueNAS Core with ZFS to unRAID?


Solved by JorgeB


A TrueNAS pool may not yet be importable by Unraid. If it was created using default TrueNAS settings, which use partition #1 for swap, it won't be, at least not for now; it should be once v6.13 is out. Post the partition list for the pool members, IIRC with FreeBSD it's:

gpart show
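(For TrueNAS SCALE, which is Linux based, the equivalent listing can be obtained with fdisk or lsblk; a minimal sketch, the /dev/sd[a-e] glob is just an example, adjust it to your pool members:)

sudo fdisk -l /dev/sd[a-e] | grep dev    # partition number, start, size and type per disk
lsblk -o NAME,SIZE,PARTTYPENAME          # alternative overview of all disks and partition types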

 

=>         40  15628053088  ada0  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada1  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada2  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada3  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>         40  15628053088  ada4  GPT  (7.3T)
           40           88        - free -  (44K)
          128      4194304     1  freebsd-swap  (2.0G)
      4194432  15623858696     2  freebsd-zfs  (7.3T)

=>      40  30031792  da0  GPT  (14G)
        40    532480    1  efi  (260M)
    532520  29491200    2  freebsd-zfs  (14G)
  30023720      8112       - free -  (4.0M)

=>      40  30031792  da1  GPT  (14G)
        40    532480    1  efi  (260M)
    532520  29491200    2  freebsd-zfs  (14G)
  30023720      8112       - free -  (4.0M)

 

The last two are boot drives.

 

What's ETA for 6.13? Is there any beta available?


ZFS is on partition #2; this is currently not supported.

 

57 minutes ago, VasiliiNorris said:

What's ETA for 6.13? Is there any beta available?

A beta should be available soon, but I don't know if it's going to take a couple of weeks or a couple of months, and I also don't know if TrueNAS pool import will work at that time.

 

P.S. you should have no issues going from Unraid back to TrueNAS if needed.

1 hour ago, JorgeB said:

ZFS is on partition #2; this is currently not supported.

 

A beta should be available soon, but I don't know if it's going to take a couple of weeks or a couple of months, and I also don't know if TrueNAS pool import will work at that time.

 

P.S. you should have no issues going from Unraid back to TrueNAS if needed.

Do you mean that if I back up my pool and create a ZFS RAIDZ1 pool in unRAID, then, in case I don't like unRAID, I can just continue with that newly created unRAID ZFS pool in TrueNAS?

 

By the way, if I remove the swap partition on all of the drives and move the ZFS partition to sector 128 (or even without moving it), will unRAID be able to handle my pool?

 

And why do these swap partitions get created by default anyway?

  • Solution
10 hours ago, VasiliiNorris said:

I can just continue with a newly created unRAID ZFS pool in the TrueNAS?

Correct, TrueNAS should have no issues importing that pool.

 

10 hours ago, VasiliiNorris said:

By the way, if I remove the swap partition on all of the drives and move the ZFS partition to sector 128 (or even without moving it), will unRAID be able to handle my pool?

The starting sector is not really important; as long as ZFS is on partition #1, it should work.
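(To confirm which partition number actually holds ZFS on each pool member, a minimal sketch from the Unraid console; the device glob is just an example, adjust it to your disks:)

fdisk -l /dev/sd[a-e] | grep -i zfs    # the device column shows e.g. sdX1 vs sdX2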

 

On 2/11/2024 at 10:12 PM, VasiliiNorris said:

By the way, if I remove the swap partition on all of the drives and move the ZFS partition to sector 128 (or even without moving it), will unRAID be able to handle my pool?

 

I've been meaning to take a look at this, since you are not the first to ask about it. I just did a quick test and everything worked correctly. In case you want to try, this should be perfectly safe to do, but I cannot rule out an unforeseen issue.

 

This would need to be done with Unraid (Linux). List the partitions of one of the pool devices:

 

root@Tower15:~# fdisk -l /dev/sde
Disk /dev/sde: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 22B20B36-C9D0-11EE-A9A9-F079595F9D0F

Device       Start       End   Sectors   Size Type
/dev/sde1      128   4194431   4194304     2G FreeBSD swap
/dev/sde2  4194432 468862087 464667656 221.6G FreeBSD ZFS

 

Use fdisk to delete partition 1 and renumber partition 2 as partition 1.

 

Commands used:

d - to delete a partition

1 - to delete partition 1

x - to enter expert mode

f - to fix order, in this case make part2 > part1

r - go back to main menu

w - to write changes (if you made any mistakes, abort before w)

 

root@Tower15:~# fdisk /dev/sde

Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): d
Partition number (1,2, default 2): 1

Partition 1 has been deleted.

Command (m for help): x

Expert command (m for help): f
Partitions order fixed.

Expert command (m for help): r

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

 

 

Confirm new partition layout:

root@Tower15:~# fdisk -l /dev/sde
Disk /dev/sde: 223.57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 22B20B36-C9D0-11EE-A9A9-F079595F9D0F

Device       Start       End   Sectors   Size Type
/dev/sde1  4194432 468862087 464667656 221.6G FreeBSD ZFS

 

Repeat the procedure for the other pool devices. After all layouts are corrected, Unraid can import the pool: create a new pool with the number of slots needed, leave the filesystem set to auto, assign all pool devices in the same order as the zpool status output, and start the array.
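(If the pool has many members, the delete-and-renumber step can be scripted; a minimal sketch using sgdisk from gptfdisk, assuming it is available on the system, that partition 1 really is the swap partition on every listed device, and that the example device names match your pool — verify each disk with fdisk -l first:)

for d in /dev/sdc /dev/sdd /dev/sde; do
    sgdisk --delete=1 "$d"    # remove the swap partition (partition 1)
    sgdisk --sort "$d"        # renumber so the ZFS partition becomes partition 1
done

The interactive fdisk steps above do exactly the same thing, one disk at a time.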

 

P.S. once v6.12.7 is out, this last step will no longer be needed, but it is needed with 6.12.6 or earlier v6.12 releases: the pool will fail to import the first time with an error similar to this:

Feb 12 18:15:05 Tower15 root: cannot import 'tank': pool was previously in use from another system.
Feb 12 18:15:05 Tower15 root: Last accessed by (hostid=46c5a671) at Mon Feb 12 18:10:11 2024
Feb 12 18:15:05 Tower15 root: The pool can be imported, use 'zpool import -f' to import the pool.

 

Type:

 

root@Tower15:~# zpool import -f tank
root@Tower15:~# zpool export tank

 

Replace tank with the correct pool name. After that, restart the array and Unraid will now be able to import the pool.
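(Not mentioned above, but if you are unsure of the pool name, running zpool import with no arguments only scans and lists pools that are available for import; it does not import anything:)

zpool import    # lists importable pools by name, state and configuration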

 

If at any point you want to go back to TrueNAS, you just need to boot it with the devices attached and the pool will automatically be imported, at least it was for me. Note that depending on the zfs version running on TrueNAS, the pool can show that some new features are available on Unraid; if that is the case, don't upgrade the pool, or it will then fail to import with TrueNAS. Currently the only way to upgrade a pool with Unraid is by typing the command manually, so it's not something that can happen by accident.
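(To check from the Unraid console whether the pool is missing features that Unraid's zfs could enable, without changing anything, zpool upgrade with no arguments is read-only; it only modifies a pool when given a pool name or -a:)

zpool upgrade    # lists pools that do not have all supported features enabled
                 # do NOT run 'zpool upgrade <poolname>' if you still want TrueNAS to import it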

 

Edit to add an example:

 

I did it with my TrueNAS CORE pool, just for testing, since I want to keep TrueNAS on this server. I booted with an Unraid flash drive; this is the pool before the changes:

   pool: tank
     id: 11986576849467638030
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        tank        ONLINE
          raidz3-0  ONLINE
            sdk2    ONLINE
            sdg2    ONLINE
            sdc2    ONLINE
            sdd2    ONLINE
            sdf2    ONLINE
            sdi2    ONLINE
            sde2    ONLINE
            sdh2    ONLINE
            sdm2    ONLINE
            sdj2    ONLINE
            sdl2    ONLINE

 

After running fdisk on each device to delete partition 1 and make partition 2 become partition 1:

   pool: tank
     id: 11986576849467638030
  state: ONLINE
status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
        the '-f' flag.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
 config:

        tank        ONLINE
          raidz3-0  ONLINE
            sdk1    ONLINE
            sdg1    ONLINE
            sdc1    ONLINE
            sdd1    ONLINE
            sdf1    ONLINE
            sdi1    ONLINE
            sde1    ONLINE
            sdh1    ONLINE
            sdm1    ONLINE
            sdj1    ONLINE
            sdl1    ONLINE

 

After doing this, the pool imported normally with Unraid 6.12.8:

[screenshot]

 

Rebooted the server and booted TrueNAS; the pool imported as if nothing had changed:

[screenshot]

 

 

  • 4 weeks later...
On 2/12/2024 at 7:33 PM, JorgeB said:

P.S. once v6.12.7 is out, this last step will no longer be needed

What is the current status of this? I am considering trying unraid by installing it on a usb drive on my current truenas machine, but I would need a safe and sound way of importing my current pools.


Thanks for the prompt answer to my first post in this forum! 😀

 

It looks like I only have one partition for each drive in my two pools.

 

admin@truenas[/mnt/nas]$ sudo fdisk -l /dev/sd[a-e]|grep dev
Disk /dev/sda: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
/dev/sda1   4096 31251757056 31251752961 14.6T Solaris /usr & Apple ZFS
Disk /dev/sdb: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
/dev/sdb1   4096 31251757056 31251752961 14.6T Solaris /usr & Apple ZFS
Disk /dev/sdc: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
/dev/sdc1   4096 31251757056 31251752961 14.6T Solaris /usr & Apple ZFS
Disk /dev/sdd: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
/dev/sdd1   4096 31251757056 31251752961 14.6T Solaris /usr & Apple ZFS
Disk /dev/sde: 14.55 TiB, 16000900661248 bytes, 31251759104 sectors
/dev/sde1   4096 31251757056 31251752961 14.6T Solaris /usr & Apple ZFS

 

admin@truenas[/mnt/nas]$ sudo fdisk -l /dev/nvme[1-4]n1|grep dev
Disk /dev/nvme1n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/nvme1n1p1  4096 7814035456 7814031361  3.6T Solaris /usr & Apple ZFS
Disk /dev/nvme2n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/nvme2n1p1  4096 7814035456 7814031361  3.6T Solaris /usr & Apple ZFS
Disk /dev/nvme3n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/nvme3n1p1  4096 7814035456 7814031361  3.6T Solaris /usr & Apple ZFS
Disk /dev/nvme4n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
/dev/nvme4n1p1  4096 7814035456 7814031361  3.6T Solaris /usr & Apple ZFS

 

So these pools should be importable directly into unraid?

  • 3 weeks later...

Getting error message "The pool cannot be imported due to damaged devices or data."

It works fine on TrueNAS SCALE and shows a single partition. Four hard drives: three 8TB and one 4TB. All four show as online, but the pool cannot be imported. Is there a way to take two drives off the pool in TrueNAS and format them so they can be mounted in both? I cannot find anything online.

2 minutes ago, Skysec said:

Getting error message "The pool cannot be imported due to damaged devices or data."


"This pool uses the following feature(s) not supported by this system:
        com.klarasystems:vdev_zaps_v2" ah....
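(For reference, not from the original posts: a pool's feature flags can be inspected with zpool get, which shows whether something like vdev_zaps_v2 is already active before you try to import the pool elsewhere; replace tank with your pool name:)

zpool get all tank | grep feature@       # every feature flag and whether it is enabled/active
zpool get feature@vdev_zaps_v2 tank      # a single flag (requires a zfs version that knows it)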

  • 4 months later...

Successfully tried this in a virtual environment and on my main server. If your pool is upgraded to the latest OpenZFS version in TrueNAS, you can switch to the Unraid 7.x beta. Strangely, you can only import the pool from the CLI; the webUI has trouble mounting pools with the swap partition (1). Anyway, removing the swap partitions and moving the ZFS partitions to position 1 fixes this problem. Big thank you!

  • 1 month later...
On 2/13/2024 at 2:33 AM, JorgeB said:

 

I've been meaning to take a look at this, since you are not the first to ask about it. [...]
I'm currently migrating from TrueNAS Scale DragonFish to UNRAID 7.0.0-beta.2. My pool has 2 disks in mirror mode, and I've checked that both disks' zfs partitions are on partition 2:

root@BorNAS:/# fdisk -l /dev/sdb
Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: TerraMaster     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 751167D6-C434-4FA6-986B-E85F6DE4D540

Device       Start         End     Sectors  Size Type
/dev/sdb1      128     4194304     4194177    2G Linux swap
/dev/sdb2  4194432 15628053134 15623858703  7.3T Solaris /usr & Apple ZFS
root@BorNAS:/# fdisk -l /dev/sdc
Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: TerraMaster     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: EE0D5179-15BE-43C2-AEA9-FED314BAA4EC

Device       Start         End     Sectors  Size Type
/dev/sdc1      128     4194304     4194177    2G Linux swap
/dev/sdc2  4194432 15628053134 15623858703  7.3T Solaris /usr & Apple ZFS

And the pool could be imported directly with the zpool import command:

root@BorNAS:/# zpool status
  pool: data
 state: ONLINE
  scan: scrub repaired 0B in 15:17:25 with 0 errors on Sun Sep  8 00:17:26 2024
config:

        NAME                                      STATE     READ WRITE CKSUM
        data                                      ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            6dfc8125-2607-4d1e-9c63-8c9ca226bd1a  ONLINE       0     0     0
            be28ba4e-8773-4043-b506-919cbdbb9a31  ONLINE       0     0     0

errors: No known data errors

However, creating a pool in the WebGUI and adding both disks still results in "Unmountable: unsupported or no file system".

I want to follow the instructions above, but I got this warning when trying to remove partition 1:

root@BorNAS:/# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.40.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The device contains 'zfs_member' signature and it will be removed by a write command. See fdisk(8) man page and --wipe option for more details.

Command (m for help): 

Will removing the partition result in data loss? Or is it safe to remove partition 1?

