ZFS plugin for unRAID


steini84


Just a heads up for anyone using zfs send/receive: there's an old and scary bug, reported in 2017, where files on the receive side can get silently corrupted. It's still present since there's no easy way to fix it. It only affects datasets with a record size >128K; I just checked and I'm using the default 128K on my pool, but it's still scary.


Not sure if this is the best place to post this or not. I am currently running FreeNAS 11.3 and am planning on moving to Unraid for the better VM and Docker support. Currently in FreeNAS I have three pools: the boot drive, a mirrored SSD set I use for a VM, and my 8 x 4TB WD Red RAID-Z2 pool. Once on Unraid, I still want my main storage pool as ZFS RAID-Z2. However, I don't want to risk any data loss during the transition. My loose plan is to either sync all data to a pair of mirrored Seagate Exos X16 16TB drives, or copy the data to the two drives with them mounted as regular drives. Then, once running Unraid, rebuild my big Z2 array, copy the data over without permissions, and then rebuild two of the drives with the 16TB drives swapped in for two of the 4TB drives. Is this a good plan, or should I do something else?
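If it helps, the "sync all data" step can be done with ZFS replication to a temporary mirrored pool on the two 16TB drives, which preserves checksums and snapshots, rather than a plain file copy. A rough sketch with hypothetical names ("tank" for the FreeNAS pool, sdx/sdy for the new drives; adjust to your setup):

```shell
# one-off backup pool on the two 16 TB drives
zpool create backup mirror /dev/sdx /dev/sdy

# recursive snapshot of everything on the source pool
zfs snapshot -r tank@migrate

# replicate the whole snapshot tree, then verify
zfs send -R tank@migrate | zfs receive -F backup/tank
zfs list -r backup
```

After rebuilding the Z2 pool under Unraid, the same send/receive in the other direction brings the data back.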


I just downgraded from Unraid beta25 to 6.8.3, and had built two pools on the previous version. When trying to import them, I get this:

root@Tower:~# zpool import
   pool: datastore
     id: 7743322362316987465
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s) not supported on this system:
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
 config:

        datastore   UNAVAIL  unsupported feature(s)
          mirror-0  ONLINE
            sdk     ONLINE
            sdn     ONLINE
          mirror-1  ONLINE
            sdl     ONLINE
            sdm     ONLINE

   pool: vmstorage
     id: 4552063121711083272
  state: UNAVAIL
status: The pool can only be accessed in read-only mode on this system. It
        cannot be accessed in read-write mode because it uses the following
        feature(s) not supported on this system:
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
action: The pool cannot be imported in read-write mode. Import the pool with
        "-o readonly=on", access the pool on a system that supports the
        required feature(s), or recreate the pool from backup.
 config:

        vmstorage    UNAVAIL  unsupported feature(s)
          mirror-0   ONLINE
            nvme2n1  ONLINE
            nvme3n1  ONLINE
          mirror-1   ONLINE
            nvme0n1  ONLINE
            nvme1n1  ONLINE

 

is there a way to recover?
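As the status output itself suggests, the data should still be readable on 6.8.3 even though the newer log_spacemap feature can't be written by the older ZFS. As a stopgap, a read-only import (pool names taken from the output above) would at least make the files reachable:

```shell
# read-only import avoids the unsupported-feature write path
zpool import -o readonly=on datastore
zpool import -o readonly=on vmstorage
zpool list
```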

 

18 hours ago, TheSkaz said:

You have built one for me before; that would be awesome. I REALLY don't want to lose that data. Maybe it could help someone else too?

Here you go:

https://www.dropbox.com/s/f3fp04zsgp1g4a0/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz?dl=0

https://www.dropbox.com/s/z381hehf28k3gj5/zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz.md5?dl=0

 

You can either rename and replace the files in /boot/config/plugins/unRAID6-ZFS/packages or run these commands:

#Unmount bzmodules and make rw
if mount | grep /lib/modules > /dev/null;
then
      echo "Remounting modules"
      cp -r /lib/modules /tmp
      umount -l /lib/modules/
      rm -rf  /lib/modules
      mv -f  /tmp/modules /lib
fi

#install and load the package and import pools
installpkg zfs-2.0.0-rc2-unRAID-6.8.3.x86_64.tgz
depmod
modprobe zfs
zpool import -a
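One optional sanity check before importing: confirm the new module actually loaded (the `zfs version` subcommand exists on OpenZFS 0.8 and later):

```shell
zfs version                  # userland and kernel module versions
cat /sys/module/zfs/version  # kernel module version string
```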

 


Hi,

 

I would like to thank you for this amazing plugin. It allows me to get the best filesystem along with my preferred Linux OS. An almost no-compromise solution!

 

I am currently running the latest stable release of Unraid 6.8.3 with your RC2 OpenZFS 2.0 plugin version you just posted.

I am very excited to try the new zstd compression, but for some reason it won't let me use it; it says "invalid argument". Do I need to reboot the server for the new ZFS version to take effect? I was running 0.8.3-1 before upgrading. I can't wait to try the different zstd levels to see which one fits my needs best.

 

Is there a way to see disk/pool activity within Unraid using OpenZFS? So far I always connect via PuTTY and run "zpool iostat -v 5". I was just wondering if there is another plugin or some way to get at least the pool status in the GUI if one of my pools becomes degraded.

 

Thank you so much !!!

 

Phil

[screenshots: 5.PNG, 6.PNG]


Hello, I am running into a simple problem, yet I won't be trying anything that might damage the disk I am trying to get data from. I have backups, but recovering data would take me weeks with my current connection.

 

I am coming from FreeNAS, with a 4TB ZFS-formatted drive. I got a new 4TB drive, which is already formatted as XFS and ready to receive that data. Problem is: I forgot to 'export' my zpool on the FreeNAS system before, well, formatting it to install unRAID. Because of that, the 'zpool import' command does not work, and I am unsure how to properly mount the ZFS drive to retrieve the data. That HDD will be properly formatted and added to the array when the transfer is done.

 

I got this far, but pressing 'mount' won't properly mount the drive. What should I do to retrieve that data?

[screenshot: a.png]

Phil said (quoted above):

zstd compression gives "invalid argument" after upgrading from 0.8.3-1; do I need to reboot? And is there a way to see pool status/activity in the Unraid GUI?

Yeah I would reboot and retry. It worked as expected on my test server after a reboot:
root@Tower:~# zpool upgrade SSD
This system supports ZFS pool feature flags.

Enabled the following features on 'SSD':
        redaction_bookmarks
        redacted_datasets
        bookmark_written
        log_spacemap
        livelist
        device_rebuild
        zstd_compress
root@Tower:~# zfs set compression=zstd SSD
root@Tower:~# zfs get all | grep -i compression
SSD   compression           zstd                   local
root@Tower:~#
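On the zstd levels: OpenZFS 2.0 accepts `zstd-1` through `zstd-19` (plain `zstd` is level 3) plus `zstd-fast` variants, and compression is a per-dataset property, so different levels can be compared side by side. The dataset name here is a placeholder:

```shell
# heavier compression for rarely-read data (dataset name is hypothetical)
zfs set compression=zstd-19 SSD/archive
zfs get compression SSD/archive
```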



This is just a plugin for zfs, nothing added. I would recommend setting up a Check_mk docker to monitor your server; it can send you a mail if you have a problem, for example a problem with the pool or the pool running out of space.
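Until something shows up in the GUI, a small cron script is another lightweight option. This is only a sketch; it assumes unRAID's notify helper lives at /usr/local/emhttp/webGui/scripts/notify (swap in `mail` or anything else if not):

```shell
#!/bin/sh
# Alert when any pool is not healthy.
# "zpool status -x" prints "all pools are healthy" when everything is fine.
STATUS=$(zpool status -x)
if [ "$STATUS" != "all pools are healthy" ]; then
    /usr/local/emhttp/webGui/scripts/notify -i alert -s "ZFS pool problem" -d "$STATUS"
fi
```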


Sent from my iPhone using Tapatalk

Link to post
b3lc said (quoted above):

I forgot to 'export' my zpool on FreeNAS before reformatting, so 'zpool import' does not work; how do I mount the ZFS drive to retrieve the data?

If I understand correctly you have to use the -f flag:

zpool import -f POOLNAME


9 hours ago, steini84 said:


If I understand correctly you have to use the -f flag:

zpool import -f POOLNAME



Doing so gives me 

Quote

cannot import 'poolname': no such pool available

 

Is there any other way to mount that drive?

 

This is the UD menu, with its pools shown when expanded.

[screenshot: b.png]

9 minutes ago, BRiT said:

Use the actual name of your pool.

Of course I am doing that, hehe :P. Sorry if it looked as though I literally pasted the command.

 

7 minutes ago, steini84 said:

Or even try zpool import -f -a



Quote

no pools available to import

I think I may have fucked up somehow :/ I just don't know how or why

2 hours ago, steini84 said:

Try booting into Freenas since you know it was working there. See if you can mount it there



Did it, and the pool imported successfully and shows fine in FreeNAS. The only thing is... it is empty. Before this (yesterday, before even creating the topic) I tried 'recreating' the pool in Unraid, using the same name, as part of rebuilding the pool. Is it possible that I messed up the data on the drive just by recreating the zpool this way?

 

It should be noted that I got it to work earlier this week; I could see my files in Unraid. The only thing is... I didn't have the drive to transfer all my data to at that point.
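For what it's worth: recreating a pool with the same name writes new labels over the old ones, so the original data is most likely gone, but before giving up it may be worth asking ZFS about destroyed pools it can still see:

```shell
zpool import -D              # list destroyed pools that are still visible
zpool import -D -f POOLNAME  # try to import one (only if the old labels survived)
```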

On 9/26/2020 at 5:10 AM, steini84 said:

Here you go: [build links and install script quoted above]

Working beautifully. You, sir, are a scholar among men (or women :) )


@steini84 getting a weird error:

 

Sep 27 13:14:40 Tower kernel: VERIFY3(zfs_btree_find(tree, value, &where) != NULL) failed (0000000000000000 != 0000000000000000)
Sep 27 13:14:40 Tower kernel: PANIC at btree.c:1780:zfs_btree_remove()
Sep 27 13:14:40 Tower kernel: Showing stack for process 8689
Sep 27 13:14:40 Tower kernel: CPU: 54 PID: 8689 Comm: txg_sync Tainted: P           O      4.19.107-Unraid #1
Sep 27 13:14:40 Tower kernel: Hardware name: System manufacturer System Product Name/ROG ZENITH II EXTREME ALPHA, BIOS 1101 06/05/2020
Sep 27 13:14:40 Tower kernel: Call Trace:
Sep 27 13:14:40 Tower kernel: dump_stack+0x67/0x83
Sep 27 13:14:40 Tower kernel: spl_panic+0xcf/0xf7 [spl]
Sep 27 13:14:40 Tower kernel: ? zfs_btree_find_in_buf+0x4a/0x99 [zfs]
Sep 27 13:14:40 Tower kernel: ? zfs_btree_find_in_buf+0x4a/0x99 [zfs]
Sep 27 13:14:40 Tower kernel: ? zfs_btree_find+0x148/0x182 [zfs]
Sep 27 13:14:40 Tower kernel: zfs_btree_remove+0x57/0x7d [zfs]
Sep 27 13:14:40 Tower kernel: range_tree_add_impl+0x4f3/0xa97 [zfs]
Sep 27 13:14:40 Tower kernel: ? _cond_resched+0x1b/0x1e
Sep 27 13:14:40 Tower kernel: ? __kmalloc_node+0x11e/0x12f
Sep 27 13:14:40 Tower kernel: ? range_tree_remove_impl+0xad5/0xad5 [zfs]
Sep 27 13:14:40 Tower kernel: range_tree_vacate+0x16a/0x1b3 [zfs]
Sep 27 13:14:40 Tower kernel: metaslab_sync_done+0x327/0x4c2 [zfs]
Sep 27 13:14:40 Tower kernel: ? _cond_resched+0x1b/0x1e
Sep 27 13:14:40 Tower kernel: vdev_sync_done+0x42/0x66 [zfs]
Sep 27 13:14:40 Tower kernel: spa_sync+0xbd1/0xd6a [zfs]
Sep 27 13:14:40 Tower kernel: txg_sync_thread+0x246/0x3f2 [zfs]
Sep 27 13:14:40 Tower kernel: ? txg_thread_exit.isra.0+0x50/0x50 [zfs]
Sep 27 13:14:40 Tower kernel: thread_generic_wrapper+0x67/0x6f [spl]
Sep 27 13:14:40 Tower kernel: ? __thread_exit+0xe/0xe [spl]
Sep 27 13:14:40 Tower kernel: kthread+0x10c/0x114
Sep 27 13:14:40 Tower kernel: ? kthread_park+0x89/0x89
Sep 27 13:14:40 Tower kernel: ret_from_fork+0x22/0x40

 

This froze up my VMs that are stored on the ZFS pools. Does this make any sense? Is there any way to recover without rebooting?

14 minutes ago, cadamwil said:

I have two pools from FreeNAS I want to import to /mnt/POOLNAME

 

what is the correct syntax?

 

zpool import -f -m /mnt/POOLNAME POOLNAME

 

or

 

zpool import -f POOLNAME -m /mnt/POOLNAME

 

or

 

zpool import -f -m /mnt/POOLNAME POOLNAME

If someone wants to answer this, that's awesome; otherwise, I achieved the same goal by running

 

zpool import -f POOLNAME

zfs set mountpoint=/mnt/POOLNAME POOLNAME
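Worth noting: in `zpool import`, `-m` means "import even with a missing log device", not "mountpoint", so none of the three variants above would place the pool where intended. The two usual approaches (POOLNAME is a placeholder):

```shell
# 1. Temporary altroot, prepended to every mountpoint for this import only:
zpool import -f -R /mnt POOLNAME

# 2. Persistent mountpoint on the root dataset (the approach used above):
zpool import -f POOLNAME
zfs set mountpoint=/mnt/POOLNAME POOLNAME
```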


I'm sure I am being moronic and missing something obvious, but how would I create a share for my ZFS pools?

symlink on a disk?

 

or should it be

zfs set sharesmb=on POOLNAME

 

If so, I am getting

root@Tower:/mnt/SixteenMirror# zfs set sharesmb=on SixteenMirror
cannot share 'SixteenMirror: system error': smb add share failed
cannot share 'SixteenMirror/Docker: system error': smb add share failed

I have already set permissions & owner for /mnt/SixteenMirror by running the following

root@Tower:/mnt/SixteenMirror# chmod -R 775 /mnt/SixteenMirror
root@Tower:/mnt/SixteenMirror# chown -R nobody:users /mnt/SixteenMirror
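The `sharesmb` property relies on Samba's usershare mechanism, which unRAID's stock smb.conf doesn't enable as far as I know, hence the "smb add share failed". The usual unRAID workaround is a static share in /boot/config/smb-extra.conf (Settings → SMB → Samba extra configuration); a minimal sketch, assuming the pool is mounted at /mnt/SixteenMirror:

```ini
[SixteenMirror]
    path = /mnt/SixteenMirror
    browseable = yes
    guest ok = yes
    writeable = yes
    create mask = 0775
    directory mask = 0775
```

Then restart Samba (on unRAID: /etc/rc.d/rc.samba restart) or stop and start the array.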

 

