ZFS plugin for unRAID


steini84


8 hours ago, Marshalleq said:

Anyone know if I can get rid of my USB key that is used to boot the unraid array yet?

If by this you mean no longer have at least 1 drive in the main array, then the answer is no.   It was never intended that this requirement would be removed in the 6.12 series releases.


I'm gonna need a proper writeup with pictures before I try that again, haha! It wouldn't let me reuse my pool names because it thinks I still have user shares with those names. I deleted them forever ago and just use sharenfs locally. Does anyone have any idea where I can completely remove mentions of old user shares for the next time I try this upgrade?

 

Thoughts on why it may have caused issues...I still had my drives "passed through" so I didn't accidentally mount them via Unassigned Disks.  

 

**** ZFS update scare - rolled back, ran zpool import -a, then export/import to restore the pool names and set the mountpoint back to /zfs/<pool> ****

 

I really hope I didn't bork up my perfectly running 6.11.5 system...

 

I followed these exact steps - "Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report)."

 

Drives are showing empty, and neither original pool can be imported. It even renamed my 1st partition to the new pool names (since I couldn't reuse my existing names - nvme & hdd - because unRAID thinks I still have user shares with those names). I'm trying to roll back now, but OMG...haha!

 

I didn't see any pop-up like the one above warning about formatting, so I'm hoping it's just a "fun" scare. Should I have tried importing it via zpool import???

 

 

 

 

 

 

Edited by OneMeanRabbit
6 hours ago, OneMeanRabbit said:

Drives are showing empty, and neither original pool can be imported. It even renamed my 1st partition to the new pool names (since I couldn't reuse my existing names - nvme & hdd - because unRAID thinks I still have user shares with those names). I'm trying to roll back now, but OMG...haha!

 

I didn't see any pop-up like the one above warning about formatting, so I'm hoping it's just a "fun" scare. Should I have tried importing it via zpool import???

Create a new thread in the general support forum and post the diagnostics after an import attempt.

On 5/7/2023 at 4:38 AM, JorgeB said:

Create a new thread in the general support forum and post the diagnostics after an import attempt.

I used the GUI, and it wouldn't let me use the existing pool names, so I used new ones - but then it changed the zpool names to match. I just imported them after downgrading, and then exported <old name> and imported <new name>. Do we have clear instructions on migrating? It seems like it would have been fine to stay on 6.12 and just do the same thing...
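
For reference, the export/rename-on-import dance described above comes down to something like this (the names are only illustrative, taken from this post; the temporary name is whatever Unraid assigned):

zpool export temppool              # temppool = the name Unraid assigned
zpool import temppool nvme         # re-import under the original name
zfs set mountpoint=/zfs/nvme nvme  # restore the old mountpoint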

 

It just wasn't what was documented.  :D

  • 1 month later...
On 9/21/2015 at 7:03 AM, steini84 said:

******************************************************

This plugin is deprecated since Unraid 6.12 has native ZFS!

Since this thread was written I have moved my snapshots/backups/replication over to Sanoid/Syncoid, which I like even more, but I will keep the original thread unchanged since ZnapZend is still a valid option:

******************************************************

 What is this?

This plugin is a build of ZFS on Linux for unRAID 6

 

Installation of the plugin

To install, copy the URL below into the Install Plugin page in your unRAID 6 web GUI, or install through Community Applications.

https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/unRAID6-ZFS.plg

 

WHY ZFS and unRAID?

I wanted to put down a little explanation and a mini "guide" that explains how and why I use ZFS with unRAID.

* I use sdx, sdy & sdz as example devices but you have to change that to your device names. 

* SSD is just the name I like to use for my pool, but you use what you like

 

My use case for ZFS is a really simple but powerful way to make unRAID the perfect setup for me. In the past I ran ESXi with unRAID as a guest with PCI passthrough and OmniOS + napp-it as a datastore. Then I tried Proxmox, which had native ZFS, again with unRAID as a guest, but both of these solutions were a little bit fragile. When unRAID started to have great support for VMs and Docker I wanted to have that as the host system and stop relying on a hypervisor. The only thing missing was ZFS, and even though I gave btrfs a good chance it did not feel right for me. I built ZFS for unRAID in 2015, and as of March 2023 the original setup of 3x SSD + 2x HDD is still going strong running 24/7. That means 7 years of rock-solid and problem-free uptime.

 

You might think a ZFS fanboy like myself would like to use FreeNAS or another ZFS-based solution, but I really like unRAID for its flexible ability to mix and match hard drives for media files. I use ZFS to complement unRAID and think I get the best of both worlds with this setup.

 

I run a 3-disk SSD pool in raidz that I use for Docker and VMs. I run automatic snapshots every 15 minutes and replicate every day to a 2x 2TB mirror that connects over USB as a backup. I also use that backup pool to rsync my most valuable data from unRAID (photos etc.), which has the added bonus of being protected with checksums (no bit rot).

 

I know btrfs can probably solve all of this, but I decided to go with ZFS. The great thing about open source is that you have the freedom to choose.

 

Disclaimer/Limitations

  • The plugin needs to be rebuilt when an update includes a new Linux kernel (there is an automated system that makes new builds, so there should not be a long delay - thanks Ich777)
  • This plugin does not allow you to use ZFS as part of the array or a cache pool (which would be awesome by the way).
  • This is not supported by Limetech. I can't take any responsibility for your data, but it should be fine as it's just the official ZFS on Linux packages built on unRAID (thanks to gfjardim for making that awesome script to set up the build environment). The plugin installs the packages, loads the kernel module and imports all pools.

 

How to create a pool?

First create a ZFS pool and mount it somewhere under /mnt.

 

Examples:

Single disk pool

zpool create -m /mnt/SSD SSD sdx

2 disk mirror

zpool create -m /mnt/SSD SSD mirror sdx sdy

3 disk raidz pool

zpool create -m /mnt/SSD SSD raidz sdx sdy sdz
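
If you would rather not depend on the sdx letters (they can shuffle between boots), the same pools can be created using the stable names under /dev/disk/by-id; a minimal sketch with placeholder device names:

ls -l /dev/disk/by-id/             # find the stable names of your disks
zpool create -m /mnt/SSD SSD raidz /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3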

 

Tweaks

After creating the pool I like to make some adjustments. They are not needed, but give my server better performance:

 

My pool is all SSD so I want to enable trim

zpool set autotrim=on SSD

Next I add these lines to my go file to limit the ARC memory usage of ZFS (I like to limit it to 8GB on my 32GB box, but you can adjust that to your needs):

echo "#Adjusting ARC memory usage (limit 8GB)" >> /boot/config/go
echo "echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max" >> /boot/config/go
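
The go file only runs at boot; to apply the limit immediately you can write the same value directly from a terminal (8589934592 bytes = 8GB):

echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max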

I also like to enable compression. "This may sound counter-intuitive, but turning on ZFS compression not only saves space, but also improves performance. This is because the time it takes to compress and decompress the data is quicker than the time it takes to read and write the uncompressed data to disk (at least on newer laptops with multi-core chips)." -Oracle

 

To enable compression, run this command (it only applies to blocks written after compression is enabled):

zfs set compression=lz4 SSD

and lastly I like to disable access time

zfs set atime=off SSD
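
To double-check that the tweaks took effect, you can read the properties back:

zpool get autotrim SSD
zfs get compression,atime SSD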

File systems

Now we could just use one file system (/mnt/SSD/), but I like to make separate file systems for Docker and VMs:

zfs create SSD/Vms
zfs create SSD/Docker

Now we should have something like this:

root@Tower:~# zfs list
NAME         USED  AVAIL     REFER  MOUNTPOINT
SSD          170K   832M       24K  /mnt/SSD
SSD/Docker    24K   832M       24K  /mnt/SSD/Docker
SSD/Vms       24K   832M       24K  /mnt/SSD/Vms

Now we have Docker and VMs separated, and that gives us more flexibility. For example, we can have different ZFS features turned on for each file system, and we can snapshot, restore and replicate them separately.

 

To have even more flexibility I like to create a separate file system for every VM and every Docker container. That way I can work with a single VM or a single container without interfering with the rest. In other words, I can mess up a single container and roll back without affecting the rest of the server.

 

Let's start with a single Ubuntu VM and a Home Assistant container. While we are at it, let's create a file system for libvirt.img.

 

*The trick is to add the file system before you create a VM/container in unRAID, but with some moving around you can copy the data directory from an existing container into a ZFS file system after the fact.
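
The extra file systems behind the structure shown below are created the same way as before, for example:

zfs create SSD/Vms/Ubuntu
zfs create SSD/Vms/libvirt
zfs create SSD/Docker/HomeAssistant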

Now we have this structure, and each and every one of these file systems can be worked with as a group, subgroup or individually (snapshots, clones, replication, rollbacks etc.):

root@Tower:~# zfs list
NAME                       USED  AVAIL     REFER  MOUNTPOINT
SSD                        309K   832M       24K  /mnt/SSD
SSD/Docker                  48K   832M       24K  /mnt/SSD/Docker
SSD/Docker/HomeAssistant    24K   832M       24K  /mnt/SSD/Docker/HomeAssistant
SSD/Vms                     72K   832M       24K  /mnt/SSD/Vms
SSD/Vms/Ubuntu              24K   832M       24K  /mnt/SSD/Vms/Ubuntu
SSD/Vms/libvirt             24K   832M       24K  /mnt/SSD/Vms/libvirt

 

unRAID settings

From here you can navigate to the unRAID web GUI and set the default folders for Docker and VMs to /mnt/SSD/Docker and /mnt/SSD/Vms:

 

[screenshot: Docker settings]

**** There have been reported issues with keeping docker.img on ZFS 2.1 (which will be the default on unRAID 6.10.0). The system can lock up, so I recommend you keep docker.img on the cache drive if you run into any trouble. ****

 

[screenshot: VM settings]

 

Now when you add a new app via Docker, you choose the newly created folder as the config directory:

[screenshot: Docker container setup with the config directory set to the new file system]

Same with the VMs:

[screenshot: VM setup with the new file system selected]

 

Snapshots and rollbacks 

Now this is where the magic happens.

 

You can snapshot the whole pool or you can snapshot a subset.

Let's try to snapshot the whole thing, then just Docker (and its child file systems), and then one snapshot just for the Ubuntu VM:

root@Tower:/mnt/SSD/Vms/Ubuntu# zfs list -t snapshot
no datasets available
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD@everything
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD/Docker@just_docker
root@Tower:/mnt/SSD/Vms/Ubuntu# zfs snapshot -r SSD/Vms/Ubuntu@ubuntu_snapshot

root@Tower:/mnt/SSD/Vms/Ubuntu# zfs list -r -t snapshot
NAME                                   USED  AVAIL     REFER  MOUNTPOINT
SSD@everything                           0B      -       24K  -
SSD/Docker@everything                    0B      -       24K  -
SSD/Docker@just_docker                   0B      -       24K  -
SSD/Docker/HomeAssistant@everything      0B      -       24K  -
SSD/Docker/HomeAssistant@just_docker     0B      -       24K  -
SSD/Vms@everything                       0B      -       24K  -
SSD/Vms/Ubuntu@everything                0B      -       24K  -
SSD/Vms/Ubuntu@ubuntu_snapshot           0B      -       24K  -
SSD/Vms/libvirt@everything               0B      -       24K  -

You can see that at first we did not have any snapshots. After creating the first recursive snapshot we have the "@everything" snapshot on every level, we only have "@just_docker" on the Docker-related file systems, and the only one that has "ubuntu_snapshot" is the Ubuntu VM file system.

 

Let's say we make a snapshot and then destroy the Ubuntu VM with a misguided update. We can just power it off and run

zfs rollback -r SSD/Vms/Ubuntu@ubuntu_snapshot

and we are back at the state the VM was in before we ran the update.

 

One can also access the snapshots (read only) via a hidden folder called .zfs:

root@Tower:~# ls /mnt/SSD/Vms/Ubuntu/.zfs/snapshot/
everything/  ubuntu_snapshot/
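
That hidden folder is also handy for pulling a single file back out of a snapshot without rolling back the whole file system; a sketch with a made-up file name:

cp -a /mnt/SSD/Vms/Ubuntu/.zfs/snapshot/ubuntu_snapshot/vdisk1.img /mnt/SSD/Vms/Ubuntu/   # vdisk1.img is just an example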

 

Automatic snapshots

If you want automatic snapshots I recommend ZnapZend, and I have made a plugin available for it here: ZnapZend

 

There is more information in the plugin thread, but to get up and running you can install it via the plugin page in the unRAID GUI or through Community Applications.

https://raw.githubusercontent.com/Steini1984/unRAID6-ZnapZend/master/unRAID6-ZnapZend.plg

Then run these two commands to start and auto-start the program on boot:

znapzend --logto=/var/log/znapzend.log --daemonize
touch /boot/config/plugins/unRAID6-ZnapZend/auto_boot_on

Then you can turn on automatic snapshots with this command: 

znapzendzetup create --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD

The setup is pretty readable: this example makes automatic snapshots and keeps 24 snapshots a day (hourly) for 7 days, 6 snapshots a day (every 4 hours) for 30 days, and then a single snapshot per day for 90 days.
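
To check what was stored, znapzendzetup can list the configuration back (as far as I know the list sub-command takes an optional dataset argument):

znapzendzetup list SSD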

 

The snapshots are also named in an easy-to-read format:

root@Tower:~# zfs list -t snapshot SSD/Docker/HomeAssistant
NAME                                         USED  AVAIL     REFER  MOUNTPOINT
SSD/Docker/HomeAssistant@2019-11-12-000000  64.4M      -     90.8M  -
SSD/Docker/HomeAssistant@2019-11-13-000000  46.4M      -     90.9M  -
SSD/Docker/HomeAssistant@2019-11-13-070000  28.4M      -     92.5M  -
SSD/Docker/HomeAssistant@2019-11-13-080000  22.5M      -     92.6M  -
SSD/Docker/HomeAssistant@2019-11-13-090000  29.7M      -     92.9M  -
......
SSD/Docker/HomeAssistant@2019-11-15-094500  14.4M      -     93.3M  -
SSD/Docker/HomeAssistant@2019-11-15-100000  14.4M      -     93.4M  -
SSD/Docker/HomeAssistant@2019-11-15-101500  17.2M      -     93.5M  -
SSD/Docker/HomeAssistant@2019-11-15-103000  26.8M      -     93.7M  -

Let's say that we need to go back in time to a good configuration. We know we made a mistake after 10:01, so we can roll back to 10:00:

zfs rollback -r SSD/Docker/HomeAssistant@2019-11-15-100000

 

Backups

I have a USB-connected Buffalo drive station with 2x 2TB drives which I have added for backups.

I decided on a mirror and created it with this command:

zpool create External mirror sdb sdc

Then I created a couple of file systems:

zfs create External/Backups
zfs create External/Backups/Docker
zfs create External/Backups/Vms
zfs create External/Backups/Music
zfs create External/Backups/Nextcloud
zfs create External/Backups/Pictures

I use rsync for basic files (Music, Nextcloud & Pictures) and run this in my crontab:

#Backups
0 12 * * * rsync -av --delete /mnt/user/Nextcloud/ /External/Backups/Nextcloud >> /dev/null
0 2 * * * rsync -av --delete /mnt/user/Music/ /External/Backups/Music >> /dev/null
1 2 * * * rsync -av --delete /mnt/user/Pictures/ /External/Backups/Pictures >> /dev/null

Then I run automatic snapshots on the USB pool (keeping a year's worth):

znapzendzetup create --recursive SRC '14days=>1days,365days=>1weeks' External

The automatic snapshots on the ZFS side make sure that I have backups of files that get deleted between rsync runs (files that are created and deleted within the same day will still be lost if accidentally deleted in unRAID).

 

Replication

ZnapZend supports automatic replication, and I send my snapshots to the USB pool with the commands below.

I have not run into space issues... yet. But this setup means snapshot retention on the USB pool for 10 years (let's see when I need to reconsider):

znapzendzetup create --send-delay=21600 --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD/Vms DST:a '90days=>1days,1years=>1weeks,10years=>1months'  External/Backups/Vms

znapzendzetup create --send-delay=21600 --recursive SRC '7d=>1h,30d=>4h,90d=>1d' SSD/Docker DST:a '90days=>1days,1years=>1weeks,10years=>1months'  External/Backups/Docker
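
If you do not want to wait for the schedule to see whether replication works, znapzend can do a single verbose dry run for one dataset (check znapzend --help for the exact flags on your version):

znapzend --debug --noaction --runonce=SSD/Vms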

 

Scrub 

Scrubs are used to maintain the pool, kind of like parity checks, and I run them from a cronjob:

#ZFS Scrub
30 6 * * 0 zpool scrub SSD >> /dev/null
4 2 4 * * zpool scrub External >> /dev/null
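
The result of the last scrub (and any errors it found) shows up in the pool status:

zpool status SSD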

 

New ZFS versions:

The plugin checks on each boot whether a newer ZFS version is available, and downloads and installs it (with default settings the update check is active).

 

If you want to disable this feature, simply run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

If you have disabled this feature already and want to enable it again, run this command from an unRAID terminal:

sed -i '/check_for_updates=/c\check_for_updates=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature needs an active internet connection on boot.

If you run, for example, AdGuard/PiHole/pfSense/... on unRAID, it is very likely that you have no active internet connection at boot; the update check will then fail and the plugin will fall back to installing the currently available local ZFS package.

 

New unRAID versions:

Please also keep in mind that ZFS has to be compiled for every new unRAID version.

I would recommend waiting at least two hours after a new unRAID version is released before upgrading (Tools -> Update OS -> Update) because of the compile/upload process involved.

 

Currently the process is fully automated for all plugins that need packages for each individual kernel version.

 

The Plugin Update Helper will also inform you if a download failed when you upgrade to a newer unRAID version; this is most likely to happen when the compilation isn't finished yet or some error occurred during compilation.

If you get an error from the Plugin Update Helper, I would recommend creating a post here and not rebooting yet.

[screenshot: Plugin Update Helper warning]

 

 Unstable builds 

Now with the ZFS 2.0.0 RC series I have enabled unstable builds for those who want to try them out:

*ZFS 2.0.0 is out, so there is no need to use these builds anymore.

 

If you want to enable unstable builds, simply run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=true' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

 

If you have enabled this feature already and want to disable it, run this command from an unRAID terminal:

sed -i '/unstable_packages=/c\unstable_packages=false' "/boot/config/plugins/unRAID6-ZFS/settings.cfg"

Please note that this feature also needs an active internet connection on boot, just like the update check (if no unstable package is found, the plugin will automatically set this option back to false so that it stops pulling unstable packages - unstable packages are generally not recommended).

 

Extra reading material

This hopefully got you started, but this example was based on my setup and ZFS has so much more to offer. Here are some links I wanted to share.

 

 

Is there any guideline on how to correctly migrate ZFS pool(s) from this plugin to the official Unraid 6.12 support? Or will the steps below be enough?

1. Export the pool(s).

2. Undo all the changes made through the quoted guideline.

3. Update Unraid to 6.12.

  • 4 weeks later...
  • 3 months later...

Hello, could I please get some help with date formats to set up Shadow Copies?

 

I'm still on 6.11.4 with a pool created in TrueNAS and wanted to get Shadow Copies working in Windows, but first I need to work out the correct date format for both my User Scripts script:

 

#!/bin/bash
DATE=$(date +%B%y)
zfs snapshot -r chimera@auto-$DATE

(this used to be "zfs snapshot -r chimera@auto-`date +%B%y`" but I just updated it to the above while testing, to match the format).

 

and my "shadow: format = auto-" setting in /boot/config/smb-extra.conf (which only contains the ZFS pool share).

 

If I leave "shadow: format = auto-" and "shadow: localtime = yes" then all the previous versions have the same date. Any other combo and I don't see any.

 

Here's the output of zfs list -t snapshot showing the current format;

 

zfs list -t snapshot
NAME                            USED  AVAIL     REFER  MOUNTPOINT
chimera@auto-April23              0B      -      224K  -
chimera@auto-May23                0B      -      224K  -
chimera@auto-June23               0B      -      224K  -
chimera@auto-July23               0B      -      224K  -
chimera@auto-August23             0B      -      224K  -
chimera@auto-September23          0B      -      224K  -
chimera@auto-October23            0B      -      224K  -
chimera/data@auto-April23      31.0G      -     20.0T  -
chimera/data@auto-May23         594M      -     21.1T  -
chimera/data@auto-June23       45.5G      -     21.9T  -
chimera/data@auto-July23        103G      -     22.3T  -
chimera/data@auto-August23      118G      -     22.9T  -
chimera/data@auto-September23  89.8G      -     23.5T  -
chimera/data@auto-October23    34.7G      -     24.8T  -

 

This pool is only hosting media so I didn't think it'd be worthwhile setting up zfs-auto-snapshot.sh

 

Fixed it with: "shadow: format = auto-%B%y"
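
For anyone else setting this up, a complete share definition in smb-extra.conf usually looks something like the following (the share name and path are placeholders for your own setup; the shadow_copy2 options are standard Samba vfs_shadow_copy2 parameters):

[chimera]
   path = /mnt/chimera/data
   vfs objects = shadow_copy2
   shadow: snapdir = .zfs/snapshot
   shadow: sort = desc
   shadow: format = auto-%B%y
   shadow: localtime = yes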

Edited by Akshunhiro
FIXED!

Hi,

 

I use 6.11.5 and want to update to 6.12.

I had two zpools created as raidz1 on 6.11.5 with the plugin.

After the update I have to add a new pool and select all the devices from my previous zpool.

 

So, two questions:

1. Must the name of the new pool be set to the name of the zpool, or can it be a new name?

2. Must I select raidz1 in the setup, or how does it work?

 

Thanks.

Edited by stubennatter
5 hours ago, stubennatter said:

1. Must the name of the new pool be set to the name of the zpool, or can it be a new name?

Unraid will automatically change the zpool name to be the same as the pool name.

 

5 hours ago, stubennatter said:

2. Must I select raidz1 in the setup, or how does it work?

Leave the fs set to "auto"; it should import the existing pool.

  • 2 weeks later...
On 6/16/2023 at 6:21 AM, Keniji said:

 

Is there any guideline on how to correctly migrate ZFS pool(s) from this plugin to the official Unraid 6.12 support? Or will the steps below be enough?

1. Export the pool(s).

2. Undo all the changes made through the quoted guideline.

3. Update Unraid to 6.12.

 

I'm sorry to sound like a complete dummy, but I'm completely unable to find out what I'm supposed to do after I update to 6.12.4 in that situation. So far, I have:

 

- created a 3-way mirror on 6.9.2 from command-line using the plugin

- exported the zpool

- upgraded to 6.12.4

- my three drives show up unassigned and "passed through"

- I stoped the array and clicked "add pool" with three slots

 

At this point, my three slots are unassigned and I have the option of assigning them to the same three drives as before.

 

Is this what I should do? Is there any chance at all that if I do that the data on the drives will be lost?

 

If I do this, should I expect that unRAID will recognize the exported mirror zpool and import it correctly once I restart the array?

 

Thanks for any pointers!

10 hours ago, nanoserver said:

Is this what I should do? Is there any chance at all that if I do that the data on the drives will be lost?

 

Make sure the pool is exported first

Create a new Unraid pool

Assign the 3 devices to the pool, leave the fs set to auto

Start array

 

The existing pool should be imported, and if it's not, it won't be damaged.
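
For the export step, it is just a one-liner from a terminal while still on the old version (use your own pool name; SSD is the example name from the guide earlier in this thread):

zpool export SSD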

  • 2 weeks later...
On 1/26/2022 at 8:12 PM, jortan said:

 

zpool import is what you wanted here, not zpool create

 

I suggest that before you do anything else, you zpool export the pool (or just disconnect the drive) to prevent any further writing, and consider your options (but I'm not sure if there are any)

Hi, I am in a similar situation (but have not destroyed my data yet). All I want to do is copy data from an external USB drive onto my unraid pool. This is a one-shot operation. The drive was attached to a TrueNAS system as a single-drive pool, and data was copied to it. What are the steps (commands) I need to run to mount the drive? Once it is attached, I'll just cp -a the content to the unraid pool.

Thanks!


There are good reasons not to upgrade. If it ain't broke, don't fix it, for one. In my case, this is a mission critical server, with a complicated and multi-function setup. It is in a remote location. It is simply not worth the risk of something not going well during an upgrade when all I want to do is get some data off of a ZFS drive. The server is just fine as it is.

 

Copying this data is a one-off situation, and is unlikely to happen again. I will not need ZFS for the foreseeable future.

 

So now that that is out of the way, would you care to explain how to do it?


Well, it's your choice, but I would really recommend keeping it up to date; there are also a lot of risks in not updating. Anyway, I'm not normally one of those people who won't answer a question because of some other thing I don't like (I hate that), but in this case updating would actually solve what I think is your problem - you don't have ZFS installed.

 

So on that, can you confirm - are you asking how to install the ZFS plugin? Or are you asking how to transfer files using ZFS once you have the plugin installed?

 

Assuming the former - have you tried going into Community Applications (the app store) and checking whether the plugin is there, and then installing it? If it's not available, you will need a version of the plugin that matches your installed unRAID version, as the builds are compiled for the kernel of that specific unRAID release. I would suggest starting there and reporting back. I'm not running an old version, so I'm unable to test.
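
And assuming the plugin does install, the one-off copy itself would presumably boil down to something like this (the pool name and paths are placeholders; a read-only import keeps the TrueNAS pool untouched):

zpool import                                   # list pools found on attached disks
zpool import -o readonly=on -R /mnt tnpool     # tnpool = whatever the pool is called
zfs list -r tnpool                             # see where the datasets mounted
cp -a /mnt/tnpool/somedataset/. /mnt/user/someshare/
zpool export tnpool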

 

Thanks.
