ZFS plugin for unRAID


steini84

Recommended Posts

That doesn't seem to make a lot of sense, does it? I assume you know about zpool upgrade, and that shouldn't be impacting this. Assuming you're using the plugin, perhaps just uninstall and reinstall the ZFS plugin and reboot. Also make sure this is the latest unRAID version, as some recent versions were problematic. And if your Docker data is in a folder rather than in an image, I would suggest trying it as an image; Docker folders on ZFS seem to be hit and miss depending on your luck. Can't think of anything else to try right now.
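
For reference, checking whether a pool still has a feature upgrade pending looks roughly like this (the pool name is just an example):

  zpool status tank      # the status output notes when supported features are not enabled
  zpool upgrade          # lists pools that are not using all supported features
  zpool upgrade tank     # enables all supported feature flags on this pool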

Link to comment
2 hours ago, Marshalleq said:

That doesn't seem to make a lot of sense, does it? I assume you know about zpool upgrade, and that shouldn't be impacting this. Assuming you're using the plugin, perhaps just uninstall and reinstall the ZFS plugin and reboot. Also make sure this is the latest unRAID version, as some recent versions were problematic. And if your Docker data is in a folder rather than in an image, I would suggest trying it as an image; Docker folders on ZFS seem to be hit and miss depending on your luck. Can't think of anything else to try right now.

ZFS was not responsible for the problem. I have a small cache drive, and some of the files for Docker and VM startup still come from there. This drive didn't show up on boot; powering down and making sure it came up resulted in the VMs and Docker behaving normally. I need to get all appdata files moved to ZFS and off this drive, as I am not using it for anything else.
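
For what it's worth, the move itself should just be a copy plus a settings change; a rough sketch, assuming the pool is called tank and mounted at /mnt/tank (stop the Docker service first):

  zfs create tank/appdata
  rsync -avh --progress /mnt/cache/appdata/ /mnt/tank/appdata/
  # then point the Docker appdata location at /mnt/tank/appdata and fix any
  # container host paths that still reference /mnt/cache/appdata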

Link to comment

Just to confirm, is there ANY difference between running ZFS compiled into the kernel via the ich777 kernel builder vs. using the ZFS plugin?

Reason behind the question:
As I need the gnif vendor-reset for my AMD GPU, which is only available via the kernel-build option, I moved ZFS to the kernel build some time ago at the same time, for convenience.
After going through a few version upgrades, where during the process I lose ZFS (as step one is to install a clean unRAID build and only then compile a new kernel with the docker) while all my dockers/VMs etc. are on ZFS and therefore unavailable, I have to go through complex and error-prone temporary hoops to do this comfortably.
So in my setup I'm thinking of moving back to the ZFS plugin and leaving the kernel build only for the vendor-reset patch, so I can keep ZFS working during the entire upgrade process when I move up (or down) unRAID versions.

Link to comment

The kernel build is more flexible and you can compile things in, as you know. The plugin is less flexible in that regard but simpler when kernels change. Yeah, each has its pros and cons.

 

But yes, as far as ZFS goes I think there is no difference other than occasional version mismatches, which are usually inconsequential.

Link to comment

Update:

Guess I'll stick with the kernel build, as I cannot get the plugin to load on a compiled 6.9.2 kernel (clean, with only the gnif vendor-reset as an extra module), whereas on the stock 6.9.2 kernel the plugin works fine. Maybe some slight version difference or something, but during the modprobe load of the ZFS module the kernel crashes. If I include ZFS during the build, ZFS works fine (with the plugin removed, of course).

Never mind, I will stick with a slightly more complex workflow then. No huge issue, as updates are not that often.
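
For anyone hitting the same thing, a couple of commands that show where ZFS is coming from before (or after) the plugin tries to load it; output is only indicative:

  lsmod | grep zfs               # present only when ZFS is loaded as a module (plugin)
  cat /sys/module/zfs/version    # version of the ZFS code currently active
  dmesg | tail -n 50             # kernel messages from a failed module load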

Link to comment
11 hours ago, Marshalleq said:

Ah yeah, that annoying appdata folder that sometimes insists on creating itself in a location not set in the defaults!

I've attempted to move the Docker image to ZFS along with appdata. VMs are working; Docker refuses to start. Do I need to adjust the BTRFS image type?

 

Correction: VMs are not working once the old cache drive is disconnected.

 

[screenshot attached]

Edited by tr0910
Link to comment
  • 3 weeks later...

Not claiming to have it set up properly (I think Docker still puts something on the unRAID array due to the btrfs vDisk setting), but here's my setup. Both dockers & docker.img are on a ZFS dataset & running fine with snapshots.

 

Our setups look very similar, so the only thing I remember having to fix when I initially moved it was to delete the docker.img & let it rebuild itself; got the fix here.
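
In case it helps anyone, the rebuild itself is only a few steps; the image path below is an example, use whatever your Docker settings point at:

  /etc/rc.d/rc.docker stop                 # or disable Docker under Settings > Docker
  rm /mnt/tank/system/docker.img           # delete the old image
  /etc/rc.d/rc.docker start                # a fresh docker.img is created on start
  # then re-add the containers from their saved templates (Apps > Previous Apps)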

 

[screenshot attached]

Edited by OneMeanRabbit
Link to comment

The problem is with using a ZFS version newer than 2.0.0. I tried saving the img file on ZFS (2.0.4) and both btrfs and xfs locked up the system; something to do with the loopback mount. Too bad I could not use the folder mapping either, as I always got an error that the Docker service could not be started.

I moved the docker.img to my cache drive, and I don't care too much since it's disposable. I think I will make the latest ZFS the default for the next unRAID upgrade and add a disclaimer that you cannot save the docker.img on ZFS if the problem is still present.


Sent from my iPhone using Tapatalk
Link to comment

So I did something either very stupid or very genius, feel free to comment:

- Installed the unRAID ZFS plugin
- Got familiar with the command-line operations and ZFS in general
- Tuned using fio for large-file storage, went for ashift 12 and recordsize 1M
- Created a raidz pool with 4 pieces of rust and a bunch of datasets (mounted at root, not /mnt/user); rough commands sketched after this list
- Configured Docker for a directory on ZFS, reinstalled my Docker apps, tweaked some paths, and everything is up and running OK
- Limited ARC size to 2 GB of memory (sorry, need my memory for Docker/VMs)
- Happily rsyncing terabytes of data back
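
A rough sketch of the commands behind those steps (pool, disk and dataset names are made up, adjust to your own hardware):

  zpool create -o ashift=12 -m /tank tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
  zfs create -o recordsize=1M tank/media                      # large-file dataset
  zfs create tank/docker                                      # directory-based Docker storage
  echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max    # limit ARC to 2 GB
  # persist the ARC limit by adding that echo line to /boot/config/go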

So far so good, but my fingers are getting blue; time to get some GUI. Let's do something crazy...

- Installed a FreeBSD VM with the latest TrueNAS version, a 10G virtio image and 4 GB of RAM
- After it was up and running, shut it down and modified the XML to pass through my ZFS block devices (not the whole controller)
- Started the TrueNAS VM, pretended a nosebleed, and just imported the zpool, which is also imported and in use on unRAID

- During import I got one warning: 
[screenshot of the import warning]


To my utter surprise it imported, and I got the nice TrueNAS GUI available for managing my snapshots.

Seems to work so far, any objections anybody?

Link to comment
9 minutes ago, praaphorst said:

So I did something either very stupid or very genius, feel free to comment:

- Installed the unRAID ZFS plugin
- Got familiar with the command-line operations and ZFS in general
- Tuned using fio for large-file storage, went for ashift 12 and recordsize 1M
- Created a raidz pool with 4 pieces of rust and a bunch of datasets (mounted at root, not /mnt/user)
- Configured Docker for a directory on ZFS, reinstalled my Docker apps, tweaked some paths, and everything is up and running OK
- Limited ARC size to 2 GB of memory (sorry, need my memory for Docker/VMs)
- Happily rsyncing terabytes of data back

So far so good, but my fingers are getting blue; time to get some GUI. Let's do something crazy...

- Installed a FreeBSD VM with the latest TrueNAS version, a 10G virtio image and 4 GB of RAM
- After it was up and running, shut it down and modified the XML to pass through my ZFS block devices (not the whole controller)
- Started the TrueNAS VM, pretended a nosebleed, and just imported the zpool, which is also imported and in use on unRAID

- During import I got one warning:
[screenshot of the import warning]

To my utter surprise it imported, and I got the nice TrueNAS GUI available for managing my snapshots.

Seems to work so far, any objections anybody?

Please share your XML!

Link to comment
2 minutes ago, Dtrain said:

Please share your XML!

Sure, but remember this is highly experimental; you might wreck your pool for all I know. I've got everything backed up, so you have been warned.

For each block device used in your zpool, add it to your VM using the following XML fragment under the <devices> section:

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='writeback'/>
    <source dev='/dev/sdb'/>
    <target dev='hdd' bus='sata'/>
    <address type='drive' controller='0' bus='0' target='0' unit='3'/>
  </disk>

You can also use /dev/disk/by-id, just to be sure ;-)

Link to comment
On 5/23/2021 at 7:43 PM, steini84 said:

Of course it’s always better to turn off the VM if you are doing something like a migration.

OK, it's better, but is there any knowledge or experience on whether it's possible?

Maybe someone else besides steini84 has answers?

Link to comment
2 minutes ago, JoergHH said:

OK, it's better, but is there any knowledge or experience on whether it's possible?

It's possible, but the VM will be in a crash-consistent state, i.e., the same thing as pulling the plug, so it can be done but there's always a risk. I, for example, make daily snapshots of my VMs online (with btrfs, but the principle is the same), but I also do snapshots with the VMs shut down at least once a week, so I have more options.

  • Like 1
Link to comment

I forked a script called borgsnap a while ago to add some needed features for Unraid and my use case. It allows you to create automated, incremental-forever backups using ZFS snapshots to a local and/or remote borgbackup repository. I've posted a guide here.

 

It includes pre/post snapshot scripts, so you can automate shutting down VMs briefly while the snapshot is taken.
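
The general idea of such a pre/post wrapper looks something like this (illustration only, not borgsnap's actual hook format; the VM name and dataset are made up):

  virsh shutdown MyVM                                          # ask the guest to shut down cleanly
  while virsh list --name | grep -qx MyVM; do sleep 2; done    # wait until it is actually off
  zfs snapshot tank/vm/MyVM@backup-$(date +%Y%m%d)             # snapshot while the VM is down
  virsh start MyVM                                             # bring the VM back up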

Edited by jortan
  • Like 1
  • Thanks 1
Link to comment
On 4/29/2021 at 10:36 AM, glennv said:

Update:

Guess I'll stick with the kernel build, as I cannot get the plugin to load on a compiled 6.9.2 kernel (clean, with only the gnif vendor-reset as an extra module), whereas on the stock 6.9.2 kernel the plugin works fine. Maybe some slight version difference or something, but during the modprobe load of the ZFS module the kernel crashes. If I include ZFS during the build, ZFS works fine (with the plugin removed, of course).

Never mind, I will stick with a slightly more complex workflow then. No huge issue, as updates are not that often.

 

Just found out today from ich777 that the gnif/vendor-reset patch, when compiled into the kernel, prevents plugin kernel modules such as ZFS from loading (and can, as in my case, crash the kernel). This may change soon depending on ongoing dev/testing efforts.

So for now, a good-to-know gotcha!

If you need ZFS together with the gnif patch, for now you need to compile them both into the kernel at the same time.

  • Like 1
Link to comment
  • 2 weeks later...

Has anyone been able to successfully add a ZFS path in Krusader? My pool is good and works great with an SMB share, but I need to transfer some stuff locally. Not sure if it's permissions getting in the way or what, but I can't get it to work, even though the folder path is discoverable when I go to add the path to the docker.

 

Any help would be appreciated! Thanks in advance.

Link to comment

I don't use Krusader, but I installed the binhex Krusader docker for a quick test, and I just defined the path to a newly created test ZFS dataset under the Host Path variable, then clicked the edit button next to it and set the access mode to Read/Write - Slave.
When you start the docker, you will find the content under the /media folder inside the container.
All working as normal. The trick may be the access mode; I forget exactly why, but I remember needing it for anything outside the array.
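
For context, that Read/Write - Slave access mode roughly corresponds to a bind mount with slave propagation; a minimal sketch with a made-up dataset path (use whichever Krusader image your template points at):

  docker run -d --name krusader-test \
    -v /mnt/tank/test:/media:rw,slave \
    binhex/arch-krusader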
  • Like 1
Link to comment
