
ZFS plugin for unRAID


steini84


Posted
On 3/8/2020 at 11:24 PM, steini84 said:

Updated for unRAID 6.9.0-beta1 (kernel 5.5.8)

First, steini84, thanks for your amazing work!

I just have a short question.

Do you have any information about a newer zfsonlinux version (> 0.8.3) that supports kernel 5.5? 0.8.3 only supports kernels 2.6.32 - 5.4.

I was not able to find a solution for this issue, but I found error reports regarding the 5.5 and 5.6 kernels.

Additionally, I think there were some discussions about a functionality change that breaks modules that are not fully GPL compliant.

Thanks

Posted

Hello all,

 

Does anyone have any experience with ZFS disk images to use for VMs?

https://docs.oracle.com/cd/E69554_01/html/E69557/storingdiskimageswithzfs.html

 

It would be great if we could snapshot the VMs. Right now I'm snapshotting the qemu .img, but I'm not sure that works the way I think it does.

Posted
On 4/6/2020 at 12:35 PM, Namru said:

First, steini84, thanks for your amazing work!

I just have a short question.

Do you have any information about a newer zfsonlinux version (> 0.8.3) that supports kernel 5.5? 0.8.3 only supports kernels 2.6.32 - 5.4.

I was not able to find a solution for this issue, but I found error reports regarding the 5.5 and 5.6 kernels.

Additionally, I think there were some discussions about a functionality change that breaks modules that are not fully GPL compliant.

I have heard some discussion about it, but to be honest I do not run the 6.9 beta with kernel 5.5.x, so I have not run into any issues. I am on stable 6.8.3 with kernel 4.19.107, which is running fine with ZoL 0.8.3.

 

You can watch the progress of OpenZFS here: https://github.com/openzfs/zfs

Posted (edited)

How stable is this plugin? I want to create a raidz2 array for critical data.

Does zfs send/receive work with it?

Thanks in advance

Edited by nerddude
Posted
How stable is this plugin? I want to create a raidz2 array for critical data. Does zfs send/receive work with it?

Thanks in advance

 

It's just OpenZFS on Linux. Nothing taken out, nothing added.

 

For what it's worth, I have run the same ZFS pool on unRAID since 2015 without any problems.

 

ZFS send and recv work fine.
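For example, a raidz2 pool plus a basic send/receive to a second pool looks roughly like this (the pool, dataset and device names below are only placeholders - adjust them to your own setup):

zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zfs create tank/data

# take a snapshot and replicate it to another pool
zfs snapshot tank/data@2020-04-07
zfs send tank/data@2020-04-07 | zfs receive backup/data

# later snapshots can be sent incrementally
zfs snapshot tank/data@2020-04-08
zfs send -i tank/data@2020-04-07 tank/data@2020-04-08 | zfs receive backup/data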

 

Sent from my iPhone using Tapatalk

Posted (edited)
20 hours ago, TheSkaz said:

I previously posted about a kernel panic under heavy load, and it seems this was addressed 6 days ago:

 

https://github.com/openzfs/zfs/pull/10148

 

Is there a way we can get this implemented, or does anyone know of a workaround?

I built a version for Linux 5.5.8 for you from the latest master (https://github.com/openzfs/zfs/tree/7e3df9db128722143734a9459771365ea19c1c40), which includes the fix you referenced.

 

You can find that build here

https://www.dropbox.com/s/i6cuvqnka3y64vs/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz?dl=0

https://www.dropbox.com/s/6e64mb8j9ynokj8/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz.md5?dl=0

 

Copy both of these files over the existing files in /boot/config/plugins/unRAID6-ZFS/packages/ and reboot.
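For example, assuming you downloaded both files to /tmp on the server (that path is just an example), something like this should do it:

cp /tmp/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz /boot/config/plugins/unRAID6-ZFS/packages/
cp /tmp/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz.md5 /boot/config/plugins/unRAID6-ZFS/packages/
reboot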

 

To verify that you are on the correct build, type dmesg | grep ZFS and you should see "ZFS: Loaded module v0.8.0-1" (the version name from the master branch's META file: https://github.com/openzfs/zfs/blob/7e3df9db128722143734a9459771365ea19c1c40/META).

 

FYI, only kernel versions up to 5.4 are officially supported according to the META file above.

 

Have fun :)

Edited by steini84
Posted (edited)
On 4/8/2020 at 12:27 PM, steini84 said:

I built a version for Linux 5.5.8 for you from the latest master (https://github.com/openzfs/zfs/tree/7e3df9db128722143734a9459771365ea19c1c40), which includes the fix you referenced.

 

You can find that build here

https://www.dropbox.com/s/i6cuvqnka3y64vs/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz?dl=0

https://www.dropbox.com/s/6e64mb8j9ynokj8/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz.md5?dl=0

 

Copy both of these files over the existing files in /boot/config/plugins/unRAID6-ZFS/packages/ and reboot.

 

To verify that you are on the correct build, type dmesg | grep ZFS and you should see "ZFS: Loaded module v0.8.0-1" (the version name from the master branch's META file: https://github.com/openzfs/zfs/blob/7e3df9db128722143734a9459771365ea19c1c40/META).

 

FYI, only kernel versions up to 5.4 are officially supported according to the META file above.

 

Have fun :)

Thank you so much!!!!!!!

 

[   88.290629] ZFS: Loaded module v0.8.3-1, ZFS pool version 5000, ZFS filesystem version 5

 

I have it installed, and currently stress testing. Let's hope this works!

Edited by TheSkaz
Posted
6 hours ago, TheSkaz said:

Thank you so much!!!!!!!

 

[   88.290629] ZFS: Loaded module v0.8.3-1, ZFS pool version 5000, ZFS filesystem version 5

 

I have it installed, and currently stress testing. Let's hope this works!

I would double-check that you overwrote the right file. You should have gotten version v0.8.0-1 (it's lower, I know, but that is the current version given on the master branch on GitHub). You can double-check by running these two commands; you should get exactly the same output:

root@Tower:~# dmesg | grep -i ZFS
[   30.852570] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5
root@Tower:~# md5sum /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz
8cdee7a7d6060138478a5d4121ac5f96  /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz

 

Posted (edited)
On 4/10/2020 at 12:38 PM, steini84 said:

I would double-check that you overwrote the right file. You should have gotten version v0.8.0-1 (it's lower, I know, but that is the current version given on the master branch on GitHub). You can double-check by running these two commands; you should get exactly the same output:


root@Tower:~# dmesg | grep -i ZFS
[   30.852570] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5
root@Tower:~# md5sum /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz
8cdee7a7d6060138478a5d4121ac5f96  /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.3-unRAID-6.9.0-beta1.x86_64.tgz

 

 

What I did was upload the files using WinSCP:

[screenshot attachment]

 

and then rebooted. Once it comes back up, it shows this:

 

[screenshot attachment]

 

It seems to be reverting.

 

I assume I don't need to rename the files, right?

Edited by TheSkaz
Posted
 

What I did was upload the files using WinSCP:

[screenshot attachment]

 

and then rebooted. Once it comes back up, it shows this:

 

[screenshot attachment]

 

It seems to be reverting.

 

I assume I don't need to rename the files, right?

Whoops. I assumed you were on unRAID 6.9 beta 1 - I will make a build for unRAID 6.8.3 for you tomorrow.

 

 

Sent from my iPhone using Tapatalk

Posted
7 minutes ago, steini84 said:

Whoops. I assumed you were on unRAID 6.9 beta 1 - I will make a build for unRAID 6.8.3 for you tomorrow.

 

 

Sent from my iPhone using Tapatalk

Thank you so much! Sorry for the headache.

Posted
On 4/15/2020 at 10:43 PM, TheSkaz said:

Thank you so much! Sorry for the headache.

https://www.dropbox.com/s/zwq418jq6t3ingt/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz?dl=0

https://www.dropbox.com/s/qdkq4c3wqc5698o/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5?dl=0

 

Just overwrite these files, and double-check that you actually have files to overwrite - the filenames should be identical. You have to copy both files; otherwise the md5 check will fail and the plugin will re-download the released binary.
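If you want to be sure the copy went through before rebooting, you can also compare the checksum by hand (the paths assume the default plugin location):

cd /boot/config/plugins/unRAID6-ZFS/packages/
md5sum zfs-0.8.3-unRAID-6.8.3.x86_64.tgz
cat zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5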

 

Just check after a reboot

root@Tower:~# dmesg | grep ZFS
[ 4823.737658] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5

 

Posted
5 minutes ago, steini84 said:

https://www.dropbox.com/s/zwq418jq6t3ingt/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz?dl=0

https://www.dropbox.com/s/qdkq4c3wqc5698o/zfs-0.8.3-unRAID-6.8.3.x86_64.tgz.md5?dl=0

 

Just overwrite these files, and double-check that you actually have files to overwrite - the filenames should be identical. You have to copy both files; otherwise the md5 check will fail and the plugin will re-download the released binary.

 

Just check after a reboot


root@Tower:~# dmesg | grep ZFS
[ 4823.737658] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5

 

root@Tower:~# dmesg | grep ZFS
[   61.092654] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5

 

Thank you so much for your help and quick turnarounds. 

Posted

Hi - long story short: I installed the ZFS plugin, it worked well, but I did not need it in the end, so I destroyed the pool, removed the ZFS plugin and rebooted. I am left with some traces and somehow still have something mounted and/or created under /mnt. How do I get rid of this automatic creation of a folder in /mnt?

 

Full story: I originally had an XFS Unassigned Device for backup purposes which was working ok. It was mounted under /mnt/disks/ssd500gb.

Thinking I had lots of duplicate files and/or blocks on it, since I am backing up several clients to it, I installed the ZFS plugin, moved all my data elsewhere, created my pool, and mounted it under /mnt/ssd500gb.

This worked well for a while, until I decided not to use ZFS anymore and to move back to an XFS Unassigned Device (the dedup ratio was only 1.05, so it was not worth it in the end). I therefore destroyed the pool ssd500gb, removed the partitions on the Unassigned Device, recreated an XFS ssd500gb partition, and marked it as auto-mounted in the Unraid UI.

 

The problem I have now is that I see both:

/mnt/ssd500gb (seems to be from the old ZFS pool)

/mnt/disks/ssd500gb (seems to be the new XFS partition, and it is mounted)

 

I am pretty sure the first one is from the ZFS stuff, as mentioned, but I cannot seem to get rid of it... deleting it with rm -rf does not persist; upon reboot it comes back. I can copy files back to /mnt/disks/ssd500gb, so I think the XFS UD is working again.

 

Would you know what is remounting and/or recreating something in /mnt and how to make it stop?

 

 

Thanks! 

Posted

For me, destroying the pool does the job. You can try to reinstall the ZFS plugin and issue zpool status or zpool import -a to see if there is still something left.

 

For all the others: I have found out how to use zvols for VM storage (so you can make use of snapshots; with a raw .img you can't. I only had success with qcow2 on Ubuntu/Debian servers; desktop VMs failed to do snapshots on qcow2).

 

zfs create -V 50G pool/zvolname

 

Then set the VM config for the disk to manual: /dev/zvol/pool/zvolname

And set the type to virtio or sata (whatever works for you; virtio is still the best performance-wise).
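Snapshots of the zvol then work just like on any other dataset, for example (the names are placeholders, and ideally shut the VM down first so the disk is consistent):

zfs snapshot pool/zvolname@pre-update
# if something goes wrong inside the VM, roll the whole disk back
zfs rollback pool/zvolname@pre-update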

 

I've also figured out how to snapshot the right way with znapzendzetup (also provided as an unRAID plugin by steini84), including sending to different datasets to ensure server uptime. If anyone needs a hand, let me know.

Posted
3 minutes ago, ezra said:

For me, destroying the pool does the job. You can try to reinstall the ZFS plugin and issue zpool status or zpool import -a to see if there is still something left.

 

For all the others: I have found out how to use zvols for VM storage (so you can make use of snapshots; with a raw .img you can't. I only had success with qcow2 on Ubuntu/Debian servers; desktop VMs failed to do snapshots on qcow2).

 

zfs create -V 50G pool/zvolname

 

Then set the VM config for the disk to manual: /dev/zvol/pool/zvolname

And set the type to virtio or sata (whatever works for you; virtio is still the best performance-wise).

 

I've also figured out how to snapshot the right way with znapzendzetup (also provided as an unRAID plugin by steini84), including sending to different datasets to ensure server uptime. If anyone needs a hand, let me know.

root@Tower:~# zpool status
no pools available
root@Tower:~# zpool import -a
no pools available to import

 

I cannot seem to find what is recreating that folder... 

drwxr-xr-x  3 root   root   60 Apr 22 14:56 ssd500gb/

 

I rebooted a couple of minutes ago and you can see root is recreating it, and unlike the other standard Unraid folders, this one is not 777... any other ideas?

 

Posted

First, try: umount /mnt/ssd500gb

 

If the output is something like "not mounted", then run rm -r /mnt/ssd500gb (this will delete the entire folder, so make sure there's nothing in there).

 

Then (or before) check with df -h whether /mnt/ssd500gb is listed anywhere, and whether /mnt/disks/ssd500gb is too.

Posted
3 minutes ago, ezra said:

First, try: umount /mnt/ssd500gb

 

If the output is something like "not mounted", then run rm -r /mnt/ssd500gb (this will delete the entire folder, so make sure there's nothing in there).

 

Then (or before) check with df -h whether /mnt/ssd500gb is listed anywhere, and whether /mnt/disks/ssd500gb is too.

root@Tower:/# umount /mnt/ssd500gb
umount: /mnt/ssd500gb: not mounted.
root@Tower:/# df -h 
Filesystem      Size  Used Avail Use% Mounted on
rootfs          3.9G  796M  3.1G  21% /
tmpfs            32M  472K   32M   2% /run
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  196K  128M   1% /var/log
/dev/sda1        15G  470M   15G   4% /boot
/dev/loop1      7.3M  7.3M     0 100% /lib/firmware
tmpfs           1.0M     0  1.0M   0% /mnt/disks
/dev/md1        932G  462G  470G  50% /mnt/disk1
/dev/md2        932G  487G  446G  53% /mnt/disk2
/dev/md3        932G  732G  200G  79% /mnt/disk3
/dev/md4        1.9T  906G  957G  49% /mnt/disk4
shfs            4.6T  2.6T  2.1T  56% /mnt/user
/dev/sdd1       466G  113G  354G  25% /mnt/disks/ssd500gb
/dev/loop2       20G  6.0G   14G  31% /var/lib/docker
/dev/loop3      1.0G   17M  905M   2% /etc/libvirt
root@Tower:/# 

 

root@Tower:/mnt# ls -al
total 0
drwxr-xr-x  9 root   root  180 Apr 22 14:56 ./
drwxr-xr-x 20 root   root  440 Apr 22 14:58 ../
drwxrwxrwx  7 nobody users  76 Apr 22 15:06 disk1/
drwxrwxrwx  5 nobody users  47 Apr 22 15:06 disk2/
drwxrwxrwx  6 nobody users  64 Apr 22 15:06 disk3/
drwxrwxrwx  7 nobody users  75 Apr 22 15:06 disk4/
drwxrwxrwt  3 root   root   60 Apr 22 14:56 disks/
drwxr-xr-x  3 root   root   60 Apr 22 14:56 ssd500gb/
drwxrwxrwx  1 nobody users  76 Apr 22 15:06 user/
root@Tower:/mnt# rm -r ssd500gb/
root@Tower:/mnt# ls -al
total 0
drwxr-xr-x  8 root   root  160 Apr 22 15:08 ./
drwxr-xr-x 20 root   root  440 Apr 22 14:58 ../
drwxrwxrwx  7 nobody users  76 Apr 22 15:06 disk1/
drwxrwxrwx  5 nobody users  47 Apr 22 15:06 disk2/
drwxrwxrwx  6 nobody users  64 Apr 22 15:06 disk3/
drwxrwxrwx  7 nobody users  75 Apr 22 15:06 disk4/
drwxrwxrwt  3 root   root   60 Apr 22 14:56 disks/
drwxrwxrwx  1 nobody users  76 Apr 22 15:06 user/

 

At this stage it's technically not in /mnt anymore... rebooting server just now.

 

After reboot:

 

root@Tower:~# cd /mnt
root@Tower:/mnt# ls -al
total 0
drwxr-xr-x  9 root   root  180 Apr 22 15:11 ./
drwxr-xr-x 20 root   root  440 Apr 22 15:11 ../
drwxrwxrwx  7 nobody users  76 Apr 22 15:06 disk1/
drwxrwxrwx  5 nobody users  47 Apr 22 15:06 disk2/
drwxrwxrwx  6 nobody users  64 Apr 22 15:06 disk3/
drwxrwxrwx  7 nobody users  75 Apr 22 15:06 disk4/
drwxrwxrwt  3 root   root   60 Apr 22 15:11 disks/
drwxr-xr-x  3 root   root   60 Apr 22 15:11 ssd500gb/
drwxrwxrwx  1 nobody users  76 Apr 22 15:06 user/
root@Tower:/mnt# 
 

It's back... 

 

The only content of /mnt/ssd500gb is /mnt/ssd500gb/Backups, which is the name of one of the file systems I had created... so I had a zpool called ssd500gb.

Then I created three file systems (datasets, I think they're called?): /mnt/ssd500gb/Docker, /mnt/ssd500gb/VMs and /mnt/ssd500gb/Backups...

 

Why would Docker and VMs be gone but Backups still be there, and empty?...

 

Posted
8 minutes ago, xxxliqu1dxxx said:

The only content of /mnt/ssd500gb is /mnt/ssd500gb/Backups, which is the name of one of the file systems I had created... so I had a zpool called ssd500gb.

Then I created three file systems (datasets, I think they're called?): /mnt/ssd500gb/Docker, /mnt/ssd500gb/VMs and /mnt/ssd500gb/Backups...

 

Why would Docker and VMs be gone but Backups still be there, and empty?...

root@Tower:/mnt# zfs set mountpoint=none ssd500gb/Backups
cannot open 'ssd500gb/Backups': dataset does not exist
root@Tower:/mnt# 

 

root@Tower:~# zfs destroy ssd500gb/Backups
cannot open 'ssd500gb/Backups': dataset does not exist

 

I just cannot see what's recreating it if it's not ZFS... it's got to be somewhere... it's not in /etc/fstab... what else does the ZFS plugin do at boot that may not have been cleaned up when it was uninstalled?
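In case it helps, the next thing I will probably try is grepping the flash drive config for anything that still references the old pool name (just a guess at where a leftover script might live; /boot/config/go is the standard unRAID boot script):

grep -r "ssd500gb" /boot/config/ 2>/dev/null
cat /boot/config/go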

Posted
1 minute ago, ezra said:

It only imports the pool. Just delete the folder and reboot to see if it's still there. It should just be a leftover or an unknown typo.

I can remove it with rm -rf and it comes back after reboot, as shown previously. It gets "recreated" every reboot; even the timestamp changes. There's no "disk usage" with it, just two folders... /mnt/ssd500gb/Backups... that's it...
