ZFS plugin for unRAID


steini84

Recommended Posts

So I'm quite frustrated with this new beta; they're usually a lot more stable by now (the Unraid one, and maybe the ZFS RC - though I'm not sure). I'm still getting this issue and other randomness. As part of that testing, I'd like to downgrade to Unraid stable and keep the latest ZFS. The instructions for that are mentioned above, except they didn't seem to work for me. I think what's happening is that when you reboot, the plugin does its auto-update (or whatever) and puts the lower version back? My process was to downgrade the kernel, remove the older ZFS packages, copy the files above from Dropbox to where the old ones were, and reboot again. Thoughts?

Edited by Marshalleq
Link to comment

I'm really worried it's ZFS causing all this - I just don't understand why creating a VM would trigger it. I have to say I wish I hadn't accepted the new ZFS version and upgraded the pools - that will be challenging to sort out. I think upgrading the OS is enough of a change on its own; the filesystem should stay stable :/

 

@steini84 from the attached syslog, would you agree it's ZFS?  If so, I might need some help to log a ticket upstream i.e. around how you've packaged it.

 

Started a ticket here - I'm sure it's ZFS now.  I would appreciate if you could take a look and add any commentary - I'm concerned they'll complain about it being unraid and about it being a beta of unraid.  Thanks.

 

obi-wan-diagnostics-20201013-1953.zip

Edited by Marshalleq
Link to comment

First, thanks to @steini84 for creating this plugin; I wouldn't be on Unraid without it.  They should put you on the payroll.

 

Second, one of the things I had and liked in FreeNAS was a daily report on the SMART tests of my array, as well as a pool report. FreeNAS, I think, scheduled the SMART tests itself, and I am working on a script to schedule the long SMART test per ZFS pool rather than for all of the drives. However, I took edgarsuit's report.sh, which spits out an HTML email containing the SMART report and pool reports, and modified it to work on Unraid and handle SSDs a bit better. I thought I would post it here in case anyone would like the same. Let me know if you have any questions.

 

edgarsuit's version, my source material.

https://github.com/edgarsuit/FreeNAS-Report/blob/master/report.sh

 

report.sh
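
If anyone wants to kick off the long SMART tests per pool rather than for every drive, here is a rough sketch of the idea (not the finished script): it assumes a placeholder pool name of "tank" and sdX-style device paths, so by-id or NVMe paths would need an adjusted sed.

#!/bin/bash
# Sketch: start a long SMART self-test on every disk backing one pool.
# Assumes sdX-style device paths; "tank" is a placeholder pool name.
POOL="tank"

zpool status -P "$POOL" | awk '/\/dev\//{print $1}' | while read -r part; do
  disk=$(echo "$part" | sed 's/[0-9]*$//')   # strip the trailing partition number ZFS appends
  echo "Starting long SMART test on $disk"
  smartctl -t long "$disk"
done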

Edited by cadamwil
error in file
  • Like 1
Link to comment
On 10/14/2020 at 8:39 AM, Marshalleq said:

So I'm quite frustrated with this new beta; they're usually a lot more stable by now (the Unraid one, and maybe the ZFS RC - though I'm not sure). I'm still getting this issue and other randomness. As part of that testing, I'd like to downgrade to Unraid stable and keep the latest ZFS. The instructions for that are mentioned above, except they didn't seem to work for me. I think what's happening is that when you reboot, the plugin does its auto-update (or whatever) and puts the lower version back? My process was to downgrade the kernel, remove the older ZFS packages, copy the files above from Dropbox to where the old ones were, and reboot again. Thoughts?

So what happens is that the plugin first checks if you have a locally cached package to install in /boot/config/plugins/unRAID6-ZFS/packages/ and, if not, it checks on GitHub. If I understand correctly, you are running unRAID 6.8.3 stable and want to run ZFS 2.0.0-rc3?

 

This is what I did to achieve what you want.

Have the plugin installed and run these commands

rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*
wget -O /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz https://www.dropbox.com/s/wmzxjyzqs9b9fxz/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz?dl=0
wget -O /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5 https://www.dropbox.com/s/3onv1qur26yxb7n/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5?dl=0

 

Before you reboot, you can run this command to test that everything went as expected:

cat /etc/unraid-version && md5sum /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz && cat /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5

and you should get this exact output:

version="6.8.3"
8a6c48b7c3ff3e9a91ce400e9ff05ad6  /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz
8a6c48b7c3ff3e9a91ce400e9ff05ad6  /root/mount/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz

Then you can reboot and confirm that it worked as expected:

root@Tower:~# dmesg | grep ZFS && cat /etc/unraid-version
[   33.429241] ZFS: Loaded module v2.0.0-rc3, ZFS pool version 5000, ZFS filesystem version 5
version="6.8.3"

 

Link to comment
On 10/14/2020 at 8:55 AM, Marshalleq said:

I'm really worried it's ZFS causing all this - I just don't understand why creating a VM would trigger it. I have to say I wish I hadn't accepted the new ZFS version and upgraded the pools - that will be challenging to sort out. I think upgrading the OS is enough of a change on its own; the filesystem should stay stable :/

 

@steini84 from the attached syslog, would you agree it's ZFS?  If so, I might need some help to log a ticket upstream i.e. around how you've packaged it.

 

Started a ticket here - I'm sure it's ZFS now.  I would appreciate if you could take a look and add any commentary - I'm concerned they'll complain about it being unraid and about it being a beta of unraid.  Thanks.

 

obi-wan-diagnostics-20201013-1953.zip 232.52 kB · 0 downloads

First off, here you can see the build script:

https://github.com/Steini1984/unRAID6-ZFS/blob/master/build.sh

  • Thanks 1
Link to comment

I'm having issues deleting a pool and its datasets. After several reboots, I was able to destroy the datasets, but I cannot destroy or export the pool; I always receive "the pool is busy". Neither iostat nor lsof shows any activity. It seems something touches it at boot and then never again. Is there a way to unset the busy flag, or does it have a timeout, or...?
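
A few commands that might narrow down what is still holding the pool open before forcing the export - a rough sketch, with "mypool" and /mnt/mypool as placeholder names:

zfs list -r -o name,mountpoint,mounted mypool   # which datasets are still mounted?
fuser -vm /mnt/mypool                           # any process with open files under the mountpoint?
grep -l mypool /proc/*/mounts 2>/dev/null       # other mount namespaces (e.g. Docker) pinning it
zpool export -f mypool                          # force the export once nothing holds it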

Link to comment

Memory issue fixed: increased the record size on the dataset to 1M, set ashift=12, and disabled dedup and compression on volumes that don't need them (like multimedia vols).

-15GB of usage. 
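
For reference, the property changes above look roughly like this; the dataset names are placeholders, and ashift can only be chosen when the vdev is created:

zfs set recordsize=1M tank/multimedia     # larger records suit big sequential media files
zfs set compression=off tank/multimedia   # media is usually already compressed
zfs set dedup=off tank/multimedia         # dedup tables are the usual RAM hog
# ashift=12 has to be set at vdev creation time, e.g.:
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc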

 

@Marshalleq which error do you have? I'm running the latest beta with a gaming VM and ZFS RC2 (NAS + desktop PC all in one), and everything is working fine after my memory fixes.

 

In your logs I saw that you have disabled xattr?

  • Like 1
Link to comment
On 10/14/2020 at 10:55 AM, Marshalleq said:

I'm really worried it's ZFS causing all this - I just don't understand why creating a VM would trigger it. I have to say I wish I hadn't accepted the new ZFS version and upgraded the pools - that will be challenging to sort out. I think upgrading the OS is enough of a change on its own; the filesystem should stay stable :/

 

@steini84 from the attached syslog, would you agree it's ZFS?  If so, I might need some help to log a ticket upstream i.e. around how you've packaged it.

 

Started a ticket here - I'm sure it's ZFS now.  I would appreciate if you could take a look and add any commentary - I'm concerned they'll complain about it being unraid and about it being a beta of unraid.  Thanks.

 

obi-wan-diagnostics-20201013-1953.zip 232.52 kB · 1 download

Hi,

 

I'm new to Unraid and just started with ZFS for Docker/VMs (SSDs) and ZFS for data like pictures.
I had roughly the same issues: after closing a VM the kernel throws a panic, and afterwards a restart of the VM was not possible (libvirt gets stuck).
Yesterday I destroyed the datasets for my Ubuntu and macOS VMs and created new ones with xattr=sa (zfs set xattr=sa <dataset>). Afterwards I reinstalled both VMs, and until now everything is working fine. I guess there are maybe some permission issues regarding the lock files and attributes; with xattr=sa those extended attributes are stored in the inodes rather than as separate hidden files. Maybe I'm wrong, but nevertheless, even after several reboots of Unraid the VM issue is gone.
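
For reference, the property can also be set when the dataset is created - a small sketch with placeholder dataset names:

zfs create -o xattr=sa tank/vms/ubuntu         # new dataset stores extended attributes in the inodes
zfs set xattr=sa tank/vms/macos                # existing dataset: only affects newly written xattrs
zfs get xattr tank/vms/ubuntu tank/vms/macos   # verify the property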

 

best regards

Bastian

Edited by BasWeg
  • Like 1
Link to comment
5 hours ago, steini84 said:

built zfs-2.0.0-rc4 for unRAID 6.8.3 & 6.9.0-beta30

Hi,
I've tried the update by removing the plugin (via the GUI) and installing it again. It's mentioned that RC4 is included, but even after a reboot RC3 is still active.

 

Oct 21 19:17:40 UnraidServer root: Remounting modules
Oct 21 19:17:40 UnraidServer root: 
Oct 21 19:17:40 UnraidServer root: Verifying package zfs-2.0.0-rc4-unRAID-6.9.0-beta30.x86_64.tgz.
Oct 21 19:17:41 UnraidServer root: Installing package zfs-2.0.0-rc4-unRAID-6.9.0-beta30.x86_64.tgz:
Oct 21 19:17:41 UnraidServer root: PACKAGE DESCRIPTION:
Oct 21 19:17:42 UnraidServer root: Executing install script for zfs-2.0.0-rc4-unRAID-6.9.0-beta30.x86_64.tgz.
Oct 21 19:17:42 UnraidServer root: Package zfs-2.0.0-rc4-unRAID-6.9.0-beta30.x86_64.tgz installed.
Oct 21 19:17:42 UnraidServer root: 
Oct 21 19:17:42 UnraidServer root: Deleting old files..
Oct 21 19:17:42 UnraidServer root: 
Oct 21 19:17:42 UnraidServer root: Loading ZFS modules and importing pools (this could take some time)
Oct 21 19:17:42 UnraidServer root: 
Oct 21 19:17:43 UnraidServer ntpd[1966]: receive: Unexpected origin timestamp 0xe33aebb7.120d9f40 does not match aorg 0000000000.00000000 from [email protected] xmt 0xe33aebb7.816ccfb4
Oct 21 19:17:43 UnraidServer kernel: znvpair: module license 'CDDL' taints kernel.
Oct 21 19:17:43 UnraidServer kernel: Disabling lock debugging due to kernel taint
Oct 21 19:17:45 UnraidServer kernel: ZFS: Loaded module v2.0.0-rc3, ZFS pool version 5000, ZFS filesystem version 5
Oct 21 19:17:51 UnraidServer root: plugin: unRAID6-ZFS.plg installed
Oct 21 19:17:51 UnraidServer root: plugin: installing: /boot/config/plugins/unRAID6-ZnapZend.plg
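
For reference, a few commands can show which build is actually loaded versus which package the plugin has cached on the flash drive - a minimal sketch using the plugin paths mentioned earlier in the thread:

cat /sys/module/zfs/version                     # version of the currently loaded module
dmesg | grep "ZFS: Loaded"                      # what the kernel loaded at boot
ls /boot/config/plugins/unRAID6-ZFS/packages/   # cached package the plugin installs on boot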

 

Any idea?

 

best regards

Bastian

  • Like 1
Link to comment

Since you're able to create and delete vdevs in ZFS... is it possible to extend a vdev by deleting it and then creating a new vdev with the same drives as before, plus the new drive, while making sure to use the same drives for parity? Maybe the parity needs to be rebuilt, but other than that this could be turned into a script that just expects the new /dev name as an argument to add a drive to an existing vdev in ZFS on Unraid, which would be awesome!
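
For context, a rough sketch of what the current zpool tooling does and does not allow; pool and device names are placeholders:

zpool attach tank /dev/sdb /dev/sde                # attach another disk to a mirror vdev (adds redundancy, not capacity)
zpool add tank raidz1 /dev/sdf /dev/sdg /dev/sdh   # add a whole new raidz vdev alongside the existing one (adds capacity)
# A raidz vdev cannot be removed from a pool or widened in place, so "delete and recreate
# with one more disk" really means rebuilding that data from a backup or a zfs send/receive copy.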

Link to comment

Just FYI, I think I was having a similar issue to Marshalleq. On RC2, when I stopped the unRAID array (which stopped my VM), restarted the array, and attempted to restart my VM, it would hang the VM management page (white at the bottom, no uptime shown in Unraid), and then if you attempted to reboot, it would not reboot successfully. You would have to reset the machine to reboot. However, with RC4, everything seems to be working correctly.

  • Like 2
Link to comment
