ZFS plugin for unRAID


steini84

Recommended Posts

On 7/29/2021 at 10:41 AM, glennv said:

Cool. It's the test of all tests. If ZFS passes this with flying colours, you will be a new ZFS fanboy, I would say ;-)

I am a total ZFS fanboy and proud of it ;-);-)  

Keep us posted

 

@jortan After a week, this dodgy drive had only got up to 18% resilvered, at which point I got fed up with the process.  Dropped a known good drive into the server and pulled the dodgy one.  This is what it said:

zpool status
  pool: MFS2
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Jul 26 18:28:07 2021
        824G scanned at 243M/s, 164G issued at 48.5M/s, 869G total
        166G resilvered, 18.91% done, 04:08:00 to go
config:

        NAME                       STATE     READ WRITE CKSUM
        MFS2                       DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            replacing-0            DEGRADED   387 14.4K     0
              3739555303482842933  UNAVAIL      0     0     0  was /dev/sdf1/old
              5572663328396434018  UNAVAIL      0     0     0  was /dev/disk/by-id/ata-WDC_WD30EZRS-00J99B0_WD-WCAWZ1999111-part1
              sdf                  ONLINE       0     0     0  (resilvering)
            sdg                    ONLINE       0     0     0

errors: No known data errors

It passed the test; 4 hours later we had this:

 zpool status
  pool: MFS2
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: resilvered 832G in 04:09:34 with 0 errors on Tue Aug  3 07:13:24 2021
config:

        NAME        STATE     READ WRITE CKSUM
        MFS2        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors
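For anyone following along, the swap described above can also be kicked off explicitly rather than waiting for the pool to pick up the new disk. A minimal sketch, using the pool and device names from the status output in this post (MFS2, /dev/sdf); adjust for your own system:

```shell
# Tell ZFS to rebuild mirror-0 onto the new disk now sitting at /dev/sdf.
# With a single device argument, "zpool replace" replaces the device with
# itself - the right form when a fresh disk occupies the old slot.
zpool replace MFS2 /dev/sdf

# Watch resilver progress until "resilvered ... with 0 errors" appears.
zpool status MFS2

# Once the pool is healthy again, clear the old READ/WRITE error counters.
zpool clear MFS2
```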

 

Edited by tr0910
Link to comment

Done and voted for ZFS.  I hope Unraid comes to the party.  I'm getting a little tired of the consumer end and have been stalking TrueNAS Scale, because I miss native functionality such as backups, domains, ACLs and the like.  Not trying to be negative about a great product, because they do listen to their customers well, but let's face it: the target market of unRAID probably doesn't fully appreciate what it's missing.  Sometimes a bit of tech experience does help.  ZFS is one thing, but there are a few basics that are making me look elsewhere, quite reluctantly I might add.  It's a good thing for prod and dev boxes.  


I was surprised to see this poll; really, I thought this was a foregone conclusion. I guess they're still on the BTRFS is awesome bandwagon, but while that may be true now (not convinced), it wasn't before, and we haven't forgotten! :D

 

ZFS for the win! 

Link to comment
3 minutes ago, Marshalleq said:

I guess they're still on the BTRFS is awesome bandwagon

 

They're saying, in the nicest possible way, that BTRFS is not stable in RAID5 mode:

 

>>btrfs today is most reliable when configured in RAID 1 or RAID 10

 

Seems like all these features will make it into unRAID eventually; they are just polling in order to set their priorities.

Link to comment
  • 3 weeks later...

The plugin has been updated with the "Plugin Update Helper" from @ich777

 

It basically sanity-checks and pre-downloads the files ZFS needs when you are updating unRAID. If there are any problems, the plugin lets you know so you don't reboot and lose ZFS support.

 

Together with the automatic builds from @ich777, I think this makes ZFS on unRAID as good as it gets until we get native support.

 

Heads up for anyone using ZFS with Docker. Starting with unRAID 6.10.0 we are shipping ZFS 2.1, which has an issue for some users when storing docker.img on a ZFS pool. This does not affect appdata, only the docker.img file. I recommend storing docker.img on your cache disk if you run into any problems, as it does not contain any critical information and can easily be recreated. 

 

 

Screenshot 2021-08-23 at 10.43.45.png

Screenshot 2021-08-23 at 10.16.18.png

Edited by steini84
Removed extra screenshot
Link to comment
1 hour ago, Arragon said:

Can't find "Plugin Update Helper"  in the apps tab.  Is it not available in 6.9.2?  Is it sufficient to change the location or do I have to move the docker.img file first?

The Plugin Update Helper does not add itself to the Apps tab. When you install or upgrade the ZFS plugin, the update helper is bundled with it and watches for upgrades to unRAID. 

 

Good question. I personally just changed the location and rebuilt my docker.img, but you could also disable Docker, move the file, change the location and then enable Docker again. Just be sure to have the Community Apps plugin installed so you can easily go to Apps > Previous Apps and re-install all your Dockers while keeping your settings:
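The disable/move/re-enable approach can be sketched on the command line as well. This is a rough outline, not an official procedure; the source path /mnt/zfspool/docker.img is a hypothetical example and /mnt/cache is unRAID's usual cache mount:

```shell
# Stop the Docker service first (Settings > Docker in the web UI does the
# same thing).
/etc/rc.d/rc.docker stop

# Move the image off the ZFS pool onto the cache disk. Both paths here are
# placeholders - substitute your own.
mv /mnt/zfspool/docker.img /mnt/cache/docker.img

# Point unRAID at the new image location via Settings > Docker, then start
# the service again.
/etc/rc.d/rc.docker start
```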


Link to comment
On 8/23/2021 at 10:47 PM, steini84 said:

Heads up for anyone using ZFS with Docker. Starting with unRAID 6.10.0 we are shipping ZFS 2.1, which has a known issue with storing docker.img on a ZFS pool. This does not affect appdata, only the docker.img file. I recommend storing docker.img on your cache disk as it does not contain any critical information and can easily be recreated. 

Can I just say that I don't have this issue, and I do store docker.img on a ZFS drive.  I have no idea why; I did have the issue for a while, but one of the updates fixed it.  I don't even run an unRAID cache drive or array (I'm entirely ZFS), so I couldn't anyway.  So it may not apply to everyone.

Link to comment
9 hours ago, Marshalleq said:

Can I just say that I don't have this issue, and I do store docker.img on a ZFS drive.  I have no idea why; I did have the issue for a while, but one of the updates fixed it.  I don't even run an unRAID cache drive or array (I'm entirely ZFS), so I couldn't anyway.  So it may not apply to everyone.

For me it is the same. Only zfs. I hope I can leave it as it is. 

Link to comment
1 hour ago, BasWeg said:

For me it is the same. Only zfs. I hope I can leave it as it is. 

The thing is that we are not making any changes, just shipping ZFS 2.1 by default. We have shipped 2.0 by default until now because of this deadlock problem, and 2.1 if you enabled "unstable builds" (see the first post).

 

ZFS 2.0 only supports kernels 3.10 through 5.10, but unRAID 6.10 will ship with (at least) kernel 5.13.8, therefore we have to upgrade to ZFS 2.1.

 

So if you are running ZFS 2.1 now on 6.9.2 or 6.10.0-rc1, there won't be any changes:

 

You can check which version is running in two ways:

root@Tower:~# dmesg | grep -i zfs
[   69.055201] ZFS: Loaded module v2.1.0-1, ZFS pool version 5000, ZFS filesystem version 5
[   70.692637] WARNING: ignoring tunable zfs_arc_max (using 2067378176 instead)

root@Tower:~# cat /sys/module/zfs/version
2.1.0-1

 

Link to comment

Now that you mention it, I think I recall someone on ZFS 2.0 who was also having this problem, and that's what was confusing me.   But 'oh crap', because if this is still a problem, it means I can't run native ZFS any more, which is a major problem preventing me from upgrading.  I don't even have any disks for unRAID arrays other than a USB stick being used for a dummy array, and I definitely don't want to run Docker from that.  If I recall, even running Docker in a folder rather than an image presented the same issue.

 

So here goes a revival of the bug thread I guess.

 

The solution might be to shift to TrueNAS Scale, and as awesome as that is, there are a few challenges to overcome with it. For example, I'm on the fence about shifting to Kubernetes; that's the major one, to be honest.

Edited by Marshalleq
Link to comment

Never had any issues with ZFS on unRAID since day one (while before that, BTRFS was all pain and misery), and I have also been running 2.1 for a while now. All rock solid, with multiple different pools (all SSD or NVMe) running all my VMs and Docker. But I run Docker in folders on ZFS; I have never had it in an img on ZFS. I do have the libvirt image on ZFS, but that holds nothing compared to a docker.img.
I guess you have just been unlucky, as I remember your thread with all the issues you had before.

I even recently moved all my Docker folders back and forth between pools while swapping and rearranging SSDs, while being amazed at ZFS's possibilities here (a combination of snapshots, send/receive, and data eviction from disks/vdevs when adding/removing disks/vdevs). All smooth sailing and not a single issue.
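The pool-to-pool moves described above are typically done with snapshot send/receive. A minimal sketch, with hypothetical pool/dataset names (oldpool/docker, newpool) that are not from this thread:

```shell
# Take a snapshot of the dataset you want to migrate.
zfs snapshot oldpool/docker@migrate

# Stream the snapshot into the destination pool; add -R to the send side
# if you also need child datasets and their properties carried across.
zfs send oldpool/docker@migrate | zfs receive newpool/docker

# Only after verifying the copy on newpool, remove the original.
zfs destroy -r oldpool/docker
```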

I became such a zfs fanboy. Looooove it

Link to comment

Thanks, looks like it might be OK then; I will just have to try it.  That thread, if I recall, was multiple people with issues, not just me.  And mine 'went away', for lack of a better explanation, but one person's didn't, so it was quite inconclusive really.  They recently posted back on the ZFS thread that it's still an issue for them too.

 

Anyway, thanks for info - I'll hold my breath until I upgrade! :D

Link to comment
23 hours ago, steini84 said:

The thing is that we are not making any changes, just shipping ZFS 2.1 by default. We have shipped 2.0 by default until now because of this deadlock problem, and 2.1 if you enabled "unstable builds" (see the first post).

 

ZFS 2.0 only supports kernels 3.10 through 5.10, but unRAID 6.10 will ship with (at least) kernel 5.13.8, therefore we have to upgrade to ZFS 2.1.

 

So if you are running ZFS 2.1 now on 6.9.2 or 6.10.0-rc1, there won't be any changes:

 

You can check which version is running in two ways:

root@Tower:~# dmesg | grep -i zfs
[   69.055201] ZFS: Loaded module v2.1.0-1, ZFS pool version 5000, ZFS filesystem version 5
[   70.692637] WARNING: ignoring tunable zfs_arc_max (using 2067378176 instead)

root@Tower:~# cat /sys/module/zfs/version
2.1.0-1

 

 

I've following configuration:

root@UnraidServer:~# cat /sys/module/zfs/version
2.0.0-1
root@UnraidServer:~# dmesg | grep -i zfs
[   56.483956] ZFS: Loaded module v2.0.0-1, ZFS pool version 5000, ZFS filesystem version 5
[1073920.595334] Modules linked in: iscsi_target_mod target_core_user target_core_pscsi target_core_file target_core_iblock dummy xt_mark xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd lockd grace sunrpc md_mod zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) it87 hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding amd64_edac_mod edac_mce_amd kvm_amd kvm wmi_bmof crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel r8125(O) aesni_intel crypto_simd mpt3sas cryptd r8169 glue_helper raid_class i2c_piix4 ahci nvme realtek nvme_core ftdi_sio scsi_transport_sas rapl i2c_core wmi k10temp ccp libahci usbserial acpi_cpufreq button

 

So, what should I do? Change to unstable and see if it still works? :)

Link to comment
47 minutes ago, BasWeg said:

So, what should I do? Change to unstable and see if it still works? :)

I would personally recommend switching to 6.10.0-rc1; the Plugin Update Helper should do everything for you. Just be sure to wait a little bit after the unRAID update is finished, and you should be notified like @steini84 mentioned here:

 

The reason I recommend this is that 6.10.0-rc1 is on the stable branch, and it should work just fine anyway.

Link to comment

Following Space Invader One's new video, I set everything up, even the last part with sharing the zpool. Everything went through with no issues, but I can't seem to access it via the network. It shows up, but that is it. I can get to the root ZFS folder, but once in there I get locked out or see an empty folder. It now asks for a password, but nothing from the unRAID password to the Windows password works, even though every SMB share is set to public.

 

NAME                    USED  AVAIL     REFER  MOUNTPOINT
zfs                    4.96M   179T      354K  /zfs
zfs/movies              236K   179T      236K  /zfs/movies
zfs/music               236K   179T      236K  /zfs/music
zfs/tv                  236K   179T      236K  /zfs/tv

 

I will add this is a new build using the latest RC and no data is on the server yet. I can start over if needed.

Link to comment
15 hours ago, anylettuce said:

Following Space Invader One's new video, I set everything up, even the last part with sharing the zpool. Everything went through with no issues, but I can't seem to access it via the network.

 

I haven't watched Space Invader One's video; how was the SMB share created?

 

I shared a ZFS dataset by adding the share to /boot/config/smb-extra.conf:
 

[sharename]
path = /zfs/dataset
comment = zfs dataset
browseable = yes
public = yes
writeable = yes
vfs objects =

 

If I remember correctly, you can then restart samba with:

 

/etc/rc.d/rc.samba restart
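If the share still misbehaves after the restart, you can check what Samba is actually exporting from the server side. A quick sketch using smbclient, assuming it is available on your box; "sharename" is a placeholder for the share name from smb-extra.conf:

```shell
# List the shares the local Samba server is exporting, anonymously (-N).
smbclient -L localhost -N

# Try an anonymous connection to the share itself and list its contents.
smbclient -N //localhost/sharename -c 'ls'
```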

 

Link to comment
