ZFS plugin for unRAID


steini84

826 posts in this topic


1 hour ago, devros said:

Will you have an update for 6.3.4 soon?  I had just upgraded when I stumbled onto this plugin.

Thanks for the heads up. I didn't realize there was a new version, but it's up now.

 

I need to update my notifications since I don't get any for version updates :S

 

Also if someone wants to help me with this irritating error:

 

"Fatal error: escapeshellarg(): Argument exceeds the allowed length of 4096 bytes in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 342"

My bash file always becomes too big, so I have to remove a version before I can add a newer one.

 

I have packages for 6.1.2–6.3.4 but have to remove support for older versions because of my lazy bash file.

 

 

 


I have a fresh install of Unraid 6.3.5 with only the following plugins:

  1. Community plugin
  2. ZFS for Unraid plugin
  3. Unassigned Devices plugin

I'm attempting to migrate from FreeNAS 10... but I'm unable to get any ZFS drives to actually mount when clicking Mount under Unassigned Devices. I just need them to mount long enough to move the data to some new drives.


So I wasn't able to use the Unassigned Devices plugin to mount the pool, but I was able to SSH in and run:

zpool import -f Poolname

Once I did that it mounted no problem... now I'm just waiting for my new drives to preclear, and then I'll transfer everything over.

36 minutes ago, swingline said:

So I wasn't able to use the Unassigned devices plugin to mount the pool

 

Glad you figured it out but just for future readers that's expected since the UD plugin doesn't support ZFS pools.

12 hours ago, johnnie.black said:

 

Glad you figured it out but just for future readers that's expected since the UD plugin doesn't support ZFS pools.

I just assumed it did, as it listed the FS type on the list of devices. That's what I get for assuming.

 

12 hours ago, steini84 said:

Yeah you have to import with zpool import -a, but the plugin does that on boot so every pool is auto mounted

 

 

In my case I didn't remove them from my FreeNAS pool, so I had to force the import by adding -f.
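For future readers, the import flow described above can be sketched as follows (the pool name is a placeholder):

```shell
# Show importable pools without actually importing anything
zpool import

# Force-import a pool that was last used by another system
# (e.g. FreeNAS) and was never cleanly exported there
zpool import -f Poolname

# Export cleanly when done, so the next import won't need -f
zpool export Poolname
```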


Thank you very much for this plugin. It adds another dimension to the utility of unRAID.

 

I was wondering if it is possible to support at rest encryption?

 

I see that openzfs 0.7.2 is used which in theory should support at rest encryption, however I am unable to pass the following options to zpool create:

-o encryption=on -o keyformat=passphrase -o keylocation=prompt

Thanks again

Are there any compile flags I need to add for this feature? Are you sure it's already in 0.7.2?


Sent from my iPhone using Tapatalk
39 minutes ago, steini84 said:


Are there any compile flags I need to add for this feature? Are you sure it's already in 0.7.2?


Sent from my iPhone using Tapatalk

 

It was merged into master on 15/8/17. https://github.com/zfsonlinux/zfs/pull/5769

 

I will have a look into compile flags, thanks.

 

Edit:

 

I had a look here https://blog.heckel.xyz/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/#Compile-and-install (N.B. this article precedes the merge into master)

 

When I have time I'll try a build with the following dependencies and let you know:

libtool zlib1g-dev attr uuid-dev libblkid-dev libattr1-dev autoconf

 

And try and load these modules:

unicode/zunicode.ko
spl/spl.ko
nvpair/znvpair.ko
zcommon/zcommon.ko
icp/icp.ko
zfs/zfs.ko

 

Edited by matryska
On 06/11/2017 at 8:26 AM, steini84 said:


Are there any compile flags I need to add for this feature? Are you sure it's already in 0.7.2?


Sent from my iPhone using Tapatalk

 

My apologies; I have compiled 0.7.3 and encryption was not present.

 

However, compiling master has enabled encryption. I would strongly recommend doing this in a container or VM, as installing openssl had temporary system-wide adverse effects (rsync and ssh stopped working due to a different OpenSSL version being installed). These adverse effects do not persist after a reboot, so it didn't matter for me on a pre-production machine.

 

This required the following additional dependencies:

openssl

util-linux (for libblkid)

attr

libtirpc

 

Note the libtirpc package is also required to be installed with installpkg alongside spl/zfs.

 

3 disk raidz has been verified to work with the following options:

-o encryption=on -o keyformat=passphrase -o keylocation=prompt

 

Check with:

zfs get -r encryptionroot

 

Many thanks
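For anyone following along, the verified options can be combined into full commands along these lines (pool, dataset, and device names are hypothetical; note that with `zpool create` filesystem properties such as `encryption` are passed with `-O`, while the `-o` form shown above is what `zfs create` takes):

```shell
# Create an encrypted 3-disk raidz pool (device names are placeholders)
zpool create -O encryption=on -O keyformat=passphrase -O keylocation=prompt \
    tank raidz /dev/sdb /dev/sdc /dev/sdd

# Alternatively, encrypt a single dataset on an existing pool
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt \
    tank/secure

# Verify which datasets are encrypted and where their encryption roots are
zfs get -r encryptionroot tank
```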

Edited by matryska

I've been eager to try the unRAID 6.4 prerelease series but up until now hadn't had any time to investigate getting ZFS working with it. Finally got a test machine and spent more time than I'm willing to admit getting it going.

 

Using the script steini84 posted earlier in the thread, I figured out which packages from the Slackware 14.2/Current 64-bit branch needed to be installed. A few notes: the cxxlib library was removed from Slackware and needed to be replaced with a few other packages, and glibc was too old in Slackware 14.2 so it had to be pulled from the Current branch. With the new bzfirmware/bzmodules squashfs implementation in unRAID 6.4, I 'borrowed' portions of CHBMB's script (posted here: https://unraid.net/topic/61576-640-rc13-error-compiling-custom-kernel/) to prepare the system prior to compiling, and then to build the new squashfs files.

 

I've uploaded updated build.sh, bzfirmware/bzmodules and spl/zfs package files here: https://github.com/rinseaid/unraid-zfs

 

I also modified steini84's plugin file to install the new spl/zfs packages (based on the latest 0.7.5), but bzfirmware and bzmodules need to be installed to /boot manually. I guess the plugin script could back up the originals and copy the files over, but it feels a little messy to just overwrite whatever is there, and I wonder if there's a better way to do it that I don't know of.

 

What I've put together, while working for me, is super hacky and I don't recommend anyone use it. I'm just not well versed enough to do this the 'right' way or I would.

 

steini84 - hope this saves you at least a little time for when 6.4 stable is released!

 

 

Edited by rinseaid

Updated for 6.4 with big help from rinseaid - thanks a lot!

 

*** In unRAID 6.4 kernel modules are kept in /boot/bzmodules and mounted under /lib/modules/ - this plugin mounts a bzmodule file that includes the zfs kernel modules and one has to take that implementation into consideration if using another plugin that uses a customized bzmodules. I experimented with mounting the modules only when loading them (and unmounting right after) and that appeared to work fine. But to be honest I do not know if it's safe and decided to keep it mounted. If someone can confirm that it's safe I can change the plugin.

Now I just copy the modules and add the zfs/spl modules on top.

Edited by steini84

Hi, I need some help. I'm trying to limit the ARC to 2 GB; this is my go file:

#!/bin/bash
#Zfs ARC size
echo 2147483648 >> /sys/module/zfs/parameters/zfs_arc_max
# Start the Management Utility
/usr/local/sbin/emhttp &

I edited the go file, rebooted, then checked cat /proc/spl/kstat/zfs/arcstats.

It seems the size has not changed; it is still 4 GB max.

root@Tower:~# cat /proc/spl/kstat/zfs/arcstats

13 1 0x01 96 26112 40241515841 4620213926208
name                            type data
hits                            4    702353
misses                          4    4420
demand_data_hits                4    94
demand_data_misses              4    0
demand_metadata_hits            4    700921
demand_metadata_misses          4    3616
prefetch_data_hits              4    0
prefetch_data_misses            4    0
prefetch_metadata_hits          4    1338
prefetch_metadata_misses        4    804
mru_hits                        4    30635
mru_ghost_hits                  4    406
mfu_hits                        4    670574
mfu_ghost_hits                  4    571
deleted                         4    258596
mutex_miss                      4    0
access_skip                     4    0
evict_skip                      4    191
evict_not_enough                4    50
evict_l2_cached                 4    0
evict_l2_eligible               4    34050764288
evict_l2_ineligible             4    105383936
evict_l2_skip                   4    0
hash_elements                   4    32762
hash_elements_max               4    33061
hash_collisions                 4    7788
hash_chains                     4    520
hash_chain_max                  4    2
p                               4    3998406656
c                               4    4008861696
c_min                           4    250553856
c_max                           4    4008861696
size                            4    3988219216
compressed_size                 4    3836652032
uncompressed_size               4    4142148608
overhead_size                   4    139904512
hdr_size                        4    10920384
data_size                       4    3973056000
metadata_size                   4    3500544
dbuf_size                       4    410704
dnode_size                      4    247104
bonus_size                      4    84480
anon_size                       4    51695616
anon_evictable_data             4    0
anon_evictable_metadata         4    0
mru_size                        4    3924730368
mru_evictable_data              4    3721777664
mru_evictable_metadata          4    1073664
mru_ghost_size                  4    121528832
mru_ghost_evictable_data        4    93454336
mru_ghost_evictable_metadata    4    28074496
mfu_size                        4    130560
mfu_evictable_data              4    0
mfu_evictable_metadata          4    0
mfu_ghost_size                  4    33736704
mfu_ghost_evictable_data        4    3276800
mfu_ghost_evictable_metadata    4    30459904
l2_hits                         4    0
l2_misses                       4    0
l2_feeds                        4    0
l2_rw_clash                     4    0
l2_read_bytes                   4    0
l2_write_bytes                  4    0
l2_writes_sent                  4    0
l2_writes_done                  4    0
l2_writes_error                 4    0
l2_writes_lock_retry            4    0
l2_evict_lock_retry             4    0
l2_evict_reading                4    0
l2_evict_l1cached               4    0
l2_free_on_write                4    0
l2_abort_lowmem                 4    0
l2_cksum_bad                    4    0
l2_io_error                     4    0
l2_size                         4    0
l2_asize                        4    0
l2_hdr_size                     4    0
memory_throttle_count           4    0
memory_direct_count             4    0
memory_indirect_count           4    0
memory_all_bytes                4    8017723392
memory_free_bytes               4    3323105280
memory_available_bytes          3    3197829120
arc_no_grow                     4    0
arc_tempreserve                 4    0
arc_loaned_bytes                4    0
arc_prune                       4    0
arc_meta_used                   4    15163216
arc_meta_limit                  4    3006646272
arc_dnode_limit                 4    300664627
arc_meta_max                    4    17378936
arc_meta_min                    4    16777216
sync_wait_for_async             4    0
demand_hit_predictive_prefetch  4    0
arc_need_free                   4    0
arc_sys_free                    4    125276928

Please help me limit the ARC size.

 

PS: When I run "echo 2147483648 >> /sys/module/zfs/parameters/zfs_arc_max" in a terminal, it works: the ARC max size visibly decreases along with the occupied RAM. The go file does not work for me. Can I make a script with "echo 2147483648 >> /sys/module/zfs/parameters/zfs_arc_max" and run it when the array starts?
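To answer the last question: yes, a script run at array start should work, likely because by that point the zfs module has been loaded by the plugin, whereas the go file runs before it exists. A minimal sketch using the value from the post; the polling loop and function name are my additions:

```shell
#!/bin/bash
# Cap the ZFS ARC at 2 GiB once the zfs module is loaded.
# The sysfs path and value come from the post above; waiting for the
# module to appear before writing is an assumption about boot order.

ARC_MAX=$((2 * 1024 * 1024 * 1024))   # 2147483648 bytes = 2 GiB

set_arc_max() {
    local param=/sys/module/zfs/parameters/zfs_arc_max
    # Wait up to 30 s for the zfs module parameters to appear,
    # then write the limit and stop.
    for _ in $(seq 1 30); do
        if [ -w "$param" ]; then
            echo "$ARC_MAX" > "$param"
            return 0
        fi
        sleep 1
    done
    return 1
}

# Uncomment to run on array start (e.g. from a user script):
# set_arc_max || echo "zfs_arc_max not set" >&2
```

The same echo fails in the go file but works later in a terminal because /sys/module/zfs/parameters/ only exists once the module is loaded.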

Edited by vanes