ZFS plugin for unRAID


steini84


On 7/15/2020 at 11:41 PM, hackersarchangel said:

That's interesting; root should always have access, since root = God. But you can't log in to the share as root; that's a big security no-no. *wags finger*

The changes I described are done to the folder via SSH, so I'm not sure I'm following what you did in that regard. As for the config on the share itself via SMB, that looks fine.

Also, it's late for me, so I may not be grokking this well. I'll check again tomorrow and see if I read your post differently. LOL

I think I misunderstood. chown doesn't actually take away access from root, does it?

Link to comment
25 minutes ago, etsjessey said:

Also, does anyone know what I would need to add to SMB Extras to get Time Machine support on a share? I couldn't really find much on that either. Everything I found had to do with shares through the Shares tab. =(

Never mind, I was able to find an awesome video by Spaceinvader One on doing this. The resulting SMB Extras config is below.

[name_of_share_here]
      path = /mnt/disks/yourdiskname/foldername
      ea support = Yes
      vfs objects = catia fruit streams_xattr
      valid users = users_who_have_access_here
      write list = users_who_have_access_write_here
      # size limit for the Time Machine volume; use whatever you want here
      fruit:time machine max size = 500G
      fruit:encoding = native
      fruit:locking = netatalk
      fruit:metadata = netatalk
      fruit:resource = file
      fruit:time machine = yes
      fruit:advertise_fullsync = true
      durable handles = yes
      kernel oplocks = no
      kernel share modes = no
      posix locking = no
      inherit acls = yes
# leave one blank line before the Unassigned Devices section
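Not part of the original post, but a quick way to sanity-check a hand-edited share definition before reloading Samba (stock unRAID merges SMB Extras into /etc/samba/smb.conf):

# validate the syntax of the merged Samba config
testparm -s /etc/samba/smb.conf

# ask the running Samba daemons to pick up the change
smbcontrol all reload-config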

Edited by etsjessey
Link to comment

*Sigh* I have another weird share question that doesn't make logical sense to me... lol

 

The same share I mentioned before will let me copy a file to and from the share, but it won't let me open a folder to view its contents.
I checked the permissions: the same user that has read/write access to the share also has read/write access to the folders, applied recursively. I must be missing a line in the config or something that will allow that... any ideas?

 

Thanks in advance!

Link to comment
9 minutes ago, etsjessey said:

*Sigh* I have another weird share question that doesn't make logical sense to me... lol

 

The same share I mentioned before will let me copy a file to and from the share, but it won't let me open a folder to view its contents.
I checked the permissions: the same user that has read/write access to the share also has read/write access to the folders, applied recursively. I must be missing a line in the config or something that will allow that... any ideas?

 

Thanks in advance!

Do you have eXecute permissions? That's what allows you to traverse it.
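If that's what's missing, a minimal sketch of a fix (the path is the placeholder from the share example above; adjust to your share):

# directories need the execute bit before they can be traversed;
# -type d limits the change to directories and leaves files alone
find /mnt/disks/yourdiskname/foldername -type d -exec chmod u+x,g+x {} +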

Link to comment
  • 2 weeks later...

Hello,

I'm still at the beginning, but I'm quite interested in ZFS since I only want to use SSDs.

Sadly, I already created a ZFS pool with "zpool create ABC raidz /dev/sdb /dev/sdc", and now when I try to type "zpool create -m /mnt/SSD SSD mirror sdx sdy" the message:

 

/dev/sdb is in use and contains a unknown filesystem.
/dev/sdc is in use and contains a unknown filesystem.

is shown.

I'm afraid that the mount point may be different if I don't use the zpool create -m command.

 

I tried wiping the disks (writing zeros over the first 100 MB), but it didn't help.

Link to comment
On 8/9/2020 at 9:22 PM, teiger said:

Hello,

I'm still at the beginning, but I'm quite interested in ZFS since I only want to use SSDs.

Sadly, I already created a ZFS pool with "zpool create ABC raidz /dev/sdb /dev/sdc", and now when I try to type "zpool create -m /mnt/SSD SSD mirror sdx sdy" the message:

 


/dev/sdb is in use and contains a unknown filesystem.
/dev/sdc is in use and contains a unknown filesystem.

is shown.

I'm afraid that the mount point may be different if I don't use the zpool create -m command.

 

I tried wiping the disks (writing zeros over the first 100 MB), but it didn't help.

wipefs -a /dev/sdx
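Worth noting: ZFS writes vdev labels at both the start and the end of each device, which is why zeroing only the first 100 MB didn't help. If wipefs doesn't clear them, ZFS's own label tool is an alternative (the device name here is just the one from the post above):

# clear the ZFS labels from both ends of the device (destructive!)
zpool labelclear -f /dev/sdb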

 

Link to comment

Anyone having an issue with ZFS on the latest beta, 6.9.0-beta25? I tried to import my pool and got the error below due to an unsupported feature.

 

root@Omega:/# zpool import omega
This pool uses the following feature(s) not supported by this system:
        com.delphix:log_spacemap (Log metaslab changes on a single spacemap and flush them periodically.)
All unsupported features are only required for writing to the pool.
The pool can be imported using '-o readonly=on'.
cannot import 'omega': unsupported version or feature
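As the message says, the pool can still be brought up read-only on the older module, since unsupported features only matter for writes:

# attach the pool read-only until the plugin ships a ZFS with log_spacemap
zpool import -o readonly=on omega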
 

Link to comment

This may be an idiotic question, but is this based on OpenZFS 2.0? I'm currently running FreeNAS but thinking of switching. If I do, would upgrading to TrueNAS Core and upgrading my pool make it compatible with the ZFS plugin for Unraid? I'm trying to avoid copying everything to a couple of disks and copying it back.

Link to comment

OK, so, currently my storage is on FreeNAS 11.3-U3.2.  I booted into Unraid, loaded the ZFS on Unraid plugin and was able to see and import one of my storage pools.  However, I couldn't control the mount point.  I tried issuing the command

# zpool import FirewallSSD -d /mnt/FirewallSSD

I received the following message


cannot import 'FirewallSSD': no such pool available

However, I was able to import the pool by just issuing the command

# zpool import FirewallSSD

I also tried the following command to force the mount point

# zpool import -d /mnt/FirewallSSD FirewallSSD

The system responded with


cannot import 'FirewallSSD': no such pool available

 

Am I missing something? Is there a way to import these and set the mount point to /mnt/PoolName, or is there a better method? It looks like I may be able to import my pools and upgrade them afterwards, correct?

Link to comment
9 minutes ago, cadamwil said:

OK, so, currently my storage is on FreeNAS 11.3-U3.2.  I booted into Unraid, loaded the ZFS on Unraid plugin and was able to see and import one of my storage pools.  However, I couldn't control the mount point.  I tried issuing the command


# zpool import FirewallSSD -d /mnt/FirewallSSD

I received the following message

However, I was able to import the pool by just issuing the command


# zpool import FirewallSSD

I also tried the following command to force the mount point


# zpool import -d /mnt/FirewallSSD FirewallSSD

The system responded with

 

Am I missing something? Is there a way to import these and set the mount point to /mnt/PoolName, or is there a better method? It looks like I may be able to import my pools and upgrade them afterwards, correct?

The -d flag tells zpool import which directory to scan for devices, not where to mount ("-m" on zpool create sets a mount point, but on zpool import it relates to missing log devices). Import the pool first, then set the mount point with "zfs set mountpoint=/path pool".

 

Useful commands: zfs --help and zpool --help.
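Spelled out for the pool above (pool name taken from the earlier posts; the -R variant is an alternative, not a second step):

# import, then give the pool a persistent mount point
zpool import FirewallSSD
zfs set mountpoint=/mnt/FirewallSSD FirewallSSD

# or: import with a temporary altroot so datasets mount under /mnt
zpool import -R /mnt FirewallSSD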

 

Link to comment

 

root@PDrasterServer:~# zpool status -v
  pool: SSD
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 25K in 0 days 00:03:03 with 2 errors on Fri Aug 21 22:34:29 2020
config:

        NAME        STATE     READ WRITE CKSUM
        SSD         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdi     ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /mnt/SSD/domains/VMw101/vdisk1.img


I ran into a problem: when I run a VM, errors constantly appear and then the VM just freezes. If I use the pool just for reading/writing files, everything is OK.
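(For reference, and assuming the damaged vdisk gets restored from backup or deleted first, the usual sequence to confirm the pool is clean again would be:)

# reset the error counters, then verify with a fresh scrub
zpool clear SSD
zpool scrub SSD
zpool status -v SSD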

Link to comment
6 hours ago, GameOverPD said:

 


root@PDrasterServer:~# zpool status -v
  pool: SSD
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub repaired 25K in 0 days 00:03:03 with 2 errors on Fri Aug 21 22:34:29 2020
config:

        NAME        STATE     READ WRITE CKSUM
        SSD         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdi     ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /mnt/SSD/domains/VMw101/vdisk1.img


I ran into a problem: when I run a VM, errors constantly appear and then the VM just freezes. If I use the pool just for reading/writing files, everything is OK.

 

Why are you running raidz1 with 2 devices? Did a disk drop offline?

 

I highly recommend using BTRFS RAID-5 instead of ZFS RAIDZ1 for an SSD pool in Unraid.

Integration with the OS is something you take for granted until, for instance, a disk drops from a ZFS pool without you realising it.

(PS: ZFS also completely ignores isolcpus, so don't be surprised when things lag terribly under heavy I/O.)

Link to comment
4 hours ago, testdasi said:

 

Why are you running raidz1 with 2 devices? Did a disk drop offline?

 

I highly recommend using BTRFS RAID-5 instead of ZFS RAIDZ1 for an SSD pool in Unraid.

Integration with the OS is something you take for granted until, for instance, a disk drops from a ZFS pool without you realising it.

(PS: ZFS also completely ignores isolcpus, so don't be surprised when things lag terribly under heavy I/O.)

I actually have 6 SSD disks; I created this pool as a quick test to see if the problem repeats. By BTRFS RAID-5, do you mean a cache pool? The cache already uses other SSDs of different sizes.

Edited by GameOverPD
Link to comment

The first release candidate of OpenZFS 2.0 has been released:

https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0-rc1

 

I have built it for unRAID 6.9.0 beta 25.

 

For those already running ZFS 0.8.4-1 on unRAID 6.9.0 beta 25 who want to update: you can just uninstall this plugin and re-install it (don't worry, you won't have any ZFS downtime), or run this command and reboot:

rm /boot/config/plugins/unRAID6-ZFS/packages/zfs-0.8.4-unRAID-6.9.0-beta25.x86_64.tgz

Either way, you should see this:

#Before
root@Tower:~# modinfo zfs | grep version
version:        0.8.4-1
srcversion:     E9712003D310D2B54A51C97

#After 
root@Tower:~# modinfo zfs | grep version
version:        2.0.0-rc1
srcversion:     6A6B870B7C76FB81D4FEFB4
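One caveat worth adding (not from the announcement itself): enabling the new 2.0 pool feature flags is one-way and makes a pool unimportable on 0.8.x, so hold off until you're sure you won't roll back:

# list pools whose feature flags are not all enabled yet
zpool upgrade

# enable all supported features on a pool (irreversible!)
zpool upgrade tank   # "tank" is a placeholder pool name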

 

Link to comment
