ZFS plugin for unRAID


steini84

Recommended Posts

Finally got it so I can at least explore the folders, but I cannot add any folders or content.

I also have a second Unraid server I am trying to link up. The primary one runs ZFS and is storage only; the second one runs everything else, like Jellyfin, SAB, Radarr, Sonarr and more. I was able to map it over, but again I can't add folders/data via Sonarr, for example.

Link to comment
1 hour ago, anylettuce said:

Finally got it so I can at least explore the folders, but I cannot add any folders or content.

I also have a second Unraid server I am trying to link up. The primary one runs ZFS and is storage only; the second one runs everything else, like Jellyfin, SAB, Radarr, Sonarr and more. I was able to map it over, but again I can't add folders/data via Sonarr, for example.

 

If you want to make the ZFS datasets open to everyone, run this on the Unraid server with your ZFS pool:

 

chown -R nobody:users /zfs/movies

chown -R nobody:users /zfs/music

chown -R nobody:users /zfs/tv

 

That should probably fix it, but if not you could try this as well:

 

chmod -R 777 /zfs/movies

chmod -R 777 /zfs/music

chmod -R 777 /zfs/tv

Link to comment
On 8/29/2021 at 1:53 PM, anylettuce said:

Following SpaceInvader One's new video I set everything up, even the last part with a share of the zpool. Everything went through with no issues, but I can't seem to access it via the network. It shows up, but that is it. I can get to the root of the zfs pool, but once in there I either get locked out or see an empty folder. It now asks for a password, but neither the Unraid nor the Windows password works, even though every SMB share is set to public.

 

NAME                    USED  AVAIL     REFER  MOUNTPOINT
zfs                    4.96M   179T      354K  /zfs
zfs/movies              236K   179T      236K  /zfs/movies
zfs/music               236K   179T      236K  /zfs/music
zfs/tv                  236K   179T      236K  /zfs/tv

 

I will add that this is a new build using the latest RC and no data is on the server yet. I can start over if needed.

In my experience you can no longer use Unraid's sharing menu to share files stored on ZFS via SMB.  Instead, use the /boot/config/smb-extra.conf approach that @jortan mentions below.  Unraid has added more automation to its sharing system, including automatically creating the share folder - which does not play well with a folder that already exists.
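For reference, a share definition in smb-extra.conf might look something like this (the share name and path are just placeholders matching the datasets above; adjust to taste):

[movies]
    path = /zfs/movies
    browseable = yes
    guest ok = yes
    writeable = yes
    create mask = 0664
    directory mask = 0775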

 

Of course this functionality is likely to change when they finally add native ZFS support to Unraid.  We are all waiting for that day for various reasons!

 

Oh and you do have to stop and restart the array (or reboot) for these shares to become active.  So it pays to plan them out in advance.

  • Like 1
Link to comment
52 minutes ago, Marshalleq said:

Oh and you do have to stop and restart the array (or reboot) for these shares to become active.  So it pays to plan them out in advance.

 

Any shares configured in /boot/config/smb-extra.conf will appear after restarting the Samba service:

 

/etc/rc.d/rc.samba restart

 

Edited by jortan
Link to comment
On 9/1/2021 at 9:22 PM, jortan said:

 

Any shares configured in /boot/config/smb-extra.conf will appear after restarting the Samba service:

 

/etc/rc.d/rc.samba restart

 

If that happens now, that's great - it didn't always.  I so seldom reboot my prod box now that I have no idea.  Having a dev and a prod box was a really good move - it definitely improved the uptime in the house - I can't stop fiddling sometimes!

Link to comment

SpaceInvader One's video brought me here. I installed the ZFS and ZFS Companion plugins and my GUI won't come up. If I boot in safe mode the GUI shows up, and if I remove both plugins the system boots normally. Are there any logs I can go through to see what is causing this?

Edit: reinstalled and rebooted - no issues this time.

Edited by Xxharry
update
Link to comment

I installed ZFS and I can see the shares I set up in smb-extra.conf. I also need an NFS share and a Time Machine share - how do I set those up? Thanks.

P.S. - where can I see all the options available in smb-extra.conf? I would like to do the equivalent of Unraid's Export = No option and allow only authenticated users on some of them.

Link to comment

@steini84 

 

Is there any reason you know of that the ZFS plugin (or ZFS in general) would have issues with super-fast drives?

 

I have a pool with four Gen4 2TB NVMe drives in it, and when I'm writing a lot of data at once the pool gets corrupted, which causes unresponsiveness and crashes. Upon reboot:

 

 pool: fast
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:11:15 with 2 errors on Wed Sep  8 10:52:31 2021
config:

        NAME                                           STATE     READ WRITE CKSUM
        fast                                           ONLINE       0     0     0
          nvme0n1                                      ONLINE       0     0     0
          nvme1n1                                      ONLINE       0     0     0
          nvme2n1                                      ONLINE       0     0     0
          nvme-Sabrent_Rocket_Q4_03F10707144404184492  ONLINE       0     0     0

 

It only seems to happen if I'm transferring at higher than 2Tb/s or so...

This happens on my other raidz pool too, but only when writing or reading a lot:

  pool: vmstorage
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 129K in 00:32:13 with 1 errors on Wed Sep  8 11:13:23 2021
config:

        NAME        STATE     READ WRITE CKSUM
        vmstorage   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
            sdj     ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        /vmstorage/domains/Lightning Node/vdisk1.img

 

now, on this one, I cannot just delete that file.... 
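(For reference, once the affected file has been restored from a backup/snapshot or can be written off, the usual way to get the pool clean again - pool name taken from the output above - is roughly:)

zpool clear vmstorage        # reset the pool's error counters
zpool scrub vmstorage        # re-check every block
zpool status -v vmstorage    # the permanent-errors list should now be empty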

Link to comment

Maybe a long shot, but the only thing I remember seeing was a video on Linus Tech Tips where he was trying to build an all-NVMe storage array (I believe 24 drives or so) and it required some BIOS and/or kernel adjustments because it was too fast. I'm completely unclear about the details as it was a while ago, but it should be easy to find.
Otherwise no clue, but it sounds scary.

Link to comment

@TheSkaz I'd tend to start from the angle @glennv has alluded to, i.e. hardware.  The only place I've seen ZFS struggle is when the disks are externally mounted over USB.  I think there is a bug logged for that, but I'm not sure of its status.  As you can imagine, ZFS needs consistent communication with the devices to work properly, so my first thought is that the hardware is not keeping up or is being saturated somehow.

 

I've also been surprised at just how rubbish my NVMe Seagate FireCuda drives are compared to the Intel SSDs I have.  So either that confirms your theory, or it shows the potential variability in hardware.

 

I would be interested in what you find either way.

Link to comment
53 minutes ago, Marshalleq said:

I've also been surprised at just how rubbish my NVMe Seagate FireCuda drives are compared to the Intel SSDs I have.

 

For what workload?  Outside of "370GB of sustained writes filling up the pSLC cache" scenarios, they seem to perform well.

 

I'm currently using 2 x FireCuda 520 NVMes in a RAIDZ1 pool (for possible raidz expansion later).

 

No issues encountered, though mine are sitting behind a PCIe 3.0 switch.

Link to comment

Ah, so you have the same drives - that's interesting.  I got mine for some Chia plotting due to low Intel stock.  Their performance is actually lower than some much older Intel SATA-connected drives.  And when I say lower, I was putting the two FireCudas into a zero-parity stripe for performance versus four of the SATA drives in a stripe.  Given the phenomenal advertised speed difference between NVMe and SATA, I was not expecting reduced performance.

 

Mine are also the 520s, in the 500GB flavour.  However I don't have PCIe 4.0; I have them connected via pass-through cards in two PCIe x16 slots.  My motherboard is an X399 board with gazillions of PCIe lanes, so that's not a limiting factor.  Even without PCIe 4.0 it should still have been a very large performance increase as far as I know.  But this is my first foray into NVMe - either way, the result was disappointing.

 

EDIT - I should add that Chia plotting is one of the few use cases that really exposes good or bad drive hardware and connectivity options.  I'm not sure how much you know about it, but it writes about 200GB of data and outputs a final file of roughly 110GB.  The process takes anywhere from 25 minutes to hours depending on your setup, and disk is the primary factor that slows everything down.  I was managing about 28 minutes on the Intels and about 50 minutes on the FireCudas.

 

I should also add (because I can see you asking me now) that a single Intel SSD also outperformed the FireCudas, coming in at about 33 minutes.

 

The single Intel was a D3-S4510, and the four Intels in the stripe were DC S3520 240GB M.2s.

 

That should give you enough information to compare them and understand (or maybe suggest another reason) why the FireCudas were so much slower.  On paper, I don't think they should have been.

Edited by Marshalleq
Link to comment
41 minutes ago, Marshalleq said:

EDIT - I should add that Chia plotting is one of the few use cases that really exposes good or bad drive hardware and connectivity options

 

Agreed, the FireCuda 520 is not optimal for Chia plotting.  You're hitting SLC cache limits, as well as potentially reduced write speed due to TLC and the drive filling up:

https://linustechtips.com/topic/1234481-seagate-firecuda-520-low-write-speeds/?do=findComment&comment=13923773

 

I'm doing some background serial plotting now, but just using an old 1TB SATA disk with -2 pointing to a 110GB ramdisk.  It takes about 5 hours per plot, but Chia netspace growth has really levelled off now, so I'm less keen to burn through SSDs/NVMes:

https://xchscan.com/charts/netspace

 

I was more interested in the TBW rating.  It doesn't compare to enterprise SSDs, but it's good compared to other consumer NVMe drives.  I'm hoping these will run my dockers/VMs for 5 years or more.

Link to comment

So yes, my original assessment stands then: its performance is abysmal.  Why doesn't really matter - though I read that link and understand it's another drive that cheats with a fast cache at the start and slow writes after it fills.  So it's great for minor bursts but not much else.

 

I was using a ramdisk too for the numbers above.

 

Yes, I got these because of the 'advertised' speed and the advertised endurance.  Normally I'd buy Intel, but the store had none.  I'm fairly new to Chia, but I'm happy it's levelled off a bit, that's for sure.  Those people who blow 100k on drives and are only in it for the money - they deserve to leave!  I do have two faulty drives I need to replace, which will have plots added, but that's it.

 

I should add, I'm grateful for the link as now I understand it's definitely not me!

 

Thanks.

 

Marshalleq.

Edited by Marshalleq
Link to comment
48 minutes ago, Marshalleq said:

So yes, my original assessment stands then: its performance is abysmal.  Why doesn't really matter

 

I don't want to labour the point, but it does matter if your use case isn't huge streaming writes like Chia plotting.  For most people:

 

Chia plotting = Abysmal

ZFS pool for some dockers and VMs = Great

 

The FireCuda 520 isn't in this graph, but most consumer NVMe devices show a similar reduction in write performance after their fast cache fills up:

 

[Graph: sustained write performance of various consumer NVMe drives dropping off once their fast cache is exhausted]

Edited by jortan
clarification
Link to comment
  • 2 weeks later...

Can confirm: I am here because of Linus. 

Couple of questions from this video that I actually came over here to clarify:

1. 9:00: Due to a limitation of Unraid, "we still need to have an array... [to fill with placeholder drives]... this will be fixed a few versions from now."

I'm not sure I follow this. To be clear, is the idea that when running ZFS you still need a placeholder drive in the array? If so, does it need to meet any space or spec guidelines, or can it literally be _any_ drive?

The video does a little handwaving over "a couple of months until this is resolved". Should I wait, or is this not really a big deal?

2. 12:25: ZFS is not really designed for NVMe...
a. Set the ARC cache to metadata only: `zfs set primarycache=metadata {{pool_name}}`
b. Enable auto-trim (just in case, as ZFS may set this automatically): `zpool set autotrim=on {{pool_name}}`
c. Disable access time: `zfs set atime=off {{pool_name}}`
d. Set compression to lz4: `zfs set compression=lz4 {{pool_name}}`

I lurked on this forum, probably a year ago, and my takeaway at the time was _don't use SSDs only with Unraid and ZFS_, so I left a mental note to come back when SSDs "were a thing". This video prompted that.

With the tweaks above, is ZFS on Unraid (SSD-only) a reasonable thing to do now, or is it a few months out, like question 1?

3. 15:00: The transfer pinned a single core and ran at ~258MB (screenshot at 15:32), presumably over SMB. I realize a lot of details are unclear, such as how the laptops are connected (wireless vs. wired, and at what bandwidth), but that transfer seems pretty uninspiring. Anyone want to hazard a guess as to the issue and whether there is an obvious misconfiguration?

 

Overall, super interesting and an all SSD NAS is something that's been on my radar for a while. This video was a good chance to introduce myself and ask a few questions (and maybe help others here with similar questions).

Link to comment
2 hours ago, NewMountain said:

I'm not sure I follow this. To be clear, is the idea that when running ZFS you still need a placeholder drive in the array? If so, does it need to meet any space or spec guidelines, or can it literally be _any_ drive?

 

I have an unraid server for testing that uses a single 8GB thumb drive for the array. You don't need to assign a parity device.

 

Keep in mind that by default the "system" share (libvirt image, docker image) is going to be placed on the array, since presumably you won't have an Unraid cache device either.  If you're going to use a thumb drive for your array plus a ZFS pool for storage, you will want to make sure all of these point to appropriate locations in your ZFS pool:

 

Settings | Docker

- Docker vDisk location

- Default appdata storage location

 

Settings | VM Manager

- Libvirt storage location

- Default VM storage path

- Default ISO storage path

 

Note that some dockers from Community Applications ignore the default appdata storage location, and will still default to:

/mnt/user/appdata/xxxx

 

Make sure you check these and change them to a path within your pool when adding any new docker applications.
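A rough sketch of one way to lay that out (pool and dataset names here are placeholders, not anything Unraid requires):

zfs create zfs/appdata    # Default appdata storage location -> /zfs/appdata
zfs create zfs/system     # Docker vDisk + libvirt image     -> /zfs/system
zfs create zfs/domains    # Default VM storage path          -> /zfs/domains
zfs create zfs/isos       # Default ISO storage path         -> /zfs/isos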

 

2 hours ago, NewMountain said:

a. Set the ARC cache to metadata only: `zfs set primarycache=metadata {{pool_name}}`

 

I'm no ZFS expert, but I'm not sure this is a good idea.  From what I understand, this setting could add write amplification for asynchronous writes and cause other performance issues.  For a dataset of Blu-ray images it makes sense; not so much for dockers/VMs.

 

The ZFS ARC will use up to half your system memory for caching by default (I think?) - but it is also very responsive to other memory demands and will release memory required by other processes.  In most cases it's best to just let the ARC use whatever memory it can, unless you have large databases or other processes that could make better use of unallocated memory.
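If you do want to cap it anyway, the usual knob on Linux is the zfs_arc_max module parameter - a quick sketch, with the 8 GiB figure being an arbitrary example:

grep -E '^(size|c_max) ' /proc/spl/kstat/zfs/arcstats     # current ARC size and ceiling, in bytes
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max  # cap the ARC at 8 GiB, takes effect at runtime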

 

2 hours ago, NewMountain said:

Enable auto-trim (just in case, as ZFS may set this automatically): `zpool set autotrim=on {{pool_name}}`

 

You should still set up a user script to run "zpool trim poolname" manually in addition to this:

https://askubuntu.com/questions/1200172/should-i-turn-on-zfs-trim-on-my-pools-or-should-i-trim-on-a-schedule-using-syste
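For example, a tiny script for the User Scripts plugin on a weekly schedule could be as simple as this (the pool name is a placeholder):

#!/bin/bash
# run a full TRIM pass on the pool, in addition to autotrim=on
zpool trim poolname
# check progress later with: zpool status -t poolname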

Edited by jortan
Link to comment

You should have a single drive for Unraid's array so the system can start, since most services need the array to be started.  Also, your docker.img should be placed on this drive, as was written earlier.

I have atime off in general, and setting autotrim on for SSDs is something I just do, like setting ashift manually at creation.  For compression I would go with zstd instead of lz4, however.
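Put together, that might look something like this at creation time (pool name and devices are placeholders - a sketch rather than a recipe):

zpool create -o ashift=12 -o autotrim=on tank mirror nvme0n1 nvme1n1
zfs set atime=off tank
zfs set compression=zstd tank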

  • Like 1
Link to comment
2 hours ago, ich777 said:

Please try not to use an image - use a Docker path instead. I have no problem using everything on a zpool; currently I'm using ZFS 2.1.1 on unRAID 6.10.0-rc1.

 

Limetech does not yet recommend this (https://wiki.unraid.net/Manual/Release_Notes/Unraid_OS_6.9.0#Docker)

I think it would be best if we could get it to work with the ZFS storage driver (https://docs.docker.com/storage/storagedriver/zfs-driver/) once ZFS is officially in Unraid.
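On a stock Docker install the driver can be pinned explicitly via /etc/docker/daemon.json - treat this as a generic Docker sketch rather than an Unraid recipe, since Unraid manages the Docker service itself:

# generic Docker example; Unraid's Docker settings may not expose this file directly
cat > /etc/docker/daemon.json << 'EOF'
{
  "storage-driver": "zfs"
}
EOF

Docker will also select the zfs driver automatically when /var/lib/docker itself lives on a ZFS dataset, which appears to be what happens in the output below.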

 

@ich777 what does

docker info

give you for your setup? Did it autoselect zfs as the storage driver?

Link to comment
7 minutes ago, ich777 said:

I don't understand - what is not recommended, or where does it say it's not recommended?

It says:

Quote

In a specified directory which is bind-mounted at /var/lib/docker.  Further, the file system where this directory is located must either be btrfs or xfs.

ZFS would use the zfs storage driver instead of the btrfs or overlay2 one.

Link to comment
10 minutes ago, Arragon said:

it says

Please keep in mind that this documentation is for stock unRAID systems without any third-party applications/plugins installed, because the developers can't account for every possible scenario - like, in this case, the third-party ZFS plugin from @steini84. ;)

 

root@DevServer:~# docker info
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 2
  Running: 1
  Paused: 0
  Stopped: 1
 Images: 2
 Server Version: 20.10.8
 Storage Driver: zfs
  Zpool: cache
  Zpool Health: ONLINE
  Parent Dataset: cache
  Space Used By Parent: 896096768
  Space Available: 114598258176
  Parent Quota: no
  Compression: off
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b638
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.13.13-Unraid
 Operating System: Slackware 15.0 x86_64
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.642GiB
 Name: DevServer
 ID: THISISMYSECRET
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

 

As you can see it is using the ZFS driver automagically.

 

Please also keep in mind that I run a slightly unusual system where my unRAID cache is also a ZFS pool. That doesn't affect Docker itself; it just comes with a few other caveats, which is why it only runs on my development machine.

Link to comment
