ZFS plugin for unRAID


steini84


That can be done, but is sub-optimal - as the file wasn't created on the ZFS system, the ZFS-specific metadata (xattr, etc.) isn't applied; the file keeps whatever it had at creation, which may not match the ZFS filesystem's settings/config.

 

Worst case is leaving performance on the table though (unless you start accessing that image over the network), so not a huge deal.
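If you want to sanity check what the dataset is actually configured with, something like this works (the property names are standard ZFS ones; the dataset name is just a placeholder):

zfs get xattr,atime,recordsize,compression,sync yourpool/docker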

Link to comment
59 minutes ago, gyto6 said:

I copied my img file and it's working now on my ZFS partition.

 

@asopala

 

If you already have an img file, try to simply copy it to your dedicated dataset and run docker.
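A minimal sketch of that move, with the Docker service stopped first - both paths here are placeholders, adjust them to your own layout:

cp /mnt/cache/system/docker/docker.img /mnt/yourpool/docker/docker.img

Then update the image path in the Docker settings to point at the new location.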

 

So the thing is I have the docker container img pointing to a docker dataset, but it makes its own datasets that look like this.  I'm trying to get rid of them.

 

Screen Shot 2022-07-14 at 7.46.55 PM.png

Link to comment
1 hour ago, asopala said:

 

So the thing is I have the docker container img pointing to a docker dataset, but it makes its own datasets that look like this.  I'm trying to get rid of them.

 

Screen Shot 2022-07-14 at 7.46.55 PM.png

 

This one's been gone through a number of times on the forum (I think in this thread, actually) - check your Docker settings and reconfirm, you'll get it sorted 👍
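If it helps to see exactly what Docker created, something along these lines lists the child datasets (the pool/dataset names are placeholders):

zfs list -r -o name,mountpoint yourpool/docker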

Link to comment
2 minutes ago, BVD said:

 

This one's been gone through a number of times on the forum (I think in this thread, actually) - check your Docker settings and reconfirm, you'll get it sorted 👍

What particular settings should I adjust?  I tried looking for it, but there are 52 pages of stuff to sift through, even with the search function.  Here are my settings under Settings > Docker (screenshot attached).

 

 

Link to comment
7 minutes ago, asopala said:

What particular settings should I adjust?

 

Please don't take this the wrong way - I'm coming at this from a place of respect and wanting to help you protect your data - but if you're not willing to do the legwork to research and find things of this nature, ZFS might not be best suited for you... ZFS is super powerful, but it can also be quite dangerous if deployed without proper care, or at least a willingness to seek out answers oneself.

 

Again, I understand you may be strapped for time, and I don't mean this as any kind of slight, not at all. Just that you'll need the willingness/time/patience to search out your own answers in order to be successful with ZFS in the long run.

 

Apologies in advance for any offense, none meant!

Edited by BVD
Link to comment

Hi everyone,

 

I'm really hoping someone can help me unmount my pool.  I originally had my pool mounted off of /pool/dataset, and wanted to move it into /mnt.  I unmounted, then made the dumb move of running:

zfs set mountpoint=/mnt pool/dataset

I now see that I should have instead run:

zfs set mountpoint=/mnt/pool/dataset pool/dataset

When I try to unmount the pool I consistently get "cannot unmount '/mnt': pool or dataset is busy".  The array is stopped and no Docker containers or VMs are running.

 

I've rebooted the server multiple times, and no matter what I try it is always busy.

root@ur:~# zfs unmount cxUrPool
cannot unmount '/mnt': pool or dataset is busy
root@ur:~# zfs unmount -f cxUrPool/liveFiles
cannot unmount '/mnt': pool or dataset is busy
root@ur:~# zfs unmount -f cxUrPool
cannot unmount '/mnt': pool or dataset is busy
root@ur:~# zfs set mountpoint=/mnt/cxUrPool/liveFiles cxUrPool/liveFiles
cannot unmount '/mnt': pool or dataset is busy

Further complicating matters I have the pool mounted in one spot and the pool/dataset mounted in another:

root@ur:~# zfs get mountpoint
NAME                PROPERTY    VALUE       SOURCE
cxUrPool            mountpoint  /cxUrPool   local
cxUrPool/liveFiles  mountpoint  /mnt        local

        NAME                                      STATE     READ WRITE CKSUM
        cxUrPool                                  ONLINE       0     0     0

I would like to remove both mountpoints and remount into /mnt/cxUrPool/liveFiles.

 

How can I determine what is causing the pool or dataset to be busy, and kill it so I can make this change?
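(For reference, a couple of standard Linux tools can usually answer this; neither is specific to the ZFS plugin:)

fuser -vm /mnt     # list processes holding anything open on the filesystem mounted at /mnt
lsof +D /mnt       # alternative: walk /mnt and report open files (can be slow on large trees)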

 

Thanks everyone,

Cal.

 

Edited by calvados
Link to comment
1 hour ago, calvados said:

Hi everyone,

 

I'm really hoping someone can help me unmount my pool.  I originally had my pool mounted off of /pool/dataset, and wanted to move it into /mnt.  I unmounted, then made the dumb move of running:

zfs set mountpoint=/mnt pool/dataset

I now see that I should have instead run:

zfs set mountpoint=/mnt/pool/dataset pool/dataset

When I try to unmount the pool I consistently get "cannot unmount '/mnt': pool or dataset is busy".  The array is stopped and no Docker containers or VMs are running.

 

I've rebooted the server multiple times, and no matter what I try it is always busy.

root@ur:~# zfs unmount cxUrPool
cannot unmount '/mnt': pool or dataset is busy
root@ur:~# zfs unmount -f cxUrPool/liveFiles
cannot unmount '/mnt': pool or dataset is busy
root@ur:~# zfs unmount -f cxUrPool
cannot unmount '/mnt': pool or dataset is busy
root@ur:~# zfs set mountpoint=/mnt/cxUrPool/liveFiles cxUrPool/liveFiles
cannot unmount '/mnt': pool or dataset is busy

Further complicating matters I have the pool mounted in one spot and the pool/dataset mounted in another:

root@ur:~# zfs get mountpoint
NAME                PROPERTY    VALUE       SOURCE
cxUrPool            mountpoint  /cxUrPool   local
cxUrPool/liveFiles  mountpoint  /mnt        local

        NAME                                      STATE     READ WRITE CKSUM
        cxUrPool                                  ONLINE       0     0     0

I would like to remove both mountpoints and remount into /mnt/cxUrPool/liveFiles.

 

How can I determine what is causing the pool or dataset to be busy, and kill it so I can make this change?

 

Thanks everyone,

Cal.

ur-diagnostics-20220715-0000.zip (attached)

In your case, I'd uninstall ZFS, reboot the machine, and stop the array, Docker, and VMs in order to unmount.

 

PS: I switched back to my XFS ZVOL as I couldn't access the docker tab this morning with the img running from my ZFS partition.

Edited by gyto6
Link to comment
7 hours ago, BVD said:

@calvados can you check for anything in use on the pool?

 

ps -aux | grep cxUrPool

 

Short of this, you can also try setting the mountpoint to legacy.

Hi @BVD,

 

Thanks for your message.  Here is the result of running the ps -aux | grep cxUrPool:

root@ur:~# ps -aux | grep cxUrPool
root     12283  0.0  0.0   3984  2144 pts/0    S+   08:54   0:00 grep cxUrPool

 

I'm not sure how to interpret this.  Looks like I'm (root) using it?  I'm not sure how to kill this process or how to proceed from here.

 

I'll do some reading into how to set the mountpoint to legacy.  Any chance you could point me to a guide on how to achieve that?

 

EDIT:

I figured out the kill command, but now I'm chasing the process ID. The below commands were called within seconds of each other.  I'm super confused, or maybe a bit daft.  Perhaps both.

root@ur:/etc# ps -aux | grep cxUrPool
root      1973  0.0  0.0   3984  2276 pts/0    S+   10:41   0:00 grep cxUrPool
root@ur:/etc# kill -9 1973
bash: kill: (1973) - No such process
root@ur:/etc# ps -aux | grep cxUrPool
root      6737  0.0  0.0   3984  2064 pts/0    S+   10:42   0:00 grep cxUrPool
root@ur:/etc# kill -9 6737
bash: kill: (6737) - No such process
root@ur:/etc# ps -aux | grep cxUrPoo
root      7673  0.0  0.0   3980  2216 pts/0    S+   10:42   0:00 grep cxUrPoo
root@ur:/etc# kill -9 7673
bash: kill: (7673) - No such process
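(Worth noting: the single line in that output is the grep command itself matching its own command line, so there is nothing to kill. The usual workarounds:)

ps aux | grep '[c]xUrPool'   # bracket trick: the grep's own command line no longer matches the pattern
pgrep -af cxUrPool           # or pgrep, which excludes itself by design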

 

Thanks again @BVD,

Cal.

Edited by calvados
Link to comment
7 hours ago, gyto6 said:

In your case, I'd uninstall ZFS, reboot the machine, and stop the array, Docker, and VMs in order to unmount.

 

PS: I switched back to my XFS ZVOL as I couldn't access the docker tab this morning with the img running from my ZFS partition.

Hi @gyto6,

 

If I uninstall the ZFS plugin I no longer have access to zfs, or zpool commands.  Maybe this is a dumb question, but how do I unmount a zfs pool if I don't have zfs commands?

 

Thanks @gyto6,

Cal.

Link to comment
7 hours ago, BVD said:

@calvados can you check for anything in use on the pool?

 

ps -aux | grep cxUrPool

 

Short of this, you can also try setting the mountpoint to legacy.

Hi @BVD,

 

Sorry to hit you with two replies before you've had a chance to reply.  I tried setting mountpoint to legacy but I get the same "pool or dataset is busy".  Any suggestions as to how to proceed?

 

root@ur:/etc# zfs set mountpoint=legacy cxUrPool/liveFiles
cannot unmount '/mnt': pool or dataset is busy

 

Thanks again for your help @BVD,

Cal.

Link to comment
1 hour ago, calvados said:

Hi @gyto6,

 

If I uninstall the ZFS plugin I no longer have access to zfs, or zpool commands.  Maybe this is a dumb question, but how do I unmount a zfs pool if I don't have zfs commands?

 

Thanks @gyto6,

Cal.

I believe if you uninstall the plugin and reboot, the pool you have will not automount on boot. So uninstall, reboot, install the plugin, then import the pool with the mountpoint you want.

 

As for busy drives, make sure there are no shares involved with those datasets.
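For the re-import step, a rough sketch using the pool/dataset names from the earlier posts (adjust as needed):

zpool import -N cxUrPool                                        # -N imports without mounting anything
zfs set mountpoint=none cxUrPool                                # keep the parent dataset unmounted
zfs set mountpoint=/mnt/cxUrPool/liveFiles cxUrPool/liveFiles
zfs mount cxUrPool/liveFiles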

Link to comment
36 minutes ago, muddro said:

I believe if you uninstall the plugin and reboot, the pool you have will not automount on boot. So uninstall, reboot, install the plugin, then import the pool with the mountpoint you want.

 

As for busy drives, make sure there are no shares involved with those datasets.

Hi @muddro @BVD @gyto6,

 

Thanks for all your help.  @muddro's comment got me on track and worked out perfectly.  Thank you all so very much for your help and advice.  It is very much appreciated.

 

One last question...now that I have mounted my dataset in /mnt, Fix Common Problems is throwing an error:

Generally speaking, most times when other folders get created within /mnt it is a result of an improperly configured application. This error may or may not cause issues for you

 

Is there indeed an issue with mounting my zfs dataset inside of /mnt?  Can I safely ignore this error, or are there other factors I'm unaware of and should consider?

 

root@ur:/mnt/liveFiles# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
cxUrPool            9.06T  5.35T      128K  none
cxUrPool/liveFiles  9.06T  5.35T     9.06T  /mnt/liveFiles

 

Thanks again everyone,

Cal.

Link to comment
23 minutes ago, calvados said:

Hi @muddro @BVD @gyto6,

 

Thanks for all your help.  @muddro's comment got me on track and worked out perfectly.  Thank you all so very much for your help and advice.  It is very much appreciated.

 

One last question...now that I have mounted my dataset in /mnt, Fix Common Problems is throwing an error:

Generally speaking, most times when other folders get created within /mnt it is a result of an improperly configured application. This error may or may not cause issues for you

 

Is there indeed an issue with mounting my zfs dataset inside of /mnt?  Can I safely ignore this error, or are there other factors I'm unaware of and should consider?

 

root@ur:/mnt/liveFiles# zfs list
NAME                 USED  AVAIL     REFER  MOUNTPOINT
cxUrPool            9.06T  5.35T      128K  none
cxUrPool/liveFiles  9.06T  5.35T     9.06T  /mnt/liveFiles

 

Thanks again everyone,

Cal.

Ignore it

Link to comment

@calvados the next time this happens, some other things to look at:

 

* Autosnapshot tools - sanoid/syncoid/auto-snapshot.sh, anything that automatically handles snapshot management and is currently configured on the system can cause this. If you have it, kill *that* process (e.g. sanoid) first, then retry.

* intermittent commands - you can try "umount -l /mnt/path" to do a "lazy" unmount; pretty commonly needed for a variety of reasons (a quick sketch of both is below).
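Roughly, and only as a sketch - the snapshot-tool pattern assumes you actually run one of those, and the path is the dataset from this thread:

pgrep -af 'sanoid|syncoid|auto-snapshot'   # anything snapshot-related still running?
umount -l /mnt/liveFiles                   # lazy unmount: detaches now, cleans up once no longer busy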

 

Glad you got it sorted for now!

Link to comment
1 hour ago, BVD said:

@calvados the next time this happens, some other things to look at:

 

* Autosnapshot tools - sanoid/syncoid/auto-snapshot.sh, anything that automatically handles snapshot management and is currently configured on the system can cause this. If you have it, kill *that* process (e.g. sanoid) first, then retry.

* intermittent commands - you can try "umount -l /mnt/path" to do a "lazy" unmount; pretty commonly needed for a variety of reasons. 

 

Glad you got it sorted for now!

Thanks @BVD!  Very much appreciated!

Link to comment
1 hour ago, ich777 said:

As @SimonF pointed out, nothing too fancy: a necessary change to how the plugin downloads/installs the package, plus code cleanup.

 

Actually I pushed the update. 😅

Well thanks then. 😉

 

Have you not been advised by the LimeTech team so far about the ZFS plugin's deprecation, due to its integration into Unraid?

Link to comment
1 hour ago, gyto6 said:

Have you not been advised by the LimeTech team so far about the ZFS plugin's deprecation, due to its integration into Unraid?

Why should it be deprecated? This plugin is basically only the ZFS module and the tools which are needed to make ZFS run on Unraid.

Link to comment
24 minutes ago, ich777 said:

Why should it be deprecated? This plugin is basically only the ZFS module and the tools which are needed to make ZFS run on Unraid.

 

My English might be bad.

 

I was asking if Limetech has warned you about official ZFS support in Unraid.

 

I just thought the Unraid team would warn you if they're planning to support ZFS, so as not to repeat what happened with @CHBMB, who discovered that @limetech had deprecated their work (and more, but that's not the subject) without warning them.

Edited by gyto6
Link to comment
