
Reboot required to apply Unassigned Devices update + ZFS


Recommended Posts

UD creates three folders at /mnt/ for its mounts: /mnt/disks/, /mnt/remotes/, and /mnt/rootshare/.  When UD is installed, it adds a protection mount to each of those folders that prevents files and folders from being written into them.  This is to keep misconfigured Dockers and VMs from filling the /tmp file system and crashing Unraid.  If one of those folders already has mounts when this protection is added (this can happen when UD is upgraded), you will be asked with the banner you are seeing to reboot so UD can apply the protection mounts.  It appears that something other than UD is adding a mount point to one of those folders.  Check your ZFS configuration.
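The protection works roughly like this (a sketch only, with hypothetical values; the exact mechanism and options UD uses may differ). Mounting a small size-capped tmpfs over an otherwise empty directory means a runaway writer hits the cap instead of filling RAM:

```shell
#!/bin/sh
# Illustration only: DIR and SIZE are hypothetical values for the sketch.
# A 1M size-capped tmpfs over an empty directory means a misconfigured
# container writing there hits the 1M cap instead of filling RAM.
DIR="/mnt/disks"
SIZE="1M"

# Printed as a dry run; on a live server this needs root and would be run as-is.
CMD="mount -t tmpfs -o size=${SIZE} tmpfs ${DIR}"
echo "$CMD"
```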

Link to comment
9 hours ago, dlandon said:

UD creates three folders at /mnt/ for its mounts: /mnt/disks/, /mnt/remotes/, and /mnt/rootshare/.  When UD is installed, it adds a protection mount to each of those folders that prevents files and folders from being written into them.  This is to keep misconfigured Dockers and VMs from filling the /tmp file system and crashing Unraid.  If one of those folders already has mounts when this protection is added (this can happen when UD is upgraded), you will be asked with the banner you are seeing to reboot so UD can apply the protection mounts.  It appears that something other than UD is adding a mount point to one of those folders.  Check your ZFS configuration.

My ZFS is mounted in /mnt/disks/zfs

What do I have to do to make it disappear?

Link to comment
1 hour ago, itimpi said:

If the OP mentions where/how the ZFS pool is being mounted we could probably give helpful advice on how to get things done in the right order?

 

Yes, that would help, but my understanding is they are auto-mounted and I don't know how or when.  This discussion should probably move to the ZFS support forum.

Link to comment
1 hour ago, dlandon said:

Yes, that would help, but my understanding is they are auto-mounted and I don't know how or when.  This discussion should probably move to the ZFS support forum.

ZFS is always automounted; UD does not support ZFS.

Link to comment
8 minutes ago, Tommy said:

I tried to delay the ZFS service, but this is a very dirty solution.

Can I ignore this warning (and wait for official ZFS support)? Are there any consequences of leaving this as is?

What's dirty about it?  You can ignore the warning about rebooting, but the reason for that notice is to apply the write protection on /mnt/disks/ that keeps misconfigured VMs and Docker containers from writing into /mnt/disks/ and crashing Unraid.

Link to comment
44 minutes ago, Tommy said:

I tried to delay the ZFS service, but this is a very dirty solution.

On 3/16/2022 at 8:54 PM, Tommy said:

My ZFS is mounted in /mnt/disks/zfs

Why don't you change the mount point to /mnt/zfs?

This would be the easiest method and you don't get this warning...

 

Something like this should have you covered:

zfs set mountpoint=/mnt/zfs yourpoolname

(I would recommend stopping the array first, changing all directories to the right location, and after that starting the array again)
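Spelled out as a sequence (a sketch only; "yourpoolname" is a placeholder, and the steps are echoed as a dry run here since the real commands need root on a live server):

```shell
#!/bin/sh
# Placeholder pool name and target mountpoint -- substitute your own.
POOL="yourpoolname"
NEW_MP="/mnt/zfs"

# Dry run: print the steps instead of executing them.
echo "1. Stop the array from the Unraid webGUI"
echo "2. zfs set mountpoint=${NEW_MP} ${POOL}"
echo "3. zfs get -H -o value mountpoint ${POOL}   # verify, should print ${NEW_MP}"
echo "4. Update any Docker/VM paths that pointed at the old location"
echo "5. Start the array again"
```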

Link to comment
3 hours ago, ich777 said:

Why don't you change the mount point to /mnt/zfs?

This would be the easiest method and you don't get this warning...

 

Something like this should have you covered:

zfs set mountpoint=/mnt/zfs yourpoolname

(I would recommend stopping the array first, changing all directories to the right location, and after that starting the array again)

There is no problem, I can do it.

Link to comment
4 hours ago, ich777 said:

Why don't you change the mount point to /mnt/zfs?

This would be the easiest method and you don't get this warning...

 

Something like this should have you covered:

zfs set mountpoint=/mnt/zfs yourpoolname

(I would recommend stopping the array first, changing all directories to the right location, and after that starting the array again)

 

Unfortunately, if you do so, the "Fix Common Problems" plugin (see spoiler picture) gives you this warning, which, of course, you could simply ignore...

But on the other hand, why would it throw this warning if it makes no sense?

So I'd second the question of which would be the right path to mount ZFS pools. Maybe someone from the Unraid devs could have a look and answer... Thanks!

 

Spoiler: [screenshot of the Fix Common Problems warning]

 

Link to comment
2 minutes ago, DaKarli said:

But on the other hand, why would it throw this warning if it makes no sense?

Because the most common cause of extra folders in /mnt is somebody mistyping or misunderstanding a container mapping, which will quickly run the server out of RAM if not addressed.

 

You created the folders on purpose, it's up to you to manage whether they are mounted properly before you start writing to them. The warning isn't saying don't do it, it's saying "This error may or may not cause issues for you". Feel free to click the ignore error button, since you are aware of the issues and can manage the risk.

 

It's just notifying you that folders outside of Unraid's direct control exist in a spot that could write to RAM instead of a mounted filesystem.

 

If your mount process fails for whatever reason, the folder will happily accept data written, up until you exhaust the available RAM and crash the server.

Link to comment
18 minutes ago, JonathanM said:

Because the most common cause of extra folders in /mnt is somebody mistyping or misunderstanding a container mapping, which will quickly run the server out of RAM if not addressed.

 

You created the folders on purpose, it's up to you to manage whether they are mounted properly before you start writing to them. The warning isn't saying don't do it, it's saying "This error may or may not cause issues for you". Feel free to click the ignore error button, since you are aware of the issues and can manage the risk.

 

It's just notifying you that folders outside of Unraid's direct control exist in a spot that could write to RAM instead of a mounted filesystem.

 

If your mount process fails for whatever reason, the folder will happily accept data written, up until you exhaust the available RAM and crash the server.

OK, I changed the mount point to /mnt/zfs, but now I remember why I previously switched it from /mnt/zfs to /mnt/disks/zfs...

 

This message was the reason:

Event: Fix Common Problems - Larva

Subject: Errors have been found with your server (Larva).

Description: Investigate at Settings / User Utilities / Fix Common Problems

Importance: alert

**Invalid folder zfs contained within /mnt**

 


Edited by Tommy
Link to comment
54 minutes ago, DaKarli said:

So I'd second the question of which would be the right path to mount zfs pools?

From my perspective the right path is /mnt since not everyone has Unassigned Devices installed.

 

Have you even read the ZFS support thread? There the /mnt path is also used as the mount path.

 

57 minutes ago, DaKarli said:

Fix Common Problems

@Tommy Isn't there a button to ignore that?

 

Basically you can mount your ZFS volumes wherever you want but I would strongly recommend that you mount it in the /mnt directory.

 

The last thing that you can do is uninstall Fix Common Problems if you can't ignore that warning, or contact @Squid about that.

Link to comment
8 minutes ago, ich777 said:

From my perspective the right path is /mnt since not everyone has Unassigned Devices installed.

 

From my point of view, /mnt/disks is the proper place, as everything in /mnt is assumed to be managed by the OS with the exception of disks, remotes, and rootshare.  @dlandon can pipe in.

 

But, like everything else in FCP, most warnings are "opinions" (mine) on what the "proper" way to do things is.  That's the entire reason why Ignore is there.

Link to comment
1 hour ago, Squid said:

From my point of view, /mnt/disks is the proper place, as everything in /mnt is assumed to be managed by the OS with the exception of disks, remotes, and rootshare.  @dlandon can pipe in.

I agree.  Using /mnt/ and having VMs or Docker Containers misconfigured can cause some serious issues.  UD sets write limits of 1MB on /mnt/disks/, /mnt/remotes/, and /mnt/rootshare/ to keep misconfigured VMs and Docker Containers from filling the tmpfs and crashing Unraid.  Install UD and ignore it.  You don't have to use any of its features.

 

The only thing is the ZFS mounts have to happen after UD has installed.

Link to comment
On 3/18/2022 at 8:00 PM, dlandon said:

I agree.  Using /mnt/ and having VMs or Docker Containers misconfigured can cause some serious issues.  UD sets write limits of 1MB on /mnt/disks/, /mnt/remotes/, and /mnt/rootshare/ to keep misconfigured VMs and Docker Containers from filling the tmpfs and crashing Unraid.  Install UD and ignore it.  You don't have to use any of its features.

 

The only thing is the ZFS mounts have to happen after UD has installed.

@dlandon@Squid@ich777

 

I've put you three on copy because I think the question "which is the best mountpoint for zfs" has to be discussed and needs a clear statement, given the hopefully soon-to-be-realised plans to natively integrate ZFS as an underlying filesystem. I also understand that we may be at too early a stage for this discussion to happen...

 

I am aware that I can mount my zfs wherever I want (technically speaking), and during my first steps with Unraid I realized that Unraid behaves differently from what I have seen with other Linux systems. To be honest, despite reading through a lot of posts in this forum, I still don't completely understand how Unraid handles mounts, config files, its complete bootup, the fact that the system resides in RAM after boot, etc.

I have quite good knowledge of all the technology involved, but until I have the big picture here, I am poking a little bit in the mist... ;-)

 

My first attempt to mount zfs was under /, but I realized I couldn't choose e.g. the VM path in the file picker if it resided there. My second approach, after disabling user shares in global share settings, was to mount my zfs under /mnt/disks/zfs (as suggested), which allowed me to use the file picker but led to problems with Docker on my zfs share (I had read the warning but tried it nevertheless).

Re-enabling user shares and putting Docker back onto a regular Unraid array share solved this, but then I got into trouble with the file picker, as it only showed the paths provided by user shares (appdata, domains, system, isos). So the file picker was worthless for what I intended.

Not being happy with such a long path for my zfs mount, I went back to mounting zfs under /mnt/zfs, which led to UD warning me about that... OK, I can ignore that if I want to, but on the other hand, what is the impact?

 

Now @dlandon said UD sets write limits on the /mnt/disks path! So this absolutely means it would not be advisable to put zfs in this path!

To sum up, and as I am (for the moment) OK with not being able to make use of the file picker, I am going to leave my zfs mounted under /mnt/zfs and will click to ignore the warning in Fix Common Problems.

 

What I don't get is what you mean by "...the ZFS mounts have to happen after UD has installed...".

I installed UD, then I created my ZFS pools and datasets with the mountpoints at /mnt/zfs-pools/zfs-datasets - and for the moment see no problems with that.

Or is it something else you mean by that?

Edited by DaKarli
fixed a mistake
Link to comment

It's too early to have this discussion about the mount points.  When ZFS is implemented in Unraid, UD will probably be able to mount ZFS disks that are not in the array.  I expect that UD would mount legacy ZFS disks so the data could be copied into the array or a new Unraid ZFS pool.  The mount point for UD mounted ZFS disks would be /mnt/disks/.  That was recommended for now to avoid the FCP /mnt/ warning.

 

28 minutes ago, DaKarli said:

I installed UD, then I created my ZFS pools and datasets with the mountpoints at /mnt/zfs-pools/zfs-datasets - and for the moment see no problems with that.

Or is it something else you mean by that?

This does not mean the initial installation of UD.  When your server is booted, plugins are installed in alphabetical order.  UD has to complete its installation and set up the protection on the /mnt/disks/ folder before anything can be mounted, or UD detects that mount and insists on a reboot to clear the mount on /mnt/disks/ so it can install the protection.  In your situation the ZFS pools are auto-mounting to /mnt/disks/ before UD can apply its protection mechanism, and that's why you see the reboot message.

 

For now mount your ZFS disks at /mnt/zfs/ and ignore the FCP warning.

Link to comment
52 minutes ago, dlandon said:

It's too early to have this discussion about the mount points.  When ZFS is implemented in Unraid, UD will probably be able to mount ZFS disks that are not in the array.  I expect that UD would mount legacy ZFS disks so the data could be copied into the array or a new Unraid ZFS pool.  The mount point for UD mounted ZFS disks would be /mnt/disks/.  That was recommended for now to avoid the FCP /mnt/ warning.

 

This does not mean the initial installation of UD.  When your server is booted, plugins are installed in alphabetical order.  UD has to complete its installation and set up the protection on the /mnt/disks/ folder before anything can be mounted, or UD detects that mount and insists on a reboot to clear the mount on /mnt/disks/ so it can install the protection.  In your situation the ZFS pools are auto-mounting to /mnt/disks/ before UD can apply its protection mechanism, and that's why you see the reboot message.

 

For now mount your ZFS disks at /mnt/zfs/ and ignore the FCP warning.

Thanks @dlandon for the explanation. Another puzzle completing the big picture.

 

Just as I thought, it's too early yet... 😉


The good thing with ZFS is that even if everything changes in the way Unraid implements it, a ZFS pool can simply be exported/imported with just two commands, so you are able to run your Z-pool on ANY system that supports ZFS. This file system has been rock-solid and really bullet-proof since I started using it back in about 2007/8 on Solaris 10... 😎
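For reference, those two commands look like this ("tank" is a placeholder pool name; echoed as a dry run here, since the real commands need root and a live pool):

```shell
#!/bin/sh
POOL="tank"   # placeholder pool name -- substitute your own

# On the old system: unmount the datasets and mark the pool exported.
echo "zpool export ${POOL}"
# On the new system: scan attached devices for the pool and mount it.
echo "zpool import ${POOL}"
```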

 

...now going to upgrade to Unraid 6.10-rc4 to see what has changed under the hood... 😋

Link to comment
