[Plugin] LXC Plugin



@Roalkege so now I've tested it, I completely wiped my cache drive and formatted it with ZFS and put my LXC directory in /mnt/cache/lxc/:
[screenshot]

 

I then created two containers for testing:
[screenshot]

 

Then I queried the LXC directory (looking good so far with all the configs in there and also the rootfs):

[screenshot]

 

After that I looked up all datasets:
[screenshot]

 

Then I did a reboot just for testing if everything is still working after a reboot:

[screenshot]

 

Then I created two snapshots, one from Debian, then one from Slackware, and then another one from Debian:

[screenshot]

 

And everything seems still to work fine:
[screenshot]

 

 

I really can't help with snapshotting within the ZFS Master plugin. However, if you are taking snapshots with the ZFS Master plugin, I would recommend, at least from my understanding, that you leave LXC in Directory mode, because you are taking a snapshot of the directory/dataset.

 

The ZFS backing storage type is meant to be used with the built-in snapshot feature from LXC/the LXC plugin itself.
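For reference, the built-in snapshotting mentioned here is driven by the stock `lxc-snapshot` tool. A minimal sketch of the workflow (the container name "debian-test" is a made-up example, and the commands are guarded so the snippet is a no-op on a machine without LXC installed):

```shell
CT="debian-test"  # hypothetical container name

if command -v lxc-snapshot >/dev/null 2>&1; then
    lxc-snapshot -n "$CT"           # create a snapshot (named snap0, snap1, ...)
    lxc-snapshot -n "$CT" -L        # list existing snapshots
    lxc-snapshot -n "$CT" -r snap0  # restore the container from snap0
else
    echo "lxc-snapshot not available on this machine, skipping"
fi
```

With a ZFS backing store, these snapshots map onto ZFS snapshots of the container's dataset, which is why mixing them with ZFS Master's own snapshots can get confusing.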

 

 

 

EDIT: I even tried another reboot now with the snapshots in place and everything is working as expected:
[screenshot]

Link to comment

Is it safe to change the backing storage type from default to zfs when there are already existing containers running? Will it make a difference? Can I convert existing ones to the new backing storage type? What do I have to do to do so?

 

Btw, some "tooltips" (I mean those help texts) for various options would be helpful, like for the setting "Default LXC backing storage type:". I have some idea what that means, but others might not know what it involves, what it does, etc. Just an idea for the future :D

Edited by Joly0
Link to comment
2 hours ago, Joly0 said:

Is it safe to change the backing storage type from default to zfs when there are already existing containers running?

Yes.

The containers that use directory will still be using directory, where the new set up ones will use ZFS.

 

2 hours ago, Joly0 said:

Will it make a difference?

Depends on what you are doing with it; it can come in handy when you are planning on doing snapshots with the LXC built-in snapshot feature.

 

2 hours ago, Joly0 said:

Can I convert existing ones to the new backing storage type?

No, but the real answer is a bit more complicated.

 

2 hours ago, Joly0 said:

What do I have to do to do so?

It is possible, but I would not recommend it; it would be better to re-create them from scratch.
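For anyone who wants to attempt the conversion anyway: stock LXC can copy a stopped container onto a different backing store with `lxc-copy -B`. This is only a sketch of that idea (the container names are made up, and it is untested with this plugin, so re-creating from scratch as suggested above remains the safer route):

```shell
SRC="mycontainer"       # hypothetical existing dir-backed container
DST="mycontainer-zfs"   # new copy that should land on the ZFS backing store

if command -v lxc-copy >/dev/null 2>&1; then
    lxc-stop -n "$SRC" 2>/dev/null   # the source must be stopped first
    lxc-copy -n "$SRC" -N "$DST" -B zfs  # copy onto the zfs backing store
else
    echo "lxc-copy not available on this machine, skipping"
fi
```

After verifying the copy boots, the old container could be removed; the plugin's UI may or may not pick up a container created this way, which is part of why "a bit more complicated" applies.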

 

2 hours ago, Joly0 said:

Btw, some "tooltips" (i mean those help texts) for various options would be helpful.

This is also on the to-do list; I haven't had time yet. I recently removed the red text from the settings page, or at least I display it only when someone has set it to /mnt/user/...

I've now added a help text to the settings page. :)

 

The new backing storage type comes in handy if you want to use, for example, the LXC built-in snapshot feature (not snapshots from ZFS itself <- read a few posts back: someone had an issue with snapshots on ZFS that I can't reproduce, but I'm assuming he took the snapshots with the ZFS Master plugin and not with the LXC snapshot function; I tried to reproduce it in the post above yours).

Link to comment
24 minutes ago, ncceylan said:

Please tell me how to write this command into the config, because when I try to use vim to modify /mnt/cache/xxx/config, it causes an error in the LXC container.

A bit more information on your configuration would be nice, can you maybe post your Diagnostics so that I can see why it is not working?

Link to comment
8 minutes ago, ich777 said:

A bit more information on your configuration would be nice, can you maybe post your Diagnostics so that I can see why it is not working?

Found the problem: I copied undefined characters when using my previous editor, which caused an error in the LXC configuration file. Now I have changed the editor and successfully modified the config file. Thank you for your reply!

Link to comment

Something is wrong with my System...

I removed the LXC plugin and deleted my lxc and zfs_lxccontainers share/dataset. After that I installed the plugin again, created the DNS container again, and everything worked. I also updated to 6.12.4.

Now, after a bit over 24h, the same problem again: no Debian logo and no RX/TX. I was able to locate the config file and created a backup, but the container is gone... I also created a backup yesterday, but the Backup tab is also gone.

What is the problem?

 

unraid-diagnostics-20231002-2014.zip

Edited by Roalkege
typo
Link to comment
2 hours ago, Roalkege said:

What is the problem?

You set the lxc share to use the cache with "Yes", which means that when the mover kicks in, the files will be moved to the array and not stay on the cache; that's most likely what causes your issue.

Set the LXC share so that the primary storage is the cache and that it has no secondary storage.

 

I see this in your Diagnostics, which is ultimately wrong:

# Share exists on cache, disk1
...
shareUseCache="yes"
...

The first line means that the share exists on disk1 and on the cache, which is one part of the issue, and the second one indicates that the mover moves all the files to the array; it should be "only" instead of "yes". (BTW, this is something you must have set at some point; by default the share is created to stay on the cache.)
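In other words, the share config on the flash drive should end up looking roughly like this (only `shareUseCache` is confirmed by the snippet above; the `shareCachePool` key and the pool name "cache" are assumptions that may differ on your system):

```
# Share exists on cache only
shareUseCache="only"      # "only" = files stay on the cache pool, mover ignores them
shareCachePool="cache"    # assumed pool name; check your own pool's name
```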

 

This is what it should look like:

[screenshot]

Link to comment
12 minutes ago, Roalkege said:

That must also have been the problem with zfs. Why is the whole thing so strict with permissions etc.?

I'm not 100% sure about that but could be...

 

 

12 minutes ago, Roalkege said:

Is there a way I can use the backup, or set everything up again from scratch?

How did you create the backup? With the integrated Backup function? If yes, try to restore it with the same name. Did you enable the global backup?

Link to comment
1 minute ago, Roalkege said:

I enabled global backup and created the backup with right click on container -> backup

Then go to this tab and click Create Container from Backup:

[screenshot]

 

In the following window make sure to enter the same name and click create (this can take some time):

[screenshot]

 

If that doesn't work you can also do that in your case from the command line with:

lxc-autobackup -r DNS DNS

(the first "DNS" is the backup name and the second one is the new container name)

Link to comment

Just a technical question: does the plugin constantly check whether the lxc folder exists and create it if not? If so, I might have found a bug. I had LXC disabled and my cache drive empty where LXC would store the lxc folder (/mnt/cache/lxc). Now, even though I deleted everything from my cache drive, this folder structure randomly appeared on my cache pool: /mnt/cache/lxc/cache

This made me unable to reformat my cache drive (which I was trying to do). It took me some time to notice the folder on the cache drive.

Link to comment

EDIT: Sorry, this was my fault. The IP I was trying to use was already in use :/

 

How do I force a container to use a particular IP? I added a DHCP reservation on my router for `10.1.1.14` using the correct MAC address, but the container kept using `10.1.2.1` on boot. I tried explicitly setting the IP in the config:

 

lxc.net.0.ipv4.address = 10.1.1.14/16 10.1.255.255

 

But now the container has two IP addresses: the one I want (10.1.1.14) and the one I don't want (10.1.2.1)! 10.1.2.x is my un-reserved DHCP range, so I don't want any servers in that range.
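For what it's worth, pinning an address in the LXC config usually takes the full set of `lxc.net.0.*` keys, and the DHCP client *inside* the container also has to be disabled, or it will still grab a second lease, which matches the two-address symptom described above. A sketch with made-up bridge/gateway values:

```
lxc.net.0.type = veth
lxc.net.0.link = br0                                # assumed bridge name
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.1.1.14/16 10.1.255.255
lxc.net.0.ipv4.gateway = 10.1.0.1                   # hypothetical gateway
```

The simpler alternative, as suggested below, is to configure the static IP inside the container's own network configuration instead.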

Edited by Daniel15
Link to comment
7 hours ago, Joly0 said:

Does the plugin constantly check if the lxc folder exists and if not, creates it?

No.

 

7 hours ago, Joly0 said:

I had lxc disabled and my cache drive empty, where lxc would store the lxc folder (/mnt/cache/lxc).

Did you reboot in between or something like that?

 

7 hours ago, Joly0 said:

Now even though i deleted everything from my cache drive, randomly this folder structure appeared on my cache pool: /mnt/cache/lxc/cache

This is a symlink and should not prevent you from formatting a drive.
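To see for yourself that the stray `cache` entry is just a symlink, and that removing the link itself is harmless, here is a self-contained sketch that uses a scratch directory in place of /mnt/cache/lxc:

```shell
LXCDIR="$(mktemp -d)"            # stand-in for /mnt/cache/lxc
mkdir -p "$LXCDIR/real-target"
ln -s "$LXCDIR/real-target" "$LXCDIR/cache"

ls -l "$LXCDIR"                  # a symlink shows as: cache -> .../real-target

# rm on the link removes only the link, never its target:
[ -L "$LXCDIR/cache" ] && rm "$LXCDIR/cache"
```

So if the `lxc/cache` entry ever blocks a reformat, checking it with `ls -l` and removing the link is safe.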

 

I would need a bit more information about what exactly you did in the process.

 

EDIT: I've now looked a bit deeper into this, and the plugin makes sure that the path exists even if you disable the service. I assume you deleted everything after you disabled the service and not before, correct?

Link to comment
1 hour ago, ich777 said:

I wouldn't recommend doing it like that; it is always better to set a static IP in the container itself than in the config.

 

Please also specify the distribution you are using, or include your Diagnostics next time.

Where? The container generally doesn't have a network config; it's provided by the host. I'm running Debian in the LXC container.

In any case, I got the DHCP assignment working.

Link to comment
7 minutes ago, Daniel15 said:

The container generally doesn't have a network config; it's provided by the host.

That's not entirely true.

 

7 minutes ago, Daniel15 said:

Where?

Open up a terminal from the container; the exact network configuration file depends on the Debian release, but if you are running, for example, Bookworm:

/etc/systemd/network/eth0.network

 

And do something like that:

[Match]
Name=eth0

[Network]
Address=<STATICIP>/24
Gateway=<GATEWAYIP>
DNS=<DNSIP>

 

After that, restart the container and it will use those IPs.
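As a concrete sketch, the unit file above can be generated like this. It writes to a scratch directory here so it can run anywhere; inside the container the path would be /etc/systemd/network, and the gateway/DNS values (10.1.0.1) are hypothetical placeholders you must replace with your own:

```shell
NETDIR="$(mktemp -d)"   # use /etc/systemd/network inside the container

cat > "$NETDIR/eth0.network" <<'EOF'
[Match]
Name=eth0

[Network]
Address=10.1.1.14/16
Gateway=10.1.0.1
DNS=10.1.0.1
EOF

# systemd-networkd picks the file up on the next container restart
grep -q '^Address=10.1.1.14/16$' "$NETDIR/eth0.network" && echo "eth0.network written"
```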

Link to comment
On 9/30/2023 at 6:38 PM, ich777 said:

@Roalkege so now I've tested it, I completely wiped my cache drive and formatted it with ZFS and put my LXC directory in /mnt/cache/lxc/:

[...]

EDIT: I even tried another reboot now with the snapshots in place and everything is working as expected.

 

Does this only work with Unraid >=6.12?

 

Link to comment
