
[Plugin] LXC Plugin


Recommended Posts

6 hours ago, ich777 said:

EDIT: I've now looked a bit deeper into this, and the plugin makes sure that the path exists even if you disable the service. But I assume you deleted everything after you disabled the service and not before disabling it, correct?

Yes, exactly. I disabled Docker, LXC and VMs, deleted or moved everything from my cache pool to my array and tried to format my pool, but couldn't because that path existed. After deleting/moving, I made sure no files were on my cache pool anymore.

Link to comment
11 minutes ago, Joly0 said:

I disabled Docker, LXC and VMs, deleted or moved everything from my cache pool to my array and tried to format my pool

If you did it in this specific order, then this path should get deleted too. However, if you were using the Mover, it may be that it didn't move this directory because it's a symlink.

 

13 minutes ago, Joly0 said:

After deleting/moving, I made sure no files were on my cache pool anymore.

Did you maybe stop the array and start it up again somewhere in between this process? The plugin also creates the directory when the array is started <- I will add a check for whether the service is enabled, so the directory is not created if it is disabled.

Link to comment
Just now, ich777 said:

Did you maybe stop the array and start it up again somewhere in between this process? The plugin also creates the directory when the array is started <- I will add a check for whether the service is enabled, so the directory is not created if it is disabled.

I don't think I stopped and started the array in between, but I'm not 100% sure. I have looked through the source code and have seen that the directory gets created when the array starts. I also thought it might be a good idea to add a check, but I first wanted to ask here whether this is a bug or not.

  • Like 1
Link to comment
11 minutes ago, Joly0 said:

but I first wanted to ask here whether this is a bug or not.

I've now changed what's executed on array start, so that should fix your issue.

 

I will maybe push the update today; I'm not entirely sure whether I'll find anything else that I want to change... :D

 

Anyways, thank you for the report!

Link to comment

Sorry if this has been answered here before but I couldn't find anything similar when searching the forum.

I'm trying to set up my first container in LXC and get the following error when I try to create the container:

"mkdir: cannot create directory '/var/cache/lxc': Too many levels of symbolic links"

Any idea what I can do to fix this?
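For reference, a symlink loop like this can be confirmed from the Unraid terminal before applying a fix; a minimal diagnostic sketch (diagnosis only, the actual fix is the command linked in the reply below):

# Show whether /var/cache/lxc is a symlink and try to resolve it;
# a loop fails to canonicalize ("Too many levels of symbolic links").
ls -ld /var/cache/lxc
readlink -f /var/cache/lxc || echo "symlink chain does not resolve"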

 

01 Error message when trying to create container


 

02 LXC settings


 

03 LXC share settings


 

04 drive setup using ZFS if that has anything to do with it


 

05 settings I used to create container


Edited by Nevis
added captions to screenshots
Link to comment
12 minutes ago, Nevis said:

Sorry if this has been answered here before but I couldn't find anything similar when searching the forum.

I don't know why that happens on some systems and can't reproduce it; please try the command from this post:

(simply copy and paste it)

  • Like 1
Link to comment
11 minutes ago, ich777 said:

I don't know why that happens on some systems and can't reproduce it; please try the command from this post:

(simply copy and paste it)

 

Thanks for the super fast reply. Seems like I didn't check through the thread thoroughly enough, but that did it. The commands worked and I got my container running.


Edited by Nevis
  • Like 1
Link to comment
4 minutes ago, Nevis said:

Thanks for the super fast reply. Seems like I didn't check through the thread thoroughly enough, but that did it. The commands worked and I got my container running.

No worries, this issue pops up from time to time on new installations, but I really don't know why...

As said, I can't reproduce this over here... :/

Link to comment
46 minutes ago, Nevis said:

04 drive setup using ZFS if that has anything to do with it

Did you maybe create the dataset for LXC after you started and configured everything in the plugin?

If so, this is most likely why you got this error.

 

A restart of the LXC service would have solved that too. Please let me know if the dataset was created after you configured everything.

 

46 minutes ago, Nevis said:

02 LXC settings

Please note that you can also use ZFS here as the backing storage type; this will create a dedicated dataset where the containers/snapshots are stored.

Link to comment

The dataset got created automatically when I created the share. At first I tried to use ZFS as the save method, but I looked at some of the previous posts where users had created snapshots with the ZFS Master plugin and that had damaged their containers, so I switched to "directory" instead. But I did try to create it with ZFS enabled at first.

 

I'm using SpaceInvader One's script to turn directories into datasets, described in this video.

 

I'm also using his script to sync the dataset to the array.

 

This uses Sanoid, so I figured it was better not to mix in another kind of ZFS snapshotting, and instead do a normal backup (directories) and do snapshotting using the setup I use for my appdata, for instance.

 

The only downside is that I get a single snapshot which contains all my LXC containers, so if I have many in the future and want to roll back, I roll back everything, unless I use the snapshots which come with the LXC plugin.

Edited by Nevis
Link to comment
32 minutes ago, Nevis said:

The only downside is that I get a single snapshot which contains all my LXC containers, so if I have many in the future and want to roll back, I roll back everything, unless I use the snapshots which come with the LXC plugin.

You can also take snapshots with lxc-autosnapshot from the command line, maybe with a user script.

 

lxc-autosnapshot is a unique Unraid feature which I wrote specifically for that use case.
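The exact options of lxc-autosnapshot aren't shown in this thread, so the per-container call below is an assumption (check its --help); as a User Script, taking individual snapshots could look roughly like this sketch:

#!/bin/bash
# Hypothetical User Scripts entry: snapshot selected containers one by one
# instead of one big ZFS snapshot. Container names and the exact
# lxc-autosnapshot syntax need to be verified on your system.
for container in Debian-Test HomeAssistant; do
    lxc-autosnapshot "$container"
done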

 

 

Sorry, but I'm not that deep into ZFS because I find it a bit overkill for home use, at least for what most people do with it... :D

 

So to speak, the dataset was there before you configured the plugin, correct? If not, and the dataset was created after the plugin created the folder, this is the culprit.

Link to comment

Yeah, tell me about it. I pretty much wanted ZFS for the snapshot capabilities, since I got tired of Home Assistant and Nextcloud breaking themselves between updates; sometimes it feels like I don't change anything and they still end up committing seppuku out of pure spite. Then again, I had my own adventure with ZFS when I almost nuked my appdata; documented that here. Luckily @JorgeB helped me out with it and pointed me in the right direction.

 

But to be more precise: the original dataset dockers/lxc appeared when I made the share on my ZFS drive. But when I first tried to create an LXC container I had the ZFS option on. That created a separate dataset dockers/zfs_lxccontainers/node01 and still gave me the same error I mentioned in previous posts. Then I destroyed the dockers/zfs_lxccontainers/node01 dataset, changed the LXC save setting to directory and tried again. Still got the same error. At that point I started to go through the support thread to check if someone else had come across the same message.

Link to comment

Great job patiently explaining how to get LXC up and running.

 

Looking for advice - hope this LXC forum thread is the right place...

 

I was trying to find out more about LXC (on github.com/lxc) and saw distrobuilder; not sure if this is overkill for a Dockerfile replacement. I found out the roundabout way from this discussion!

https://github.com/lxc/distrobuilder

 

I was looking for a GUI manager for LXC and not finding much out there when I saw that LinuxContainers.org released Incus yesterday.

https://github.com/lxc/incus

 

The 'try-it' example looks super simple and also really easy to script (in place of a Dockerfile).

https://linuxcontainers.org/incus/try-it/

 

Is this an option?

 

I was looking at Podman (in a VM) to ease my multiple custom Dockerfile pains, so I'm still looking around.

https://podman.io

 

 

Link to comment
4 hours ago, Nevis said:

I got tired of Home Assistant and Nextcloud breaking themselves between updates

Same for me... :)


 

I already have container "templates" which may become available through the CA App; still many things to sort out.

 

However, LXC is cool because you are in control, but of course the major downside is exactly that: you are in control, and it is a bit more to maintain than a Docker container. But as for my HomeAssistant container, it has a built-in updater which uses cron and is based on HomeAssistant Core; if you are interested, let me know and write me a PM.

 

4 hours ago, Nevis said:

But to be more precise:

Thanks for the detailed explanation, that helps a lot, but I'm still not sure why this happens on some systems and not on others.

 

However, from my testing it is safe to use ZFS as the backing storage type; I use it on a daily basis <- LXC also supports BTRFS as a backing storage type, which also works with snapshots and send/receive.

 

4 hours ago, Nevis said:

dockers/zfs_lxccontainers/node01

For ZFS I have to use a secondary dataset, because it could ultimately mess up other containers if I put it directly into your main LXC path.

Link to comment
48 minutes ago, LoneStar said:

Great job patiently explaining how to get LXC up and running.

Yeah, a few things have changed since I created the plugin, but it should be easier now, and there are also help texts all over the Settings page.

 

48 minutes ago, LoneStar said:

not sure if this is overkill for a Dockerfile replacement

I don't understand what you mean by that...

 

Distrobuilder is used to create custom images for LXC, but not images with, for example, HomeAssistant or PiHole installed.

 

However, I have a few templates that you can try out which are specifically created for Unraid and for installation through the CA App, but many things need to be sorted out before this can be released.

 

48 minutes ago, LoneStar said:

I was looking for a GUI manager for LXC and not finding much out there when I saw that LinuxContainers.org released Incus yesterday.

https://github.com/lxc/incus

This looks very much like an LXD replacement, because LXD was ripped out of the hands of the community and is now maintained, more or less closed source, by Canonical.

I never used LXD because it was based on Python (which would introduce Python itself as a dependency for LXC, which I never wanted) and because it was already on my radar that LXD was being commercialized.

 

Don't forget that if you use Distrobuilder you also need a way of providing the images to the users. I've already come up with ideas for how that is achievable for Unraid, so that nearly everyone can publish containers.

 

You already have a GUI in Unraid where you can manage your containers; what exactly do you want to do with it?

 

What do you need Incus for? You can already do most of it in the GUI.

I'm also planning on releasing some kind of help page listing some very useful examples, like how to pass through a TUN device, an Intel iGPU, ... but that is something for later down the road.

 

 

LXC is basically a VM with the advantages of Docker, so to speak: shared resources, but you are in charge of maintaining it. Don't mix it up with Docker, where a container is easy to update; an LXC container is not easy to update, you have to maintain it and make sure that everything is up to date. So to speak, it comes with the downsides of a VM (which could also be an upside).

Link to comment

Again, thank you for setting time aside to answer. I've been looking for a reason to play with LXC for a long time, and this plugin made it simple enough for me to start (and learn).

 

16 hours ago, ich777 said:

Distrobuilder is used to create custom images for LXC, but not images with, for example, HomeAssistant or PiHole installed.

Thanks for clarifying that it is a "distro" "builder" for the OS (e.g. Debian). I expected as much, that it is NOT a replacement for a Dockerfile. That's why I was looking for a "packer"/"terraform"/"ansible"/... alternative for updating the LXC content and stumbled upon Incus. Not exactly what I'm looking for; I will explain later...

 

 

15 hours ago, ich777 said:

I'm also planning on releasing some kind of help page listing some very useful examples, like how to pass through a TUN device, an Intel iGPU, ... but that is something for later down the road.

That sounds like something I'm looking for and would be a great help. I would put my hand up if you need reviewers/testers.

 

 

16 hours ago, ich777 said:

You already have a GUI in Unraid where you can manage your containers; what exactly do you want to do with it?

Hope I can answer your question without getting too distracted (the comments below are NOT about the LXC plugin, so if I miss something please correct me)...


I prefer a simple & automated way to maintain custom "containers".

 

For example - take "mkdocs-material" (documenting code with a Python package) using a custom Dockerfile:

- The custom Dockerfile has a bunch of extra steps & plugins

- Unraid Stacks is out (unless there is a way to get the Dockerfile rebuilt when a new "mkdocs-material" is released)

- and Portainer is out (Portainer also does not rebuild the Dockerfile).

- This means I regularly check if there are updates and then rebuild & redeploy

- Rebuilding is not a problem (I have some scripting around it), but it is elementary and manual today. I can spend time cleaning it all up, or look for a ready solution

 

 

LXC: add the OS maintenance steps, otherwise the same steps

- in LXC it would require managing Python 3.11+ releases and the "pyproject.toml" / "requirements.txt" to rebuild modules & steps. 

 

Maybe Docker is a more suitable solution for this example, but I would like to see if LXC can simplify the steps (allowing me to evaluate the best fit). That's why I got all excited when I saw the "cloud-init" (cloud-config) discussion:

https://discuss.linuxcontainers.org/t/newbie-question-whats-the-lxc-version-of-a-dockerfile/6487/5

 

I have a number of other Linux (only) processes I would like to automate, and I'm particularly interested in how "virt-manager" (https://hub.docker.com/r/mber5/virt-manager) enables a GUI in a container, thereby eliminating a full VM.

 

I hope what I'm looking for makes (more) sense; suggestions are welcome, even if it's a pointer to sites to go read.

Link to comment
8 hours ago, LoneStar said:

I hope what I'm looking for makes (more) sense; suggestions are welcome, even if it's a pointer to sites to go read.

First of all, automation always has its up- and downsides, and that is a completely different discussion which does not belong in this thread.

 

However, you've mentioned a GUI above, which neither LXD nor Incus is; they are just, let's call it, a "management interface" for LXC from the CLI.

 

Sure, you can install various dependencies through "cloud-init", but that doesn't always work because of the many, many, many different distributions and how they install/manage their packages, and of course it's more meant for initialization, so that you can use the container as a LAMP stack or similar.

After you've set up the container with "cloud-init", it is also up to you to maintain the applications running inside that container.

...and if you plan on always deploying a new container, that is not so easily possible with LXC, because you always have to destroy the container, rebuild it, and possibly mount a path where the data persists <- LXC is not intended to be used like that.

Set up a cron schedule that runs a script and sends the output somewhere and you are good to go; you could even do that through a User Script within Unraid if you want to get fancy.
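A minimal sketch of such a script, assuming a container named "mkdocs" and a hypothetical log path (lxc-attach is the stock LXC way to run a command inside a running container):

#!/bin/bash
# Report pending package updates from inside the container without
# installing anything; schedule this via cron or the User Scripts plugin.
CONTAINER="mkdocs"
lxc-attach -n "$CONTAINER" -- apt-get update -qq
lxc-attach -n "$CONTAINER" -- apt list --upgradable 2>/dev/null >> /boot/logs/lxc-updates.log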

 

I completely understand what you are trying to do, but that always introduces some maintenance, and I would never recommend doing, for example, a distribution upgrade or even package updates automatically, because, you know, some thing(s) will most certainly go wrong... :D

 

Why not use Ansible or something like that to maintain the containers?

 

For example, if you look at most of my Docker containers, they are self-maintained, meaning that they check on every restart if a newer version of the application is available and update the application if necessary with a relatively simple script.
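The scripts themselves aren't shown in this thread, but the check-on-start pattern is roughly this sketch (URL and paths are hypothetical):

#!/bin/bash
# Compare the installed version against the latest published one and
# update only when they differ; run this at container start.
INSTALLED="$(cat /opt/app/VERSION 2>/dev/null)"
LATEST="$(curl -fsSL https://example.com/app/latest.txt)"
if [ -n "$LATEST" ] && [ "$INSTALLED" != "$LATEST" ]; then
    curl -fsSL "https://example.com/app/app-${LATEST}.tar.gz" | tar -xzf - -C /opt/app
    echo "$LATEST" > /opt/app/VERSION
fi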

 

The test containers that I've made for LXC include PiHole, AdGuard Home (these two because it is way easier to set them up in an LXC container than in Docker), HomeAssistant and AMP.

 

All the containers have a cron schedule set up which runs an update of the various applications running inside the container, but not of the base packages themselves (again, I've had some horrible experiences in the past). It is also possible for the user to disable the cron schedule and do everything manually.

 

Look, for example, at this repository; it is basically done on the frontend, but the backend needs a bit more love (if you want to try this, send me a short PM and I will tell you how to install it, it's pretty easy). ;)

 

8 hours ago, LoneStar said:

I have a number of other Linux (only) processes I would like to automate, and I'm particularly interested in how "virt-manager" (https://hub.docker.com/r/mber5/virt-manager) enables a GUI in a container, thereby eliminating a full VM.

I'm not familiar with that, but it looks pretty much the same as the LXC GUI in Unraid: you can start/stop/freeze/kill the container, open up a terminal, set/limit resources (through the config), ...

 

 

I hope I've covered all your points and that this helps; however, I would recommend that we continue this conversation elsewhere, because this is, strictly speaking, the support thread for the LXC plugin.

Link to comment

Is it possible to assign a static IP to an LXC container via the network configuration? Or, even better, is it possible to isolate a container from the local LAN using these settings? I wish to create a Linux container to run a specific program, but I don't want it to have access to my local LAN. It should only have internet access and allow SSH in from the local LAN, if possible.

 

This is how the LXC container network configuration looks by default.

# Network configuration
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0

 

Technically I could use iptables within the container OS, but I would prefer an outside solution for the isolation rather than trusting software modifications made to the operating system running inside the container.
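For illustration, the in-container iptables approach mentioned above could look roughly like this (a sketch only; 192.168.1.0/24 is a placeholder for the actual LAN subnet):

# Allow SSH in from the LAN and replies to it, but block the container
# from opening new connections to the LAN; internet traffic is left
# untouched by these rules.
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT
iptables -A OUTPUT -d 192.168.1.0/24 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -d 192.168.1.0/24 -j DROP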

Link to comment
1 hour ago, Nevis said:

Is it possible to assign a static IP to an LXC container via the network configuration? Or, even better, is it possible to isolate a container from the local LAN using these settings?

There are many ways to achieve what you want to do.

You can change the network for each individual container manually through the config file.

However, I would strongly recommend that you set the static IP address inside the container and not in the container's configuration file, because I had some issues with certain distributions in my past testing.

To do that for a Debian-based container (Bullseye+), here is a quick tutorial: Click
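The linked tutorial isn't quoted here, but a classic ifupdown configuration on a Debian-based container looks roughly like this (addresses are placeholders):

# /etc/network/interfaces inside the container
auto eth0
iface eth0 inet static
    address 192.168.1.50/24
    gateway 192.168.1.1
    # dns-nameservers needs the resolvconf package; otherwise set the
    # nameserver in /etc/resolv.conf instead
    dns-nameservers 192.168.1.1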

 

It is possible to create a VLAN; you can also use another physical NIC for the container, macvlan, a proxy, and so on...

 

You can read more about that over here: Click

(just scroll down to where it says Network to get all the available options)

 

If you need help with anything, please feel free to reach out again, I'm here to help. :)

Link to comment
7 minutes ago, L0rdRaiden said:

In Unraid, mixing Docker macvlan with Linux bridges causes "call traces"; will the same happen with LXC?

That is a general Linux issue, not related to Unraid: you can't use a bridge with macvlan.

 

Please explain in a bit more detail what you want to achieve...

 

There is no issue using LXC with a macvlan bridge as long as you disable bridging in the network settings of Unraid:
[screenshot: Unraid network settings with bridging disabled]

 

After that, go to the LXC Settings page, change the default network interface to either eth0 or vhost0 (I would recommend that you use eth0), and if you already have containers that use the default veth setting, click the little checkbox next to the network interface (this will change the network configuration of existing containers to macvlan):

[screenshot: LXC Settings page showing the default network interface and the checkbox next to it]

 

So to speak, this will change the default veth configuration to a macvlan configuration, as sketched below.

 

(and of course, if you've checked the checkbox next to the network interface, the networks of all existing containers as well).
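The before/after screenshots aren't reproduced here; based on the default configuration quoted earlier in the thread, the change amounts to roughly the following (the exact macvlan lines are an assumption, so compare your container's config after toggling the setting):

# Default (veth) network configuration
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0

# Macvlan variant attached to the physical NIC
lxc.net.0.type = macvlan
lxc.net.0.macvlan.mode = bridge
lxc.net.0.flags = up
lxc.net.0.link = eth0
lxc.net.0.name = eth0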

Link to comment

Why do you recommend using eth0 instead of vhost0?

 

I'm planning to build this; the only change is that docker0/1/2 will actually be LXC containers with Docker inside.

Each LXC container would have two interfaces assigned: one for administration of the LXC OS and another one for Docker.

Ideally I would like to use macvlan in Docker in order to have different IPs per container, but I'm not sure yet how this will play over LXC. Maybe passing the virtual NICs through into LXC will facilitate everything.

 

Or maybe, instead of using different bridges to split the traffic, I could create VLANs on eth0/vhost0, assign two of them to each LXC container, and use macvlan on top of that for LXC and the Docker inside LXC.

 

I don't yet have a clear picture of the approach, so unless I get some help I guess I will get there via trial and error xD

 

[diagram: the planned network layout]

 

I might find other problems with the storage at some point

https://github.com/nextcloud/all-in-one/discussions/1490

Edited by L0rdRaiden
Link to comment
1 minute ago, L0rdRaiden said:

Why do you recommend using eth0 instead of vhost0?

Because vhost0 is exclusively used for VMs and would only show you the VMs' traffic on the LXC page, which is, strictly speaking, wrong. If you use eth0 you'll see the whole traffic of the interface, and I would also rather recommend attaching the macvlan interface to the physical NIC than to another virtual interface.

 

3 minutes ago, L0rdRaiden said:

Ideally I would like to use macvlan in Docker in order to have different IPs per container, but I'm not sure yet how this will play over LXC.

This is possible, set it up as usual.

 

3 minutes ago, L0rdRaiden said:

I don't yet have a clear picture of the approach, so unless I get some help I guess I will get there via trial and error xD

TBH, if this is a home setup, this seems like a pretty complicated solution to me, and I'm also not a big fan of virtualizing a firewall, but that's up to you.

 

I really can't help with a setup like the one you want, because there are too many variables that I can't know, but in general it should be possible.

You can see all the different options for how you can configure each individual container here: Click

(just scroll down to where it says Network to get all the available options)

 

As long as you don't mix a bridged interface with a macvlan bridge, you should be good to go.

  • Like 1
Link to comment
1 hour ago, Brramble said:

Does this support limiting the bandwidth on containers?

You can do that in the container with `tc`; of course, you first have to install it (I think in most distributions it's included in iproute or iproute2), configure it, and run it with your preferences.
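For example, a simple token bucket filter caps egress bandwidth; the numbers below are placeholders to adapt:

# Inside the container, with iproute2 installed: limit eth0 egress
# to roughly 50 Mbit/s.
tc qdisc add dev eth0 root tbf rate 50mbit burst 32kbit latency 400ms
# Inspect or remove the limit again:
tc qdisc show dev eth0
tc qdisc del dev eth0 root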

 

1 hour ago, Brramble said:

Do you know if we are able to make unprivileged containers yet?

Of course I know that, and the answer is: not at this time.

As said in a previous post, this is something for way later (no, not in the next months, I have absolutely no timeframe on that) but it is on the TODO list.

Unprivileged containers are a bit harder to do on Unraid because Unraid runs from RAM, among various other reasons.

 

Hope that helps! ...at least a bit.

  • Like 1
  • Thanks 1
Link to comment
