[Plugin] LXC Plugin



8 hours ago, ich777 said:

After that add the line:

lxc.mount.entry = /mnt/user/YOURSHARE mnt/PATHINLXC none bind 0 0

 

Please make sure that, as in this example, the path /mnt/PATHINLXC exists in the first place, that you set the permissions correctly, and that you create a user in the LXC container that matches the host; otherwise you will run into permission issues if something else is using the files that you mounted.
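A sketch of that preparation, assuming the plugin's default rootfs location and Unraid's 99:100 (nobody:users) IDs - adjust both to your setup:

```shell
# Prepare the mount point inside the container's rootfs before adding
# the lxc.mount.entry line (the rootfs path below is an assumption).
prep_mountpoint() {
  local rootfs="$1" path="$2"
  mkdir -p "$rootfs/$path"
  # Match host-side ownership (99:100 = Unraid's nobody:users) so both
  # sides can read/write; needs root, so ignore failures when unprivileged.
  chown 99:100 "$rootfs/$path" 2>/dev/null || true
}

# e.g. prep_mountpoint /mnt/cache/lxc/mycontainer/rootfs mnt/PATHINLXC
```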


What would mimic the Read/Write - Slave access type? (Two of my maps are remote shares - /mnt/remotes/...)

Alright, so mkdir first in the LXC container - as root? The host is the root user AFAIK.

EDIT: Hmm... I've added 3 and none worked (not seeing any mounted data in the container).
Should it be mnt/user/YOURSHARE mnt/PATHINLXC or /mnt/user/YOURSHARE /mnt/PATHINLXC?

Edited by Econaut
Link to comment
14 hours ago, Econaut said:

Should it be mnt/user/YOURSHARE mnt/PATHINLXC or /mnt/user/YOURSHARE /mnt/PATHINLXC?

As I wrote above, it is not a typo that the first / is missing at PATHINLXC:

lxc.mount.entry = /mnt/user/YOURSHARE mnt/PATHINLXC none bind 0 0
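For clarity (this is standard LXC behavior, not specific to this plugin): the second field is resolved relative to the container's rootfs, which is why it has no leading slash, while the host path stays absolute:

```
#                 host path (absolute)   container path (relative to rootfs)
lxc.mount.entry = /mnt/user/YOURSHARE    mnt/PATHINLXC none bind 0 0
```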

 

14 hours ago, Econaut said:

What would mimic the Read/Write - Slave access type?

There is nothing like that but it should work as usual.

 

 

I can also tell for sure that it is working, since I'm using a path from UD in an LXC container too.

  • Thanks 1
Link to comment
3 hours ago, ich777 said:

As I wrote above, it is not a typo that the first / is missing at PATHINLXC:

lxc.mount.entry = /mnt/user/YOURSHARE mnt/PATHINLXC none bind 0 0

 

There is nothing like that but it should work as usual.

 

 

I can also tell for sure that it is working, since I'm using a path from UD in an LXC container too.

Ahh, I noticed the missing / but thought it was a typo - thanks for clarifying that it wasn't, haha. Working now!

  • Like 1
Link to comment

So I hit the same Python 3.10 requirement (glibc 2.35+) on Debian Bullseye (no surprise). I upgraded from Bullseye to Bookworm, added the unraidcgroup2 parameter, and rebooted. Everything looks fine in the system except that the WAN stopped working (apt, wget, etc.). It worked fine prior to the upgrade.

LAN connections still work fine.

Looks like the resolv.conf file (resolve service) got removed. /run/systemd/resolve is missing.

Manually grabbed the .deb and installed but domain resolution is still broken >.<
Any ideas?

Manually adding IP <> domain entries into /etc/hosts works but is unsustainable.
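A less brittle stopgap than /etc/hosts entries, assuming the container has merely lost its resolver config, is a static resolv.conf (a sketch; the resolver IP is just an example):

```shell
# Replace a dangling /etc/resolv.conf (it may be a symlink into the
# missing /run/systemd/resolve) with a static file. The path is a
# parameter so the sketch can be tried safely outside a container.
fix_resolv() {
  local f="${1:-/etc/resolv.conf}"
  rm -f "$f"                            # drop the dead symlink, if any
  printf 'nameserver 1.1.1.1\n' > "$f"  # example resolver; pick your own
}

# inside the container: fix_resolv
```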

Edited by Econaut
Link to comment
1 hour ago, Econaut said:

So I hit the same Python 3.10 requirement (glibc 2.35+) on Debian Bullseye (no surprise)

Please post your diagnostics, something seems wrong with your system.

 

I now have two users who are having zero issues with the container and no issues with LXC either.

 

1 hour ago, Econaut said:

Looks like the resolv.conf file (resolve service) got removed. /run/systemd/resolve is missing.

Manually grabbed the .deb and installed but domain resolution is still broken >.<
Any ideas?

In the container or on Unraid?

 

You have to understand that cgroupv2 only changes some low level permissions and nothing more…

Link to comment

The issue is with the container post-upgrade (Bullseye > Bookworm). Started a new Ubuntu Jammy instance and it has working DNS. So for now I am just going to re-configure that one, reboot, and hope it sticks.

Seems like the issue (if you want to reproduce it) was with the upgrade itself.

EDIT: Ran into another issue (Ubuntu Jammy container won't start up again after the initial startup) - I added the lxc.mount.entry lines to the config after shutting it down. The Debian Bookworm container starts up with the same entries.

Edited by Econaut
Link to comment
11 minutes ago, Econaut said:

EDIT: Ran into another issue (Ubuntu Jammy container won't start up again after the initial startup) - I added the lxc.mount.entry lines to the config after shutting it down. The Debian Bookworm container starts up with the same entries.

Please post your Diagnostics, I can't even reproduce that.

 

I have no issue mounting a directory in a Ubuntu Jammy container (even tested to create a file in this directory from within the container and I can also access it on the host).

Link to comment
36 minutes ago, ich777 said:

I have no issue mounting a directory in a Ubuntu Jammy container (even tested to create a file in this directory from within the container and I can also access it on the host).

I can't blindly post the diagnostics without picking through them thoroughly, so I'm still working on that. Are there segments in particular you would look at?

I can say that by commenting out the lxc.mount entries and trying to start, the container starts fine again.

These are the three mount entries (which worked fine in the Debian container).
(Created the media/subdir1, etc. paths - all owned by root - same as in the Debian container.)
 

Quote

lxc.mount.entry = /mnt/remotes/[IP]_share media/subdir1 none bind 0 0
lxc.mount.entry = /mnt/remotes/[IP]_share2 media/subdir2 none bind 0 0
lxc.mount.entry = /mnt/user/named_cache_share media/subdir3 none bind 0 0
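The container-side paths for the three entries above can be pre-created with something like this (the rootfs location is an assumption - point it at your container's real rootfs):

```shell
# Pre-create the container-side directories for the bind mounts.
# ROOTFS defaults to a demo path - replace it with your real rootfs,
# e.g. /mnt/cache/lxc/<container>/rootfs.
ROOTFS="${ROOTFS:-/tmp/demo-rootfs}"

for d in media/subdir1 media/subdir2 media/subdir3; do
  mkdir -p "$ROOTFS/$d"
done
```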


I do see a warning in the Fix Common Problems plugin (ignored, as this should be the proper default config for LXC):
 

Quote

 

Share lxc set to not use the cache, but files / folders exist on the cache drive. You should change the share's settings appropriately,

or use the dolphin / krusader docker applications to move the offending files accordingly. Note that there are some valid use cases for a set up like this.

 


I set up LXC with /mnt/cache/lxc/ (cache:no)

EDIT: Had a typo in the 3rd entry's container map name... Definitely need those pre-created correctly. Would be nice if there were a relevant error for that somewhere :)

Edited by Econaut
Link to comment
29 minutes ago, Econaut said:

EDIT: Had a typo in the 3rd entry's container map name... Definitely need those pre-created correctly. Would be nice if there were a relevant error for that somewhere :)

I can't display an error message when there is none to be found...

 

You can only start the container with:

lxc-start -F CONTAINERNAME

and hope that you see an error message, but I don't think you will see one if you have a typo somewhere.
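If plain -F is not enough, lxc-start can also write a debug log (a sketch using its standard logging options; the container name is a placeholder):

```shell
# Run in the foreground and additionally write a DEBUG-level log
# (-l sets the log priority, -o the log file; see lxc-start(1)).
lxc-start -n CONTAINERNAME -F -l DEBUG -o /tmp/CONTAINERNAME.log
```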

 

Nice that you've figured it out.

Link to comment
2 hours ago, trott said:

I'd like to move my LXC container to another disk, just want to check what's the proper way to do that

Use the mv command, but please make sure that you have a backup of the whole LXC directory where it's currently located.

 

Do something like this:

  1. First stop the LXC Service in the Settings tab
  2. Move the whole LXC folder from the current location to the new location with something like:
    mv /mnt/disk1/lxc /mnt/disk2/
    (please keep in mind that the move can take a really long time; keep the Unraid terminal window in the foreground, or even better connect to the server through SSH, and please don't close it!!!)
  3. Change the path to match the new path in the LXC Settings
  4. Start the LXC Service

 

As said above, I've never done this myself, but the above should at least work (make sure, as pointed out above, that you create a backup of the old directory first!).
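Steps 1-4 can be sketched as a script (the paths are the examples from above; the backup copy follows the advice given):

```shell
# Back up, then move the LXC directory (stop the LXC service first,
# and update the path in the LXC settings afterwards).
move_lxc_dir() {
  local src="$1" dst="$2"
  cp -a "$src" "$src.bak"   # backup of the whole LXC directory first
  mv "$src" "$dst"/         # can take a long time - run over SSH
}

# e.g. move_lxc_dir /mnt/disk1/lxc /mnt/disk2
```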

 

Link to comment
  • 2 weeks later...
6 hours ago, fabricionaweb said:

Is it possible to create unprivileged containers?

It should be possible and it is also on my roadmap, but that is something for much later, because some changes need to be made to Unraid and also to how things work in the plugin itself.

  • Like 1
Link to comment
1 hour ago, wtfcr0w said:

Can this be done?

Yes.

 

1 hour ago, wtfcr0w said:

I would like to know if it is possible to pass through /var/run/docker.sock from the host machine to a container

May I first ask why you want to do that? Do you want to use Docker from the host for developing containers? If yes, I would rather recommend that you install Docker inside the LXC container; that would be the preferred way to go (I do this myself in an LXC container with Docker installed, to build my containers locally and upload them to Docker Hub and GHCR).

 

I really don't recommend mounting the docker.sock from the host; it is a huge security concern and isn't recommended by the devs or by me...

 

Installing Docker is as easy as:

#!/bin/bash
#Install Docker
cd /tmp
curl -fsSL https://get.docker.com -o get-docker.sh
chmod +x /tmp/get-docker.sh
/tmp/get-docker.sh
rm /tmp/get-docker.sh

(please note that the above steps should be executed as root; you may have to add "sudo" to the line /tmp/get-docker.sh depending on the distribution that you are using)

  • Thanks 1
Link to comment
Quote

May I first ask why you want to do that? Do you want to use Docker from the host for developing container?

I am developing a Docker front-end dashboard and wanted to see about passing through the socket to use, rather than using nested containers. I have plenty of containers on the host machine and didn't really want to run all the same containers in the LXC as well, since I would be using the APIs to pull info from the Docker containers.

Link to comment
Quote

I really don't recommend mounting the docker.sock from the host; it is a huge security concern and isn't recommended by the devs or by me...

I decided to forego mounting the Docker socket in the LXC and ended up opening up the TCP port to the Docker daemon; I am connecting to it through ip/port to get all of my containers. Thank you for the help anyway.
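For anyone following along, the client side of that setup looks roughly like this (the IP and port are examples; note that plain TCP on 2375 is unauthenticated, so keep it on a trusted LAN or use TLS on 2376):

```shell
# Point the docker CLI at the remote daemon instead of the local socket
export DOCKER_HOST=tcp://192.168.1.10:2375
docker ps

# or query the Engine API directly
curl -s http://192.168.1.10:2375/containers/json
```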

  • Like 1
Link to comment
1 hour ago, Patty92 said:

- LXC containers are visible and active again

Were they not active before? Or do you mean that the service got activated again when you went to the Dashboard?

 

Did you click update so that the service was disabled?

You actually don't have to stop the containers first; this is done automatically when you stop the service.

Link to comment
1 hour ago, ich777 said:

Were they not active before? Or do you mean that the service got activated again when you went to the Dashboard?

I first terminated the container and then the service.

 

1 hour ago, ich777 said:

Did you click update so that the service was disabled?

Yes, I did.

 

1 hour ago, ich777 said:

Did you click update so that the service was disabled?

You actually don't have to stop the containers first; this is done automatically when you stop the service.

It was a coincidence that I ended up with this combination.

 

Again, a little more detail on the sequence:

  • Stop LXC container
  • Switch to the Dashboard - the container was stopped
  • Switch to LXC settings - Enable LXC - no
  • Click Update
  • Switch to dashboard
  • LXC containers are visible and active again

 

I would argue that if I disable the service, the containers should not start again.
Maybe even remove the display from the dashboard when the service is disabled.

 

I hope this is understandable so far.

Link to comment
9 minutes ago, Patty92 said:
  • Stop LXC container
  • Switch to the Dashboard - the container was stopped
  • Switch to LXC settings - Enable LXC - no
  • Click Update
  • Switch to dashboard
  • LXC containers are visible and active again

I have to look into what's causing that.

This seems like a bug to me, that the service, or rather the containers, get started after you visit the dashboard (even if the service is disabled).

I'm assuming that the containers have Autostart enabled, correct?

 

12 minutes ago, Patty92 said:

I would argue that if I disable the service, the containers should not start again.
Maybe even remove the display from the dashboard when the service is disabled.

Please keep in mind that the Dashboard was added recently and I haven't had time to fully test everything yet, but thank you for the report, very much appreciated.

Link to comment
10 minutes ago, ich777 said:

I'm assuming that the containers have Autostart enabled correct?

That's right.

 

10 minutes ago, ich777 said:

Please keep in mind that the Dashboard was added recently and I haven't had time to fully test everything yet, but thank you for the report, very much appreciated.

No problem at all, I just noticed it and thought this might be the best place to report a "possible error".

 

Greetings Patty

  • Thanks 1
Link to comment
58 minutes ago, ich777 said:

Please update the plugin to the latest version.

  • If you disable the LXC service, the LXC card is no longer visible
  • Containers are now properly stopped after you disable the LXC service

Wonderful, it works. 👍
Thanks for the quick implementation. 🙃

  • Like 1
Link to comment
  • 3 weeks later...

Hi,

 

I wanted to use Nix / NixOS and couldn't find any info in this thread.

 

So here are the steps I took to get NixOS working. It's based on the wiki here: https://nixos.wiki/wiki/Proxmox_Linux_Container

These steps could be streamlined by someone else, but this is what I did. If you have any suggestions on how I can improve this procedure, please let me know.

 

1. Download the container tarball from Hydra as described on the NixOS wiki and save it to your Unraid server.

2. Set up a new LXC container, such as Ubuntu Jammy. I mostly did this to get the config file. I named my container NixOS.

3. cd to the NixOS folder from the unraid terminal.

4. Delete the contents of the NixOS/rootfs folder (i.e. rm -rf NixOS/rootfs/*).

5. Extract the hydra container tarball to NixOS/rootfs

6. `mkdir NixOS/rootfs/.ssh`

7. echo "your ssh public key" > NixOS/rootfs/.ssh/authorized_keys

8. chmod 0600 NixOS/rootfs/.ssh/authorized_keys

9. Edit the config file and add this line:

lxc.init.cmd = /sbin/init

10. (Optional) Copy the NixOS folder to NixOS-template so that you can reuse it later. Then you can just create copies of it.

11. Start the NixOS container.

 

Done.

 

12. (Optional) If you want to create a new container from your template, just copy the NixOS-template to its new name, like NixOS-App. Then edit NixOS-App/config: give it a new network address and modify the path of the rootfs.
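Steps 4-9 can be condensed into a small script (a sketch; the container directory, tarball path, and key are placeholders, and the .ssh location follows the steps above):

```shell
# Turn a freshly created container's rootfs into a NixOS rootfs
# (steps 4-9 above). Arguments: container dir, Hydra tarball, ssh pubkey.
setup_nixos_ct() {
  local ct="$1" tarball="$2" pubkey="$3"
  rm -rf "$ct/rootfs"/*                                        # step 4
  tar -xf "$tarball" -C "$ct/rootfs"                           # step 5
  mkdir -p "$ct/rootfs/.ssh"                                   # step 6
  printf '%s\n' "$pubkey" > "$ct/rootfs/.ssh/authorized_keys"  # step 7
  chmod 0600 "$ct/rootfs/.ssh/authorized_keys"                 # step 8
  echo "lxc.init.cmd = /sbin/init" >> "$ct/config"             # step 9
}

# e.g. setup_nixos_ct /mnt/cache/lxc/NixOS /mnt/user/isos/nixos.tar.xz "ssh-ed25519 AAAA..."
```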

 

Edited by The_Eric
  • Like 1
Link to comment
