
[Plugin] LXC Plugin



8 hours ago, ich777 said:

These entries are wrong.

You can't use ':', you have to use '='.

':' was the old format and is not supported anymore.

 

Yes, because of the formatting error.
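
To illustrate the difference, here is the same (hypothetical) entry in both notations - only the second form is accepted by current LXC versions:

# old colon notation, no longer supported:
# lxc.start.auto: 1

# current key = value notation:
lxc.start.auto = 1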

 

May I ask why you are using LXC for Plex in the first place? A Docker container would be much easier to set up, or did I get this wrong and you have a different use case for the container?

 

Thanks ich777, I successfully passed through the Nvidia card and installed Plex following the instructions in this thread:

https://www.geekbitzone.com/posts/2022/proxmox/plex-lxc/install-plex-in-proxmox-lxc/

 

It totally works as I expected: I can use the Nvidia card to encode video from an Unraid folder and forward the Plex service on a dedicated ethernet interface to public connections (something the original Unraid Docker container failed to do for me).

 

Now I'm very interested in diving deeper into LXC.

 

Thanks ich777, you're a pro!

Link to comment
8 hours ago, buxel said:

I have tried rebooting and disabling/enabling the plugin already. The folders on `/mnt/cache/lxc` are owned by `root`. Is that correct?

Yes.

 

8 hours ago, buxel said:

migrate some existing containers from my previous servers.

What was your previous server and how did you migrate?

 

8 hours ago, buxel said:

I tried your plugin on a fresh installation of Unraid 6.12.8 with LXC plugin version 2024.03.14. Creating containers worked fine but destroying them always results in this error:

Please post your Diagnostics.

Are these migrated containers or are these newly set up containers through the plugin?

I assume that the path /mnt/cache is on a ZFS pool; did you by any chance delete the dataset zfs_lxccontainers?

Or did you maybe create a container and then migrate the data from the old system over?

 

Please describe exactly what you are doing.

 

EDIT: Just as a sanity check I created and deleted a container, also using ZFS as the Backing Storage type.

Link to comment

Hello @ich777, thank you for the quick response.

 

Migration is something I want to tackle later. For now, all containers are newly created through the plugin.

I have not deleted (or in any other way messed with) anything. I first want to see how the system works before I break it 😉. However, I have not created any dataset myself; I just followed the instructions in the plugin and (I think) it mentioned that non-existing paths will be created.

 

Please find the logs attached. Before exporting, I tried again with "test-container", which gave the same error as before.

 

 

hermes-diagnostics-20240319-1929.zip

Edited by buxel
Link to comment
1 hour ago, buxel said:

Please find the logs attached. Before exporting, I tried again with "test-container", which gave the same error as before.

Just out of curiosity I also tried a NixOS 23.11 container:

[screenshot]

 

...let it run for a bit:

[screenshot]

 

...and then deleted it:

[screenshot]

 

Seems to work fine here...

 

I then changed the plugin to Directory as Backing Storage type:

[screenshot]

 

...and there it is when I delete the container:
[screenshot]

 

 

However, it seems to be specific to NixOS since it does not happen with a Debian container.

 

I just noticed that the immutable bit is set there. You have to remove that first and then delete it.

To remove that bit, open up an Unraid terminal and type in:

chattr -i <PATHTOCONTAINER>/rootfs/var/empty

(you have to replace <PATHTOCONTAINER> with the absolute path to the container so that the command points to its /rootfs/var/empty, like in my screenshot: /mnt/nvme/lxc/nix/rootfs/var/empty)

 

After that you can delete the container's folder from the lxc directory and it will disappear (refresh the LXC page once), or you can go to the LXC page and delete the container there, since after removing the bit the deletion will work as intended.
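
For example, the full sequence from the Unraid terminal could look like this (a sketch assuming the container is named "nix" and lives under /mnt/nvme/lxc - adjust the paths to your setup):

# check whether the immutable flag (i) is set on the directory itself
lsattr -d /mnt/nvme/lxc/nix/rootfs/var/empty
# remove the immutable flag
chattr -i /mnt/nvme/lxc/nix/rootfs/var/empty
# now the container directory can be removed (or delete it from the LXC page instead)
rm -rf /mnt/nvme/lxc/nix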

 

Hope that helps for now; I will look into whether I can work around that in the plugin.

 

I'll look into that and report back.

 

EDIT: BTW, please set the Backing Storage Type in the LXC settings to BTRFS in your case, since then LXC will use native BTRFS snapshots. ;)

 

EDIT2: @buxel please update the LXC plugin to version 2024.03.19a where I implemented a workaround to remove the immutable bit first and then delete the container.

Link to comment

I can confirm everything you just said. The immutable bit seems to date back quite a while: https://github.com/NixOS/nixpkgs/commit/3877ec5b2ff7436f4962ac0fe3200833cf78cb8b#commitcomment-19100105

 

I assume that, NixOS being special in its ways about immutability, this was implemented to keep some declarative guarantees and prevent apps from writing where they shouldn't.

I have (ab)used this bit myself to make sure no app accidentally writes to a not-yet-mounted share.

 

Oh, and thanks for the note about the backing storage. 👍

Link to comment
2 minutes ago, buxel said:

Oh, and thanks for the note about the backing storage. 👍

I don't know if you have seen it already, but I've made a workaround for that; please update the plugin to version 2024.03.19a.

 

Container removal now works fine, even when you are using Directory as the backing storage type.

Link to comment

Hello again,

 

I've been tinkering with the container and noticed some differences from my other server.

I'm trying to run Tailscale inside the container, following these instructions: https://tailscale.com/kb/1130/lxc-unprivileged

 

tailscaled refuses to work:

is CONFIG_TUN enabled in your kernel? `modprobe tun` failed with: 
wgengine.NewUserspaceEngine(tun "tailscale0") error: tstun.New("tailscale0"): operation not permitted
flushing log.
logger closing down
getLocalBackend error: createEngine: tstun.New("tailscale0"): operation not permitted

 

`/dev/net/tun` is available inside the container but I suspect some permissions are off.

The Proxmox wiki mentions permissions, but I don't know if the mappings are the same in Unraid: https://pve.proxmox.com/wiki/OpenVPN_in_LXC
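
For reference, this is roughly how the device node can be checked from inside the container (10, 200 is the standard char major/minor pair for the tun device, which is what the cgroup2 allow rule in the config below refers to):

ls -l /dev/net/tun
# expected output along the lines of: crw-rw-rw- 1 root root 10, 200 <date> /dev/net/tun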

 

Here is the container's config:

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: --dist nixos --release 23.11 --arch amd64
# Template script checksum (SHA-1): 78b012f582aaa2d12f0c70cc47e910e9ad9be619
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
lxc.mount.entry = proc dev/.lxc/proc proc create=dir,optional 0 0
lxc.mount.entry = sys dev/.lxc/sys sysfs create=dir,optional 0 0

# Allow Tailscale to work
lxc.cgroup2.devices.allow = c 10:200 rwm
#lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file
lxc.mount.entry = /dev/net dev/net none bind,create=dir

# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = x86_64
# According to NixOS Wiki
lxc.init.cmd = /sbin/init


# Container specific configuration
lxc.rootfs.path = btrfs:/mnt/cache/lxc/morbo/rootfs
lxc.uts.name = morbo

# Network configuration
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0

lxc.net.0.hwaddr=52:54:00:72:04:38
lxc.start.auto=1

 

Any pointers would be appreciated 😊

Edited by buxel
Link to comment
3 hours ago, buxel said:

Any pointers would be appreciated 😊

As far as I can tell, in the Proxmox documentation nesting is enabled:

features: nesting=1

and that basically means uncommenting this line:

#lxc.include = /usr/share/lxc/config/nesting.conf

so that it looks like this:

lxc.include = /usr/share/lxc/config/nesting.conf

 

However, it seems a bit much just to get the tun device working; IIRC you don't need to uncomment this, but I could also be wrong about that. Sorry, but I really can't help with Tailscale since I don't use it.

 

 

3 hours ago, buxel said:
# Allow Tailscale to work

It seems that you've found my post from here:

Link to comment
57 minutes ago, ich777 said:
lxc.include = /usr/share/lxc/config/nesting.conf

 

 

I have tried this but it results in an invalid configuration. I'm quite a noob when it comes to how LXC and AppArmor/SELinux relate, but from this thread I gathered that Unraid does not use AppArmor - hence the invalid configuration error when I uncomment the "lxc.include" line.
 

# <content of /usr/share/lxc/config/nesting.conf>
# Use a profile which allows nesting
lxc.apparmor.profile = lxc-container-default-with-nesting

# Add uncovered mounts of proc and sys, else unprivileged users
# cannot remount those

lxc.mount.entry = proc dev/.lxc/proc proc create=dir,optional 0 0
lxc.mount.entry = sys dev/.lxc/sys sysfs create=dir,optional 0 0

 

This is why I copied the two other 'lxc.mount' lines over to the container's config. Thinking about it, I may just be missing the equivalent setting for SELinux... 🤔

 

My hunch is that the /dev/net/tun device is mapped properly but the Tailscale process lacks the permissions to modify it.

Link to comment
Just now, buxel said:

This is why I copied the two other 'lxc.mount' lines over to the container's config. Thinking about it, I may just be missing the equivalent setting for SELinux... 🤔

No, please forget about that, this is the wrong direction.

 

Stop the container, comment out the line from above again (that was the wrong file I pointed you to), and append this to your config:

lxc.cap.drop =

and start the container again (no, I didn't forget anything; this is meant to look exactly like my example).

Link to comment

Thank you for looking into this with me.

I realized that I have mixed up two different problems: nesting was enabled for NixOS. The two mounts allow it to work properly, but they are unrelated to Tailscale or '/dev/net/tun'.

 

I have updated the config but there is no change in behavior. The error messages are the same.

For completeness' sake, here is the current config:

 

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)
lxc.mount.entry = proc dev/.lxc/proc proc create=dir,optional 0 0
lxc.mount.entry = sys dev/.lxc/sys sysfs create=dir,optional 0 0

# Allow Tailscale to work
lxc.cap.drop =
lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file

# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = x86_64
# According to NixOS Wiki
lxc.init.cmd = /sbin/init

# Container specific configuration
lxc.rootfs.path = btrfs:/mnt/cache/lxc/morbo/rootfs
lxc.uts.name = morbo

# Network configuration
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0

lxc.net.0.hwaddr=52:54:00:72:04:38
lxc.start.auto=1

 

Link to comment
6 minutes ago, buxel said:
# Allow Tailscale to work
lxc.cap.drop =

This is not what I wrote you should do; I wrote:

15 minutes ago, ich777 said:

append this to your config

to be more precise, put this at the end of your config.

 

I would always recommend appending your changes at the end, because you make your life harder than it needs to be if you put them in different spots in the config.
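
For example (a sketch only, reusing the Tailscale lines already in your config above), the end of the file would then look like this, with nothing following that could override the capability setting:

# ...rest of the existing config above...

# Allow Tailscale to work - appended at the very end
lxc.cap.drop =
lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file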

Link to comment

Oh, I was not aware that the order in the config matters. That is a pretty big gotcha!

 

With the _appended_ line it actually works as expected 👍. Does this mean the container inherits all capabilities from the host? If I figure out the right one, could I just add it via "lxc.cap.add"? I assume it is "net_admin".

 

The init command is there because I just followed the NixOS wiki. But you are right, it works fine with

Edited by buxel
Link to comment
54 minutes ago, buxel said:

Does this mean the container inherits all capabilities from the host?

Yes.

 

54 minutes ago, buxel said:

could I just add it via "lxc.cap.add"?

There is no cap add.

 

You can allow all caps with lxc.cap.drop = and then forbid individual caps like:

lxc.cap.drop =
lxc.cap.drop = mac_admin mac_override sys_time sys_module sys_rawio

(the caps listed in the second line are all dropped at container start - you have to experiment with what you need to drop and what not)
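
If you want to check which capabilities actually end up in the container, a quick sanity check from inside it could look like this (capsh comes from the libcap package and may need to be installed first):

# show the effective capability mask of the container's init process
grep CapEff /proc/1/status
# decode the mask into readable capability names (paste the value from above)
capsh --decode=<hex value from CapEff>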

Link to comment
  • 2 weeks later...
7 hours ago, stayupthetree said:

How will this affect LXC in Unraid? 

This post is from December 2023…

 

This won't affect the plugin at all because the plugin is not using LXD; it is pure LXC, and only LXD users will lose access to the image repository.

 

Question:

Will incus still maintain access to all prebuilt distributions?

Answer:

Yes, Incus and LXC both have access to all the images on the image server.

 

Link to comment
  • 3 weeks later...
2 hours ago, emrepolat7 said:

What do you think is more sensible: setting up an LXC for each app, or creating a Docker container?

That depends on you and how you like to configure things.

Me, for example, I have about 40 always-on Docker containers and 3 always-on LXC containers.

 

For example, I like to have my whole DNS stack (PiHole, Unbound, keepalived, DoH server and LANCache) in one single container, with the routing done internally in the container; the most important thing is that I don't have to change any ports on Unraid, since an LXC container always has its own dedicated IP.

 

I also like to have my HomeAssistant instance running in an LXC container to have more granular control over it, because I was really unsatisfied with the Docker container and it broke multiple times after updates.

 

There are also other things which are (in my opinion) better to set up in an LXC container, like AMP; the main benefit is that an LXC container is more like a VM than a Docker container, and it is easy to set up Docker inside it.

 

Here is what it looks like on my server:

[screenshot]

 

You can also try the AMP, Hass or PiHole containers; I already have premade container archives for those, which you can find here:

https://github.com/ich777/unraid_lxc_pihole

https://github.com/ich777/unraid_lxc_homeassistant_core

https://github.com/ich777/unraid_lxc_amp

 

Just download the RAW file 'lxc_container_template.xml', which you'll find in each repository, and put it in the directory /tmp/ on your server.

After that, navigate to: http://yourserverIP/LXCAddTemplate

On the next page you can configure the container as you need it (similar to installing a Docker container from the CA App), click Create, and wait for the Done button <- this can take a while depending on your Internet connection.
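
If you prefer the terminal, something like this should handle the download step (the exact raw URL is an assumption on my part - check the repository for the actual path and branch name, e.g. master vs. main):

wget -O /tmp/lxc_container_template.xml https://raw.githubusercontent.com/ich777/unraid_lxc_pihole/master/lxc_container_template.xml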

 

Hope that helps. ;)

Link to comment

It's interesting to hear how you've grouped certain services together in a single LXC container for better internal routing and dedicated IP addresses. I can see how this approach offers more control and easier management, especially for services like your DNS stack and HomeAssistant instance where granular control is important.

Using LXC containers for certain applications, like AMP, makes sense when you prefer a more VM-like environment and want to easily run Docker within it.

 

Thank you for the answer, and for sharing your setup and the links to your pre-made container archives.

Link to comment

@ich777 Hello, thank you very much for creating this plugin. I am trying to pass /dev/kfd to an LXC container and install the AMD ROCm drivers within it. I have added the following content to the config:

lxc.cgroup2.devices.allow = c 226:0 rwm
lxc.cgroup2.devices.allow = c 226:128 rwm
lxc.cgroup2.devices.allow = c 242:0 rwm
lxc.mount.entry = /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry = /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.mount.entry = /dev/kfd /dev/kfd none bind,optional,create=file

but I noticed that the /dev/kfd device file is not being created inside the LXC container. I can pass /dev/kfd to Docker and it gets recognized correctly. I am not sure what the issue is and I hope you can help.

On Unraid:

ls -l /dev/kfd
crw-rw-rw- 1 root video 242, 0 Apr 27 21:38 /dev/kfd

On the LXC container (Ubuntu Jammy):

ls /dev
console  core  dri  fd  full  initctl  log  lxc  mqueue  null  ptmx  pts  random  shm  stderr  stdin  stdout  tty  tty1  tty2  tty3  tty4  urandom  zero

 

Link to comment
36 minutes ago, zh522130 said:
lxc.mount.entry = /dev/kfd /dev/kfd none bind,optional,create=file

 

You have an extra / at the beginning of the device path inside the container; try it like this:

lxc.mount.entry = /dev/kfd dev/kfd none bind,optional,create=file

 

 

If that still isn't working, try the following:

lxc.mount.entry = /dev/kfd dev/kfd none bind,optional,create=char
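
Once the container is restarted, a quick check from inside it should show the device node again, mirroring the host (inside an unprivileged container the owner/group may show up as nobody/nogroup because of the ID mapping):

ls -l /dev/kfd
# crw-rw-rw- 1 root video 242, 0 <date> /dev/kfd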

 

Link to comment
18 minutes ago, ich777 said:

You have an extra / at the beginning of the device path inside the container; try it like this:

lxc.mount.entry = /dev/kfd dev/kfd none bind,optional,create=file

 

 

If that still isn't working, try the following:

lxc.mount.entry = /dev/kfd dev/kfd none bind,optional,create=char

 

You are amazing, you've solved my problem. This issue has been troubling me for days. Thank you once again.

Link to comment
