[Plugin] LXC Plugin



2 hours ago, cr08 said:

Am I missing something to see the container memory usage? I'm seeing the other container stats but it's not showing the memory usage.

You will see the container memory usage only with cgroup v1.

I've already looked into this and there is no easy fix for it on Unraid, but it's on my to-do list.
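A quick way to check which cgroup version the host is currently running (a small sketch, not from the original post; run from an Unraid terminal):

stat -fc %T /sys/fs/cgroup/
# prints "cgroup2fs" when the host runs cgroup v2 and "tmpfs" when it runs cgroup v1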

 

1 hour ago, csrihari said:

Edit: Though it says Backup Failed, it did create the tar file. I did not attempt restoring.

Certain file paths are perhaps too long, but it will create the tar file anyway. Have you tried the integrated snapshot feature yet?

  • Like 1
Link to comment
12 hours ago, ich777 said:

Certain file paths are perhaps too long, but it will create the tar file anyway. Have you tried the integrated snapshot feature yet?

I did not try snapshots but will do. I am just playing around at this point to see how it all works. By the way, the tar in the backup script works without a problem if verbose is enabled. Weird.

Edited by csrihari
Link to comment
1 hour ago, csrihari said:

By the way, the tar in the backup script works without a problem if verbose is enabled. Weird.

This is really weird; I'll have to look into it.

The snapshot feature should also work fine, but please keep in mind that if you take a snapshot, you need to leave the browser tab in the foreground.

Link to comment
On 1/27/2023 at 1:41 PM, ich777 said:

I don't see that you've enabled cgroup v2 as mentioned in the first post.

You have to put "unraidcgroupv2" on the same line, not below...

[screenshot attached]

I definitely did: [screenshot attached]

 

Edit: I didn't realize it had to be on the same line.

Edited by Exes
Link to comment
15 minutes ago, Exes said:

I definitely did

But from what I can see in the diagnostics that you sent over after you changed it, it is not applied:

Jan 26 19:19:34 CD1GIT-Unraid kernel: Command line: BOOT_IMAGE=/bzimage pcie_acs_override=downstream initrd=/bzroot

 

Otherwise it would show up in the line posted above.
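For reference, the relevant boot entry in /boot/syslinux/syslinux.cfg should end up looking roughly like this once "unraidcgroupv2" is added to the same append line (a sketch; the pcie_acs_override flag is carried over from the command line above, and your entry may contain other flags):

label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream unraidcgroupv2 initrd=/bzroot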

 

Did you reboot after you applied that setting? Is "Unraid OS" your default boot method?

 

Please run:

lxc-checkconfig

from an Unraid terminal and post the full output.

Link to comment

Enabling nesting gives an error:

lxc-start: debtest: ../src/lxc/confile.c: set_config_apparmor_profile: 1651 Invalid argument - Built without AppArmor support
lxc-start: debtest: ../src/lxc/parse.c: lxc_file_for_each_line_mmap: 129 Failed to parse config file "/usr/share/lxc/config/nesting.conf" at line "lxc.apparmor.profile = lxc-container-default-with-nesting"

Is this expected?

Link to comment
4 hours ago, csrihari said:

Is this expected?

A little bit more context would be nice.

What container are you using? The config would also be helpful. What do you want to do with the container?

 

Your error is right here:

4 hours ago, csrihari said:

lxc-start: debtest: ../src/lxc/parse.c: lxc_file_for_each_line_mmap: 129 Failed to parse config file "/usr/share/lxc/config/nesting.conf" at line "lxc.apparmor.profile = lxc-container-default-with-nesting"

LXC on Unraid is built without AppArmor because Unraid uses SELinux.
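If anyone actually needs nesting despite this, one possible workaround (a sketch, not from this thread) is to include a copy of nesting.conf with the AppArmor line removed instead of the stock file; the destination path below is only an example:

# on the Unraid host; the destination path is an example
cp /usr/share/lxc/config/nesting.conf /mnt/user/lxc/nesting-noapparmor.conf
sed -i '/lxc.apparmor.profile/d' /mnt/user/lxc/nesting-noapparmor.conf
# then, in the container config, include the copy instead of the stock nesting.conf:
# lxc.include = /mnt/user/lxc/nesting-noapparmor.conf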

Link to comment
10 hours ago, ich777 said:

A little bit more context would be nice.

What container are you using? The config would also be helpful. What do you want to do with the container?

 

Your error is right here:

LXC on Unraid is built without AppArmor because Unraid uses SELinux.

Sorry, never mind. I was overthinking my Docker setup. I disabled this and it works fine.

  • Like 1
Link to comment

Hello, I would like to set up a VPN connection to a Fritzbox (IPsec) in an Ubuntu LXC container, and I get this error message. Does anyone have an idea why I don't have the rights to add the network as root?

 

[screenshot attached]

 

Apparently it is because the TUN drivers are not installed; see the screenshot below.

 

[screenshot attached]

Edited by Lucas Mietke
Link to comment
10 hours ago, Lucas Mietke said:

Apparently it is because the TUN drivers are not installed; see the screenshot.

I have never done this but I can look into it next week.

 

From a quick Google search I found this.

 

At least from my understanding, you have to add this to the config of the LXC container:

lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net dev/net none bind,create=dir
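After restarting the container, a quick check from inside it (a sketch, assuming the bind mount above took effect):

ls -l /dev/net/tun
# should show a character device with major/minor 10, 200, e.g.:
# crw-rw-rw- 1 root root 10, 200 ... /dev/net/tun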

 

Link to comment
15 hours ago, ich777 said:

I have never done this but I can look into it next week.

 

From a quick Google search I found this.

 

At least from my understanding, you have to add this to the config of the LXC container:

lxc.cgroup2.devices.allow = c 10:200 rwm
lxc.mount.entry = /dev/net dev/net none bind,create=dir

 

Thanks, that works fine.

  • Like 1
Link to comment
4 hours ago, ich777 said:

I'm not an Ubuntu expert (because I still don't like it :D ) and I can only link to here.

 

I can only tell you that it works just fine in my Debian containers.

 

This is purely a distribution thing and has nothing to do with LXC.

Okay, thanks.

 

So now I know that it cannot be changed in LXC itself; I looked for and found the solution for Ubuntu:

 

https://www.itslot.de/2014/02/ubuntu-ip-adresse-per-konsole-andern.html#:~:text=In Ubuntu können Sie eine,den DHCP-Modus umgestellt werden.

  • Like 1
Link to comment
15 hours ago, Lucas Mietke said:

So now I know that it cannot be changed in LXC itself; I looked for and found the solution for Ubuntu

Yes and no. It could be, but I would rather recommend that you configure that in the container itself or in your DHCP server, since an LXC container is more like a VM and not like a Docker container.

 

For instance, on my Debian Bullseye containers I use systemd networking to configure IP addresses, gateway, DNS, and so on.
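For reference, a minimal sketch of such a configuration with systemd-networkd inside a Debian container (interface name, addresses and file name are examples, not taken from this thread):

# /etc/systemd/network/eth0.network
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1

# then enable the service inside the container:
# systemctl enable --now systemd-networkd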

Link to comment
On 6/12/2022 at 5:24 AM, juan11perez said:

OK, so I got the Debian Bullseye container to work, passing through an NVIDIA GPU and using Docker.... 🙂

It's transcoding for Plex, Frigate and possibly any other container.

 

I've tried documenting my procedure in a readme.txt

I've also created a script (system_config.sh) to install all requirements.

I'm also sharing my container <<dockerhost>> config file.

 

I'm attaching all docs.

 

I'm by no means an expert; I got it working after several hours of trial and error, but I'll try to answer questions. I've included the sources of info in the readme.txt.

 

@ich777 thank you again

 

 

[attachment: files.zip, 4.71 kB]

Hi juan11perez, thanks for your guide file.

I was able to install Debian and the NVIDIA driver (verified with nvidia-smi) in LXC, but when I install torch==1.13.1+cu117 in Debian, it says Torch is not able to use the GPU.

I followed your readme to pass through the NVIDIA card (Tesla P4), but when I check ls -l /dev/nvidia*, it only shows /dev/nvidia0 and /dev/nvidiactl (major number 195); there is no /dev/nvidia-uvm and /dev/nvidia-uvm-tools. How can I get those two?

 

# Pass nvidia card into container
# Allow cgroup access
lxc.cgroup.devices.allow = c 195:* rwm

# Pass through device files
lxc.mount.entry = /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidia1 dev/nvidia1 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file

 

root@Tower:~#  ls -l /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Feb 13 19:49 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 13 19:49 /dev/nvidiactl

/dev/nvidia-caps:
total 0
cr-------- 1 root root 244, 1 Feb 13 19:49 nvidia-cap1
cr--r--r-- 1 root root 244, 2 Feb 13 19:49 nvidia-cap2
root@Tower:~# ls -al /dev/dri/*
crwxrwxrwx 1 root video 226,   0 Feb 13 19:46 /dev/dri/card0
crwxrwxrwx 1 root video 226,   1 Feb 13 19:48 /dev/dri/card1
crwxrwxrwx 1 root video 226, 128 Feb 13 19:46 /dev/dri/renderD128
crwxrwxrwx 1 root video 226, 129 Feb 13 19:48 /dev/dri/renderD129

/dev/dri/by-path:
total 0
drwxrwxrwx 2 root root 120 Feb 13 19:48 ./
drwxrwxrwx 3 root root 140 Feb 13 19:48 ../
lrwxrwxrwx 1 root root   8 Feb 13 19:48 pci-0000:00:02.0-card -> ../card1
lrwxrwxrwx 1 root root  13 Feb 13 19:48 pci-0000:00:02.0-render -> ../renderD129
lrwxrwxrwx 1 root root   8 Feb 13 19:46 pci-0000:05:00.0-card -> ../card0
lrwxrwxrwx 1 root root  13 Feb 13 19:46 pci-0000:05:00.0-render -> ../renderD128

 

 

Unraid Nvidia driver installed: 515.76

Nvidia Info:
Nvidia Driver Version: 515.76
Open Source Kernel Module: No
Installed GPU(s): 0: Tesla P4 (05:00.0, GPU-c8b07fdb-a2d2-d7b4-0c65-559015576121)

 

 

Edited by dhlsam (spelling)
Link to comment
19 hours ago, dhlsam said:

I was able to install Debian and the NVIDIA driver (verified with nvidia-smi) in LXC

Are you sure that you've installed the exact same driver version as on the host?
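One quick way to compare the two (a sketch; run the same command on the Unraid host and inside the container, and the output should match):

nvidia-smi --query-gpu=driver_version --format=csv,noheader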

 

19 hours ago, dhlsam said:

Torch is not able to use GPU

What kind of software are you trying to run? I think there is a command-line switch to skip the CUDA Torch check, at least I think so...

Link to comment
8 hours ago, ich777 said:

Are you sure that you've installed the exact same driver version as on the host?

 

What kind of software are you trying to run? I think there is a command-line switch to skip the CUDA Torch check, at least I think so...

Yes, it's the same driver version as the NVIDIA driver on the host.

When I run ls -al /dev/nvidia*, the devices /dev/nvidia-modeset, /dev/nvidia-uvm-tools and /dev/nvidia-uvm show up after I uninstall the driver and reinstall it (without rebooting); those three disappear again after I reboot Unraid.
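For what it's worth, guides for NVIDIA passthrough into containers often (re)create the missing UVM device nodes on the host at boot with a small script along these lines (a sketch, not taken from this thread; double-check the major number on your system):

# run on the Unraid host after the driver is loaded, e.g. from a user script
modprobe nvidia-uvm
D=$(grep nvidia-uvm /proc/devices | awk '{print $1}')
mknod -m 666 /dev/nvidia-uvm c "$D" 0
mknod -m 666 /dev/nvidia-uvm-tools c "$D" 1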

 

I am using the automatic installation on Linux of stable-diffusion-webui from AUTOMATIC1111.

 

Link to comment
  • 2 weeks later...

I added a new Ubuntu Jammy container (it added fine) but it cannot start.
The only log data I see is:


LXC: Container Ubuntu started
br0: port 2(veth2eduqn) entered disabled state
device veth2eduqn left promiscuous mode
br0: port 2(veth2eduqn) entered disabled state


It says it started, but it doesn't actually start.

Edit: It seems to work fine with Debian Bullseye.

How does one map paths like with Docker?

Edited by Econaut
Link to comment
1 hour ago, Econaut said:

I added a new Ubuntu Jammy container (it added fine) but it cannot start.

Have you read the first post, especially right at the beginning where it says "ATTENTION" and "cgroup v2"?

Follow the instructions and your issue should be solved.

 

cgroup v2 will be the default with the release of Unraid 6.12.0.

 

2 hours ago, Econaut said:

How does one map paths like with Docker?

You have to edit your config manually (you can find the path to the config when you click on the container and then on "Show Config"; it is shown in the first line of the pop-up).

 

After that add the line:

lxc.mount.entry = /mnt/user/YOURSHARE mnt/PATHINLXC none bind 0 0

 

Please make sure that, as in this example, the path /mnt/PATHINLXC exists in the first place, that you set the permissions correctly, and that you create a user in the LXC container that matches the host; otherwise you will run into permission issues if something else is using the files that you mounted.
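A short sketch of the container-side preparation for the example above (share name, path and IDs are placeholders; on Unraid, share files are commonly owned by nobody:users, i.e. 99:100):

# inside the LXC container
mkdir -p /mnt/PATHINLXC
# create a user whose numeric UID/GID match the owner of the files on the host
# (group 100 usually already exists as "users" on Debian/Ubuntu)
useradd -u 99 -g 100 -M shareuser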

  • Like 1
  • Thanks 1
  • Upvote 1
Link to comment
