[Plugin] LXC Plugin


Recommended Posts

8 minutes ago, ich777 said:

Something must be wrong somehow on your end, I did the exact same steps as you did, just with slightly different commands.

 

As you can see from my screenshots everything is working just fine...

 

Ok, thanks for checking that. I also tested most of the mentioned commands in the VNC container, but got the same result.

 

I have no idea what else could be the problem in this case. I had installed and removed the plugin 1-2 weeks earlier. Maybe I have to clean some files/folders manually? Maybe set rw permissions for the lxc path?

Link to comment
16 minutes ago, Kulisch said:

Ok, thanks for checking that. I also tested most of the mentioned commands in the VNC container, but got the same result.

Try the following:

  1. Destroy the container
  2. Create a new VNC container with the name: DebianVNCTest
  3. Wait for the installation to finish
  4. Open up the browser and go to the noVNC web interface of the container at LXCContainerIP:8080/vnc.html?autoconnect=true
  5. In the taskbar click on the terminal
  6. Type in "su" and enter "Unraid" as the password
  7. Type in "passwd", enter "Unraid" and after that "12345" twice
  8. Type in "exit"
  9. Type in "su" again and enter "12345"
  10. When in the su shell, type in "passwd debian" and enter "12345" twice
  11. After that you could switch with "su debian", but I would recommend typing "exit" to go back to the debian shell (I never recommend jumping from shell to shell into yet another shell and so on...)
  12. Back in the debian shell, type in "passwd", enter "12345" once and after that type in "Unraid" twice

(all commands without double quotes)

 

You should end up with the password "Unraid" for the user debian and "12345" for root.
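Condensed, the whole sequence from the container's terminal looks roughly like this (passwords are typed at the prompts, not on the command line):

su                  # password: Unraid
passwd              # set root's new password to 12345 (typed twice)
exit
su                  # root's password is now 12345
passwd debian       # set debian's new password to 12345 (typed twice)
exit
passwd              # back as debian: current password 12345, new password Unraid (typed twice)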

 

15 minutes ago, Kulisch said:

Maybe I have to clean some files/folders manually?

No, just destroy the container and create a new one; maybe try picking a different name and that should do it (even the same name is fine...).

 

15 minutes ago, Kulisch said:

Maybe set rw permissions for the lxc path?

Please don't do this!!! This will most definitely mess up all your LXC containers!

Link to comment
22 minutes ago, Kulisch said:

Start failing there:

Is your keyboard layout maybe wrong? Sorry, I really don't know what could be wrong on your machine...

It seems like you mistyped the password or something like that...

I can't imagine anything else; also, it is working for everyone else as far as I know...

 

Also what is this line with:

#Password: Unraid

 

22 minutes ago, Kulisch said:

In the meantime, I tried another image... 

 

Result:

 

kulisch@Alpine:/$ su root
su: must be suid to work properly

 

That's why I don't recommend connecting to the root shell, switching to a user shell and then trying to switch to a root shell again. I also don't see why this would make sense...
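If you want to dig deeper: that particular error usually just means the su binary has lost its setuid bit. A quick check inside the container, as a sketch (the path may be /bin/su or /usr/bin/su depending on the distribution):

ls -l /bin/su       # a working su shows an "s" in the owner bits, e.g. -rwsr-xr-x
chmod u+s /bin/su   # as root: restores the setuid bit if it only shows "x" there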

Link to comment
39 minutes ago, ich777 said:

Sorry, I really don't know what could be wrong on your machine...

 

Honestly... I'm thinking the same way... I wish I could show you in plain text what I'm typing, but I don't know how. I want to show you that this is not a layer 8 problem, because I'm starting to feel ridiculous.

  

39 minutes ago, ich777 said:

Also what is this line with:

#Password: Unraid

 

 

I wanted to make sure that I typed the password correctly, because like you said... maybe the keyboard layout is wrong. But that's not the case.

 

If this helps: I can change my layout with dpkg-reconfigure, but this shouldn't affect this in any way.
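(Concretely, I mean something like this on Debian:)

dpkg-reconfigure keyboard-configuration   # interactively reselect the console keyboard layout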

 

39 minutes ago, ich777 said:

That's why I don't recommend connecting to the root shell, switching to a user shell and then trying to switch to a root shell again. I also don't see why this would make sense...

Switching users multiple times doesn't make much sense, that is correct, but it works on every other VM, machine,... it's only for testing purposes, to troubleshoot.

 

kulisch -> root -> kulisch -> root (works on my test machine too)

 

[screenshot]

 

If there is no solution for this, I will uninstall this plugin and come back later, hoping that this problem gets solved by... I don't know, some random patch or something.

Edited by Kulisch
Link to comment
8 minutes ago, Kulisch said:

Switching users multiple times doesn't make much sense, that is correct, but it works on every other VM, machine,... it's only for testing purposes, to troubleshoot.

This is working in the VNC LXC container too:

[screenshot]

Link to comment

Ok, SSH to the VNC container works but su does not... I can't explain why...

 

[screenshot]

 

So if no one else is experiencing these issues, then that is fine. I will try this again in the future, but for now I'll leave it, because I don't know what exactly the problem is in this case.

 

But thank you very much for your time and for trying to find a solution.

Link to comment
15 minutes ago, Kulisch said:

So if no one else is experiencing these issues, then that is fine. I will try this again in the future, but for now I'll leave it, because I don't know what exactly the problem is in this case.

Sorry but I have to say that something must be wrong on your side:

[screenshot]

 

No issue here whatsoever over ssh...

(please ignore the one su attempt, I had to change the password since I forgot it after changing it that many times... :D )

  • Thanks 1
Link to comment

Good day, has anyone been able to run a GPU within a container?

 

I've managed to pass it into the container by following this guide, with the addition of an extra parameter:

 

# Allow cgroup access
lxc.cgroup.devices.allow = c 195:* rwm
lxc.cgroup.devices.allow = c 243:* rwm
lxc.cgroup.devices.allow = c 239:* rwm

# Pass through device files
lxc.mount.entry = /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry = /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.mount.entry = /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry = /dev/nvidia1 dev/nvidia1 none bind,optional,create=file
lxc.mount.entry = /dev/nvidiactl dev/nvidiactl none bind,optional,create=file

 

 

Afterwards, running nvidia-smi in the container works.
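(For anyone copying the snippet above: the major numbers in the lxc.cgroup.devices.allow lines have to match your own host, since nvidia-uvm gets a dynamically assigned major. They can be checked on the Unraid console roughly like this:)

ls -l /dev/nvidia*
# character devices show "major, minor" before the date, e.g.:
# crw-rw-rw- 1 root root 195,   0 ... /dev/nvidia0
# crw-rw-rw- 1 root root 243,   0 ... /dev/nvidia-uvm   <- this major can differ per system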

 

However, when I try to spin up a Docker container using the GPU, I get this error:

 ⠹ Container jellyfin  Starting                                                                                                                         0.2s
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: container error: failed to get device cgroup mount path: no cgroup filesystem mounted for the devices subsytem in mountinfo file: unknown

 

 

I've tried several "recommendations" from the web, including this, to no avail.

 

This is the sample Debian Bullseye container.

 

I've also been unable to get a more recent stable Ubuntu container running.

 

This command creates the container, but it doesn't start:

 

lxc-create --name docker --template download -- --dist ubuntu --release jammy --arch amd64

 

Attempting to start it from the CLI doesn't show any errors.

 

The only other release that has worked is impish.

Link to comment
2 hours ago, juan11perez said:

# Pass through device files

Why?

 

I would rather recommend installing the drivers in the container.
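One possible way in a Debian-based container is the official .run installer without building the kernel module (VERSION is a placeholder and must match the driver already loaded on the Unraid host), roughly:

# inside the container
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/VERSION/NVIDIA-Linux-x86_64-VERSION.run
chmod +x NVIDIA-Linux-x86_64-VERSION.run
./NVIDIA-Linux-x86_64-VERSION.run --no-kernel-module   # skip the kernel module, the host already provides it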

 

2 hours ago, juan11perez said:

Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: container error: failed to get device cgroup mount path: no cgroup filesystem mounted for the devices subsytem in mountinfo file: unknown

Have you installed the container runtime in the container?

 

I have to investigate further, but don't hold your breath on this because I won't be around much in the next two months.

 

2 hours ago, juan11perez said:

Attempting to start it from the CLI doesn't show any errors.

You have to start it like this to get error messages:

lxc-start -F CONTAINERNAME

(the parameter -F stands for Foreground)
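If the foreground output still does not show anything useful, lxc-start can also write a debug log, for example:

lxc-start -n CONTAINERNAME -F -l DEBUG -o /tmp/CONTAINERNAME.log   # -l sets the log priority, -o the log file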

 

Also, Ubuntu won't work because it relies heavily on systemd, which is not available on Unraid, and the workarounds that I've found so far are not really reliable or, better said, good...

Link to comment

Thank you. I'm just taking the opportunity to learn something new with this plugin.

For instance, I learned why static IP instructions in the container conf don't work; again the culprit is systemd.

And I learned a workaround. Interesting!

 

38 minutes ago, ich777 said:

I would rather recommend installing the drivers in the container.

 

So I'll give it a go with installing the drivers in the container. 

 

 

38 minutes ago, ich777 said:

Have you installed the container runtime in the container?

 

I haven't. I've not worked out how to do it. I thought it was with:

    sudo apt-get install -y nvidia-docker2
    sudo pkill -SIGHUP dockerd
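(From what I understand, installing nvidia-docker2 should register an "nvidia" runtime in /etc/docker/daemon.json inside the container, roughly like the sketch below; whether that is actually the missing piece here is just my guess.)

{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}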

 

 

I run Docker quite extensively on Unraid and have no issues whatsoever, so there's no urgency or anything.

 

Thank you for taking the time to answer and for producing/sharing this component.

 

  • Like 1
Link to comment

WORKAROUND for Ubuntu, Debian Bookworm+, Fedora 36,... containers that won't start.

Add this line to your container config at the end:

lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1

 

This will actually enable the containers to start.
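For example, assuming your LXC path is /mnt/cache/lxc/ (as used elsewhere in this thread), you can append the line from the Unraid console like this (replace CONTAINERNAME):

echo 'lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1' >> /mnt/cache/lxc/CONTAINERNAME/config

Then stop and start the container again so the new config is picked up.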

  • Like 1
  • Upvote 1
Link to comment
On 5/23/2022 at 7:18 PM, ich777 said:

WORKAROUND: If you want to get an Ubuntu, Debian Bookworm+, Fedora 36,... container to run, you have to add this line to the end of your container configuration file so that it is actually able to start:

Thank you for the systemd tip. Tested Arch and Ubuntu; all work!!!!

 

Have you tried running Windows? I saw a video, but it involves LXD.

 

Just curious if it'd work.

 

Link to comment
12 minutes ago, juan11perez said:

Thank you for the systemd tip. Tested Arch and Ubuntu; all work!!!!

Glad to hear that everything is working now for you... :)

It is not a perfect workaround but it works for now... ;)

 

9 minutes ago, juan11perez said:

Have you tried running Windows? I saw a video, but it involves LXD.

Not yet, but it should work (please keep in mind that in the next two months I'm not able to do much here because I'm really busy in real life)... :/

LXD is only a set of tools/databases, or rather a management system for LXC, so it is not needed in general.

 

I haven't integrated LXD because it introduces too many dependencies like Python and could maybe interfere with other Python installations which might be installed through the NerdPack and so on...

Link to comment

Once again, thank you. It's a very useful tool (LXC).

 

I can see how much more efficient it is for setting up/using 'throwaway' VMs, or those set up just to run a specific application. Compared to a full VM, resource utilisation is minimal.

 

I was curious to see how Frigate would perform; while it did the job (using the Coral and NVIDIA card), it does have a minimal yet noticeable lag compared to a direct Docker container on Unraid. I guess that's to be expected when doing..... Unraid > LXC > Debian > Docker > Frigate as opposed to Unraid > Docker > Frigate.

 

I'm sure I'm not the only one extremely thankful for the effort you devote to this community, and I also understand that paying the bills takes priority.

 

Thank you again !!!!!!!!!!

Link to comment
9 minutes ago, juan11perez said:

I was curious to see how Frigate would perform; while it did the job (using the Coral and NVIDIA card), it does have a minimal yet noticeable lag compared to a direct Docker container on Unraid. I guess that's to be expected when doing..... Unraid > LXC > Debian > Docker > Frigate as opposed to Unraid > Docker > Frigate.

Isn't it also possible to install Frigate on bare metal instead of in a Docker container? That would be way more efficient.

 

My Home Assistant Core installation that runs inside an LXC container (without Docker) is way more performant than the Docker container, but that could also be a subjective feeling; I haven't actually tested whether it's faster.

Link to comment
On 6/9/2022 at 2:22 PM, Kulisch said:

So if no one else is experiencing these issues, then that is fine. I will try this again in the future, but for now I'll leave it, because I don't know what exactly the problem is in this case.

 

But thank you very much for your time and for trying to find a solution.

 

About the passwd and su problem... I solved this issue by changing the path where the LXC files are stored. I changed it to the mentioned path /mnt/cache/lxc/ and now everything works as expected.

 

Now I have another question:

 

I'm trying to pass a document scanner (Epson Perfection V30) into a container, but some of its information changes after unplugging and replugging the device.

 

root@unraid:/ lsusb
Bus 001 Device 009 <scanner>

 

After unplugging and replugging:
 

root@unraid:/ lsusb
Bus 001 Device 010 <scanner>

 

/mnt/cache/lxc/<container>/config

 

#USB Pass
lxc.cgroup.devices.allow = c 189:* rwm
lxc.mount.entry = /dev/bus/usb/001/011 dev/bus/usb/001/011 none bind,optional,create=file

 

My solution is to edit the config file manually every time.

 

Is there a simple solution to automatically mount my scanner into an LXC container? Or do I have to make a script that detects the scanner and edits the file? I don't want to pass through the whole bus folder.
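What I had in mind for the script route is roughly this (a rough, untested sketch; VENDORID:PRODUCTID is whatever lsusb reports for the scanner, and it would have to run before the container starts):

#!/bin/bash
# rewrite the scanner's lxc.mount.entry with the current bus/device number
ID="VENDORID:PRODUCTID"                   # as reported by lsusb for the scanner
CONF="/mnt/cache/lxc/<container>/config"

# lsusb -d prints e.g. "Bus 001 Device 010: ID xxxx:yyyy ...", grab bus and device number
read BUS DEV < <(lsusb -d "$ID" | awk '{gsub(":","",$4); print $2, $4}')

# point the existing USB mount entry at the current device node
sed -i "s|^lxc.mount.entry = /dev/bus/usb.*|lxc.mount.entry = /dev/bus/usb/$BUS/$DEV dev/bus/usb/$BUS/$DEV none bind,optional,create=file|" "$CONF"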

  • Like 1
Link to comment
3 hours ago, Kulisch said:

I changed it to the mentioned path /mnt/cache/lxc/ and now everything works as expected.

What path did you try before?

 

3 hours ago, Kulisch said:

My solution is to edit the config file manually every time.

Doesn't your printer get an entry in /dev itself? I don't think it is recognized as a tty device in /dev, is it? What about the devices in /dev/usb when the printer is connected?

You should also be able to create a custom udev rule for your specific device ID.

 

I haven't looked much into it, but searching for something like "Hotplugging USB Devices LXC" on Google should point you in the right direction.
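As a very rough sketch of what such a rule could look like (the idVendor/idProduct values are placeholders, take the real ones from lsusb; I have not tested this together with the LXC bind mount):

# e.g. /etc/udev/rules.d/99-scanner.rules on the host
SUBSYSTEM=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", SYMLINK+="scanner", MODE="0666"
# creates a stable /dev/scanner symlink whenever that exact device is plugged in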

Link to comment
