[Plugin] LXC Plugin



LXC

(Unraid 6.10.0+)

 

LXC is a well-known Linux container runtime that consists of tools, templates, a library, and language bindings. It's pretty low level and very flexible, and it covers just about every containment feature supported by the upstream kernel.

This plugin does not include the lxc CLI tool provided by LXD!

 

This basically allows you to run an isolated system with shared resources at the CLI level (without a GUI) on Unraid, which can be deployed in a matter of seconds and destroyed just as quickly.

Please keep in mind that you have to set up everything manually after deploying the container, e.g. SSH access or a dedicated user account other than root.

 

 

ATTENTION: This plugin is currently in development and features will be added over time.

LIMITATIONS: Distributions which use systemd (Ubuntu, Debian Bookworm+, ...) currently will not work, or will not work properly.

WORKAROUND: If you want to get an Ubuntu, Debian Bookworm+, Fedora 36, ... container to run, you have to add this line to the end of your container configuration file so that it is actually able to start:

lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1
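If you prefer doing this from the Unraid terminal, the line can simply be appended to the container's config file. The sketch below uses a scratch file so it can be tried safely; in practice the target would be something like /mnt/cache/lxc/CONTAINERNAME/config (the exact path depends on your storage setting):

```shell
# Sketch: append the systemd workaround line to a container config file.
# /tmp/lxc-demo-config is a stand-in for the real config file here.
CFG=/tmp/lxc-demo-config
: > "$CFG"   # start from an empty scratch file for this demo
echo 'lxc.init.cmd = /lib/systemd/systemd systemd.unified_cgroup_hierarchy=1' >> "$CFG"
tail -n 1 "$CFG"
```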

 

 

  1. Install LXC from the CA App:

     
  2. Go to the Settings tab in Unraid and click on "LXC":

     
  3. Enable the LXC service, select the default storage path for your images (this path will be created if it doesn't exist and it always needs to have a trailing / ) and click on "Update":
    ATTENTION:
    - It is strongly recommended to use a real path like "/mnt/cache/lxc/" or "/mnt/diskX/lxc/" instead of a FUSE path like "/mnt/user/lxc/", to avoid slowing down the entire system when performing heavy I/O operations in the container(s) and to avoid issues when the Mover wants to move data from a container which is currently running.
    - It is also strongly recommended not to share this path over NFS or SMB, because if the permissions get messed up the container won't start anymore, and because of the risk of data loss in the container(s)!
    - Never run New Permissions from the Unraid Tools menu on this directory because you will basically destroy your container(s)!

     
  4. Now you can see the newly created directory in your Shares tab in Unraid. If you are using a real path (which is strongly recommended), whether it's on the Cache or the Array, it should be fine to leave the Use Cache setting at No, because the Mover won't touch this directory if it's set to No:

     
  5. Now you will see LXC appearing in Unraid; click on it to navigate to it:

     
  6. Click on "Add Container" to add a container:

     
  7. On the next page you can specify the Container Name, the Distribution, Release, MAC Address and if Autostart should be enabled for the container, click on "Create":
    You can get a full list of Distributions and Releases to choose from here
    The MAC Address will be generated randomly every time; you can change it if you need a specific one.
    The Autostart checkbox lets you choose whether the container should start up when the Array or the LXC service is started (can be changed later).

     
  8. In the next popup you will see information about the installation status of the container (don't close this window until you see the "Done" button):

     
  9. After clicking on "Done" and "Done" in the previous window you will be greeted with this screen on the LXC page, to start the container click on "Start":
    If you want to disable Autostart for the container, click on "Disable" and the button will change to "Enable"; click on "Enable" to enable it again.

     
  10. After starting the container you will see various information (assigned CPUs, Memory usage, IP Address) about the container itself:

     
  11. By clicking on the container name you will see the storage location of this container's configuration file, as well as the config file contents itself:
    For further information on the configuration file see here

     
  12. Now you can attach to the started container by clicking the Terminal symbol in the top right corner of Unraid and typing in lxc-attach CONTAINERNAME /bin/bash (in this case lxc-attach DebianLXC /bin/bash):
    You can of course also connect to the container without /bin/bash, but it is always recommended to attach with the shell that you prefer.

     
  13. Now you will see that the terminal prompt changed to the container's hostname; this means that you are now successfully attached to the container's shell and the container is ready to use.
    I recommend always updating the packages first; for Debian-based containers run this command (apt-get update && apt-get upgrade):

     

 

Please keep in mind that this container is pretty much empty and nothing but the basic tools are installed, so you have to install nano, vi, openssh-server, ... yourself.

 

 

To install the SSH Server (for Debian based containers) see the second post.


Install SSH Server in Debian based containers:

 

 

Method 1 (recommended):

 

  1. Attach to the container with "lxc-attach DebianLXC /bin/bash" (replace DebianLXC with your container name):

     
  2. I would first recommend adding a password for the user root; to do so, enter "passwd" and type your preferred root password twice (nothing is displayed while typing):

     
  3. Now create a user with the command "useradd -m debian -s /bin/bash" (in this case the newly created username is "debian"):

     
  4. In the next step we will create a password for the user "debian" with the command "passwd debian" (replace "debian" with your preferred username); type in the password twice, as above for the root user:

     
  5. Now install the openssh-server with "apt-get -y install openssh-server":

     
  6. After it has installed successfully, you can close the terminal window of the LXC container and connect to the container with PuTTY or your preferred SSH client, using the container's IP, the username "debian", and the password set for the user "debian" (in this example we connect from a Linux shell with the command "ssh debian@10.0.0.237"; you can see the IP address in the LXC tab in Unraid):
     

Now you are connected through SSH with the user "debian" to your LXC container.

 

 

 

Method 2 (not recommended - root connection):

 

  1. Attach to the container with "lxc-attach DebianLXC /bin/bash" (replace DebianLXC with your container name):

     
  2. I would first recommend adding a password for the user root; to do so, enter "passwd" and type your preferred root password twice (nothing is displayed while typing):

     
  3. Now install the openssh-server with "apt-get -y install openssh-server":

     
  4. Now issue the command sed -i '/#PermitRootLogin prohibit-password/c\PermitRootLogin yes' /etc/ssh/sshd_config (this changes your SSH configuration file so that you can log in with the root account through SSH):

     
  5. Restart the sshd service with the command "systemctl restart sshd" to apply the new settings:

     
  6. After that you can close the terminal window of the LXC container and connect to the container with PuTTY or your preferred SSH client, using the container's IP, the username "root", and the password set for the "root" user (in this example we connect from a Linux shell with the command "ssh root@10.0.0.237"; you can see the IP address in the LXC tab in Unraid):

 

Now you are connected through SSH as the user "root" to your LXC container.
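The sed one-liner from step 4 can be tried safely first; the sketch below runs it against a scratch copy (paths and the file contents are placeholders) to show exactly what transformation it performs:

```shell
# Demonstrate what the sed command from step 4 does, using a scratch copy
# instead of the container's real /etc/ssh/sshd_config.
printf '#PermitRootLogin prohibit-password\n' > /tmp/sshd_config.demo
sed -i '/#PermitRootLogin prohibit-password/c\PermitRootLogin yes' /tmp/sshd_config.demo
cat /tmp/sshd_config.demo   # → PermitRootLogin yes
```

The c command replaces the entire matching line, so the commented-out default becomes an active "PermitRootLogin yes" directive.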


Been waiting for something like this. I just created an Ubuntu Bionic container and have no network connection. Nothing is shown on the page under Address, and 'ip a' doesn't show an IP address either.

 

 

EDIT:  I was able to get network connectivity using

lxc.net.0.ipv4.address
lxc.net.0.ipv4.gateway
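For reference, those two keys take CIDR-style values in the container's config file; a minimal static-network sketch (the addresses are placeholders for your network, and br0 is the usual Unraid bridge):

```
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 10.0.0.237/24
lxc.net.0.ipv4.gateway = 10.0.0.1
```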

 

But I don't see a place to put dns resolver info.

Edited by jmztaylor
5 minutes ago, jmztaylor said:

But I don't see a place to put dns resolver info.

You should be able to do this inside the container itself if the Gateway and the Address are assigned correctly within the container.

I think this should help: Click
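Concretely, with a static LXC network config the resolver is usually set inside the container itself, e.g. in /etc/resolv.conf (the nameserver below is a placeholder; use your router or preferred DNS server):

```
# /etc/resolv.conf inside the container
nameserver 10.0.0.1
```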

1 minute ago, ich777 said:

You should be able to do this inside the container itself if the Gateway and the Address are assigned correctly within the container.

I think this should help: Click

 

Yeah, that's what I did to get network connectivity. Still, all IP functions work, like pinging my router, but anything by domain gives 'temporary failure in name resolution'.

1 minute ago, ich777 said:

This seems possible but I first want to get it running stable and need a few responses if everything works well.

Understood!! You're awesome for getting this going in the first place. Serious kudos to you.

 

So far I am running a Debian Buster container without any issues. Trying to figure out how to pass through a USB device right now..

Edited by wtfcr0w
2 minutes ago, jmztaylor said:

Yeah, that's what I did to get network connectivity. Still, all IP functions work, like pinging my router, but anything by domain gives 'temporary failure in name resolution'.

Will look into this... :)

9 minutes ago, jmztaylor said:

Yeah thats what I did to get network connectivity.

There you have it:

lxc-start -F BionicTest
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.

 

It's also failing because systemd is missing.

Try to set up a Debian Bullseye container, it should work fine for now if this is a viable option for you.

 

EDIT: You actually have to kill the container or wait until the timeout kicks in if you click on Stop.

6 minutes ago, ich777 said:

There you have it:

lxc-start -F BionicTest
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.

 

It's also failing because systemd is missing.

Try to set up a Debian Bullseye container, it should work fine for now if this is a viable option for you.

 

EDIT: You actually have to kill the machine or wait until the timeout kicks in if you click on Stop.

 

Bullseye works fine. I saw in the first post that Ubuntu 19+ won't work. I'm not familiar with systemd, so I thought 18 would work. But yes, I can reproduce that. Thanks

Edited by jmztaylor
Just now, jmztaylor said:

Bullseye works fine.  I saw in the first post Ubuntu 19+ wont work.  So thought 18 would work.  But yes I can reproduce that.  Thanks

I already changed the first post; I haven't been able to test everything so far because there are a lot of images...

 

I can so far confirm that Homeassistant Core is working fine and Docker is working fine (with containers that don't need privileged rights). I've also made a container that uses noVNC in conjunction with TurboVNC to get a desktop environment through a browser (xrdp should also work with a few tweaks, from what I know).

20 minutes ago, wtfcr0w said:

Trying to figure out how to pass through a USB device right now..

List your USB devices with "lsusb":

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 8564:1000 Transcend Information, Inc. JetFlash
Bus 001 Device 003: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 001 Device 002: ID 1a86:7523 QinHeng Electronics CH340 serial converter
Bus 001 Device 007: ID 0781:5567 SanDisk Corp. Cruzer Blade
Bus 001 Device 006: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 001 Device 004: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

 

Let's say I want to add "Bus 001 Device 004: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller" to the container, do a "ls -l /dev/bus/usb/001/004":

crw-rw-r-- 1 root root 189, 3 May 22 07:02 /dev/bus/usb/001/004

 

After that add these lines to the end of your config file for the container:

lxc.cgroup.devices.allow = c 189:* rwm
lxc.mount.entry = /dev/bus/usb/001/004 dev/bus/usb/001/004 none bind,optional,create=file 

 

 

This is actually untested but this is how it should work.
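Equally untested, a variation in the same spirit: binding the whole bus directory instead of a single device node, so the container keeps access if the device gets a new device number after a replug (the bus number 001 and major number 189 are taken from the example above; adjust to your lsusb output):

```
lxc.cgroup.devices.allow = c 189:* rwm
lxc.mount.entry = /dev/bus/usb/001 dev/bus/usb/001 none bind,optional,create=dir
```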

Just now, ich777 said:

List your USB devices with "lsusb":

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 8564:1000 Transcend Information, Inc. JetFlash
Bus 001 Device 003: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 001 Device 002: ID 1a86:7523 QinHeng Electronics CH340 serial converter
Bus 001 Device 007: ID 0781:5567 SanDisk Corp. Cruzer Blade
Bus 001 Device 006: ID 05e3:0610 Genesys Logic, Inc. Hub
Bus 001 Device 004: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

 

Let's say I want to add "Bus 001 Device 004: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller" to the container, do a "ls -l /dev/bus/usb/001/004":

crw-rw-r-- 1 root root 189, 3 May 22 07:02 /dev/bus/usb/001/004

 

After that add these lines to the end of your config file for the container:

lxc.cgroup.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb/001/004 dev/bus/usb/001/004 none bind,optional,create=file 

 

 

This is actually untested but this is how it should work.

I found that as well, tried it, and no go. As soon as I altered the config file, the LXC container disappeared from the LXC page.

Just now, wtfcr0w said:

I've also tried Ubuntu Jammy and Fedora 36, no dice. Neither of them start.

You can always troubleshoot by starting it from the command line with the command:

lxc-start -F CONTAINERNAME

 

This will run the container in the foreground and should tell you what failed.

1 minute ago, wtfcr0w said:

I found that as well, tried it and no go. As soon as I altered the config file the LXC container disappeared from the LXC page.

Ah sorry, I edited the post above; the format was wrong, it should be = instead of :

27 minutes ago, ich777 said:

Ah sorry edited the post above, the format was wrong, it should be = instead of :

I was able to pass it through and find it in /dev, but for some reason I can't utilize the device. Messed around with permissions; I probably have to alter udev rules or something. I'll have to play around with it later this evening.

 

Also, when attempting to start Fedora in the foreground I get this:

lxc-start -F Fedora
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems, freezing.
Freezing execution.

 

Great job though. I can see this going far!

27 minutes ago, wtfcr0w said:

Also, when attempting to start Fedora in the foreground I get this:

Also the systemd issue…

 

27 minutes ago, wtfcr0w said:

Great job though. I can see this going far!

Hope it's enough for now…

 

I will definitely continue to improve the plugin, but I was a little concerned about releasing it with the systemd issues.

8 minutes ago, ich777 said:

Hope it's enough for now…

 

I will definitely continue to improve the plugin but I was a little concerned releasing the plugin with the systemd issues.

It's definitely enough. I'm thrilled to have even just the Debian lxc working! Thank you so much.

Just now, wtfcr0w said:

I'm thrilled to have even just the Debian lxc working!

Me too, because this saves a lot of resources on my server too, since I use an LXC container with Docker installed in it to build all my Docker containers locally and push them to DockerHub and the GitHub Container Registry. :)

I'm also planning to switch my Homeassistant Docker container over to the LXC Homeassistant Core version, because I was never really satisfied with the Docker container.

 

Please also let me know about your findings with the USB passthrough <- this is a thing on my to do list but as you can imagine my to do list is pretty long... :D

