
[Plugin] LXC Plugin



I've installed CasaOS inside an Ubuntu container (to test out Docker containers without polluting Unraid) and it is able to see all my drives.

 

lsblk

 

lists them all out.

 

Do I have to be careful here? Can I somehow hide my drives from the container?

8 hours ago, L0k1 said:

Do I have to be careful here?

Nope.

 

You can see them but can't access them.

This is the default behavior of LXC: because it uses the host kernel, the block devices are visible, but LXC prevents the use of any disk or device that is outside the container, or more precisely, not mounted into it in some way.

(BTW: the same applies if you run lsblk inside a Docker container; of course, lsblk must be installed in the container. ;) )
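To illustrate, a rough sketch from the host (the container name `ubuntu-ct` and device `/dev/sdb1` are placeholders; the second command is expected to fail):

```shell
# run lsblk inside the container: the host's disks are listed
lxc-attach -n ubuntu-ct -- lsblk

# but actually mounting one from inside the container is blocked by LXC
# (expect a "permission denied" or similar error)
lxc-attach -n ubuntu-ct -- mount /dev/sdb1 /mnt
```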

 

Hope that helps. ;)

10 hours ago, ich777 said:

Hope that helps. ;)

 

Certainly does. Thanks.

 

This seems like a perfect middle ground between VMs and Docker; I can see myself using it for quite a few things, so thanks for making it. The snapshotting and backup functionality are especially useful.

  • 4 weeks later...

This is what I get when I try to install AMP on Debian via LXC:

root@AMP:/# bash <(wget -qO- getamp.sh)
Please wait while GetAMP examines your system and network configuration...
 - Checking installed packages...
 - Checking environment...
 - Checking network configuration...
 - Detecting network type...
System locale is not a UTF-8 compatible locale, it is currently 
Please update your system locale to a UTF-8 one and reboot before running this script.
You can do this by running 'dpkg-reconfigure locales && locale-gen' as root and making sure a UTF-8 locale for your region/language is selected.
On some systems it may be 'update-locale LANG=en_GB.utf8' instead.
It may be necessary to log out and log in again for locale changes to take effect.

Who can help me? I already tried to set regions etc. Is there an easier way, maybe a single command that installs the defaults? D:
Heeelp!
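For anyone hitting the same message: a hedged sketch of the locale fix the installer itself suggests, for a Debian container (en_US.UTF-8 is just an example; pick the locale for your region):

```shell
# run as root inside the container: enable a UTF-8 locale in /etc/locale.gen
sed -i 's/^# *en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen
locale-gen

# make it the system default, then restart the container (or log out and in)
update-locale LANG=en_US.UTF-8
```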

58 minutes ago, Snolte said:

Who can help me?

  1. Open an Unraid terminal and execute this:
    wget -O /tmp/lxc_container_template.xml https://github.com/ich777/unraid_lxc_amp/raw/main/lxc_container_template.xml
  2. Close the terminal
  3. Visit this site on your server:
    http://YOURSERVERIP/LXCAddTemplate
    (please change "YOURSERVERIP" to the actual IP of your server)
  4. Configure it to your preference
  5. Click Create
  6. Wait for the window to say DONE (this can take some time depending on your connection speed)
  7. Click on the container icon and select WebUI
  • 2 weeks later...
On 11/27/2023 at 5:45 PM, ich777 said:

So is it working for you? I saw that you've quoted me but removed it.

Thank you very much, everything works excellently. :)
Is there a collection of templates somewhere? I am still searching for a solution for NextCloud as LXC.
Greetings

Just now, Snolte said:

Is there a collection of templates somewhere? I am still searching for a solution for NextCloud as LXC.

Not yet, this will take some additional time... :)

 

However, Nextcloud should be pretty simple; you can basically follow any guide that explains how to install Nextcloud on Linux (preferably with MariaDB and Redis)
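As a rough outline of what such a guide typically boils down to on a Debian container (package names are typical for current Debian releases; treat this as a sketch, not a verified recipe):

```shell
apt update
# web server, database, cache, and the PHP modules Nextcloud commonly needs
apt install -y apache2 mariadb-server redis-server bzip2 \
    php php-mysql php-redis php-xml php-curl php-zip php-gd php-mbstring php-intl

# download Nextcloud into the web root, then finish the setup in the browser
wget https://download.nextcloud.com/server/releases/latest.tar.bz2
tar -xjf latest.tar.bz2 -C /var/www/
```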

  • 2 weeks later...
8 minutes ago, emrepolat7 said:

The WebUI URL is indeed useful, but currently, it only supports a single link. Could you consider implementing support for multiple links?

Can you please explain what the exact use case for this would be?

 

I'm not really tempted to implement this because the dropdown would just get longer and longer and longer...

This can also get pretty complicated quickly in terms of the plugin, and it introduces other concerns (at least for me).

 

Why not install something like Homarr in the container, link that to the WebUI button, and use it to get to your services (there may also be better alternatives out there than Homarr)?


Thank you for your prompt reply.

 

I understand your concerns about the dropdown becoming unwieldy with multiple links, and the potential complexities it may introduce to the plugin. I appreciate your perspective on this matter.

 

I thought it would be easy to implement, but from what I've seen, it seems that the request is not as easily implementable as I initially thought.

 

The reason I was looking for support for multiple links is to streamline access to various services that I have installed with docker compose. 

 

Thank you for your suggestion and insights. If you have any further recommendations or considerations, I'd be grateful to hear them.

18 minutes ago, emrepolat7 said:

I thought it would be easy to implement, but from what I've seen, it seems that the request is not as easily implementable as I initially thought.

Yes, it's a bit hard to implement. It would be more beneficial to have a configuration page for LXC like the one that exists for Docker, but in my opinion that is a bit overkill: an LXC container shouldn't be changed after you've set it up, and the configuration can get messed up really quickly, leaving you with a non-functional container if you change the wrong settings.

 

My concern is that it causes more confusion than it solves; I hope you understand this...
 

18 minutes ago, emrepolat7 said:

Thank you for your suggestion and insights. If you have any further recommendations or considerations, I'd be grateful to hear them.

You could maybe use homarr, overseer, ... even on Unraid: set it up and link the services there so that you have one place to quickly access all your services without even needing to open Unraid.

 

EDIT: Also a member recommended: Homepage (you'll find that also in the CA App).

 

Hope that makes sense.

  • 4 weeks later...

Getting an error after editing the config to allow nesting:

 

 

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template: --dist debian --release bullseye --arch amd64
# Template script checksum (SHA-1): 78b012f582aaa2d12f0c70cc47e910e9ad9be619
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)


# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.arch = linux64

# Container specific configuration
lxc.rootfs.path = dir:/mnt/disks/DATA/lxc/DebianLXC/rootfs
lxc.uts.name = DebianLXC

# Network configuration
lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0

lxc.net.0.hwaddr=52:54:00:99:D2:F0
lxc.start.auto=0

 

Reason for allowing nesting: I'm trying to install snap/LXD in Debian but keep getting:

 

error: system does not fully support snapd: cannot mount squashfs image using "squashfs": mount:

 

I read somewhere that I need to enable nesting and allow FUSE. Not sure what all this means, to be honest. Any ideas?


Edited by mikeyosm
37 minutes ago, mikeyosm said:

Reason for allowing nesting

Nesting is usually not necessary in privileged containers, especially when you are just trying to install snap.

 

I will look into that, but I think you must not drop certain caps.

 

Do I get that right that you want to install LXD in LXC? May I ask why?

15 minutes ago, ich777 said:

Nesting is usually not necessary in privileged containers, especially when you are just trying to install snap.

I will look into that, but I think you must not drop certain caps.

Do I get that right that you want to install LXD in LXC? May I ask why?

Yeah, I am playing with a VDI broker called Ravada. My plan was to set up a Deb/Ub LXC/LXD, install Ravada, create a Windows VM and set up a VDI pool.


@ich777   I want to thank you for this very nice and useful plugin / functionality.   I am using it to have a Debian with ssh + rsync as backup server on my unraid system. 👍

 

I understood that "unprivileged" containers are not (yet) supported, or are they? I can imagine that mapping the UID and GID correctly to the Unraid versions adds a new dimension of problems.

 

Now my real question: how do you install an LXC container from an existing tar file?

Is it as simple as using the following in the lxc directory?

lxc-create -t local -n NextCloudPi /mnt/user/isos/NextCloudPi_LXC_x86_v1.53.0.tar.gz

I will have to study this all a bit more.
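For what it's worth, the stock local template usually expects separate metadata and rootfs archives (as produced by lxc-download) rather than one combined tarball, roughly like this (the file names are placeholders):

```shell
# the local template takes its archives after the "--" separator
lxc-create -n NextCloudPi -t local -- \
    --metadata meta.tar.xz --fstree rootfs.tar.xz
```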

 

Again many thanks for this very useful plugin!

3 hours ago, hansan said:

Now my real question: how do you install an LXC container from an existing tar file?

Can you give me a link to that .tar.gz file so that I can see what's in there?

 

3 hours ago, hansan said:

I understood that "unprivileged" containers are not (yet) supported, or are they? I can imagine that mapping the UID and GID correctly to the Unraid versions adds a new dimension of problems.

I haven't looked into that much because other things are more important currently but yes, that adds another layer of complexity.

15 hours ago, mikeyosm said:

Yeah, I am playing with a VDI broker called Ravada. My plan was to set up a Deb/Ub LXC/LXD, install Ravada, create a Windows VM and set up a VDI pool.

Add this to your config:

lxc.mount.auto=cgroup:rw
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file,optional

 

In the container install these packages:

snapd squashfuse fuse

 

After that you should be able to install/run snaps:
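A minimal check after restarting the container could look like this (hello-world is just a small test snap):

```shell
apt install -y snapd squashfuse fuse
# installing and running a small snap verifies that snapd works
snap install hello-world
hello-world
```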


25 minutes ago, ich777 said:

Can you give me a link to that .tar.gz file so that I can see what's in there?

 

Thanks for being willing to have a look into this.  The images come from here: Github with NextcloudPi releases

 

The link to the image that I think I want to use is:

https://github.com/nextcloud/nextcloudpi/releases/download/v1.53.0/NextCloudPi_LXC_x86_v1.53.0.tar.gz

 

I am aware that there are also different docker containers for Nextcloud, but I think that LXC will give a bit better performance with the different daemons/services needed, and I have had good experiences with NextcloudPi running natively on an Odroid HC2.

(It is my plan to replace all my "low" powered servers by one big unraid based server.)

13 minutes ago, hansan said:

I am aware that there are also different docker containers for Nextcloud, but I think that LXC will give a bit better performance

I don't think so; personally I run Nextcloud directly on SWAG and it runs very, very fast.

 

15 minutes ago, hansan said:

The link to the image that I think I want to use is:

I would recommend doing the following:

  1. Create a LXC container with the name NextcloudPi and don't start it
  2. Go to the rootfs directory of the created container and remove everything
  3. Extract the tar archive directly into the rootfs (I've downloaded it directly to /root; please note that you have to be in the directory where you want to extract the files in the following example)
  4. Start the container
  5. Connect with the IP address from the LXC container through your browser to NextcloudPi (ignore the errors about self-signed certificates)

 

 

In my opinion you would be better off using Docker or even installing it yourself in LXC, but that is just my opinion. ;)
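The steps above can be sketched on the command line roughly like this (the LXC path and archive location are assumptions; adjust them to your setup, and note the container must be stopped):

```shell
# step 2: empty the rootfs of the freshly created, stopped container
cd /mnt/cache/lxc/NextCloudPi/rootfs
rm -rf ./*

# step 3: extract the archive directly into the rootfs
tar -xf /root/NextCloudPi_LXC_x86_v1.53.0.tar.gz -C .

# step 4: start the container
lxc-start -n NextCloudPi
```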


I tried your suggestion and it works well. Overwriting the rootfs is enough to get this Debian-based NextcloudPi LXC container working. Thanks for the advice and for trying it for me.

 

I got the tip to use an LXC container as the preferred method from here: https://help.nextcloud.com/t/getting-started-with-nextcloudpi-on-proxmox/113487   But the difference is probably not big, mainly depending on details such as use case and running environment.

 

The NextCloudPi LXC container is giving a "problem": it wants to put the data on a btrfs or zfs file system and does not like the FUSE or XFS file system of the Unraid array. I can override this complaint, but I will first check if a Docker container works better.

2 hours ago, hansan said:

I got the tip to use an LXC container as the preferred method from here

But this method is not listed as recommended…

 

2 hours ago, hansan said:

The NextCloudPi LXC container is giving a "problem": it wants to put the data on a btrfs or zfs file system and does not like the FUSE or XFS

May I ask where you've set your main path for LXC?

You will run into problems when you've set it to /mnt/user/… and if it's located on the Array you will get horrible performance from your containers.

However, you could mount a directory into the container which is on a BTRFS/ZFS filesystem, but as said above I would recommend putting the main path for LXC on such a disk with BTRFS/ZFS, because you get the benefit of filesystem-native snapshots if you configure the filesystem type correctly in the LXC settings.

  • 2 weeks later...

I am sorry for my late response.  I was rather busy at work.

 

On 1/14/2024 at 12:19 PM, ich777 said:

But this method is not listed as recommended…

It is seen as the "best" method in terms of resources, but it doesn't work (yet) with Proxmox. But that doesn't stop me. 😃

 

On 1/14/2024 at 12:19 PM, ich777 said:

May I ask where you've set your main path for LXC?

You will run into problems when you've set it to /mnt/user/… and if it's located on the Array you will get horrible performance from your containers.

However, you could mount a directory into the container which is on a BTRFS/ZFS filesystem, but as said above I would recommend putting the main path for LXC on such a disk with BTRFS/ZFS, because you get the benefit of filesystem-native snapshots if you configure the filesystem type correctly in the LXC settings.

The main LXC container is on my cache btrfs file system, but that would not be big enough for all the files I store in Nextcloud.

That data is on a disk in my main array and I map it into the container with:

lxc.mount.entry = /mnt/disk1/NextCloudData  /mnt/cache/lxc/NextCloudPi2/rootfs/mnt/disk none bind 0 0

With this I get around the FUSE file system that lives on the /mnt/user "structure".

 

I had to modify some of the handy NextCloudPi scripts that check whether someone tries to use FAT or other unsuited filesystems, so that they accept the XFS file system of the main array.

 

It looks like it all works. Performance is OK, but I don't have a big "deployment" at home either. I don't know if I will run into problems at a later time (and lose all my data 😱).

 

Anyhow, I like your plugin very much. This type of container matches much better with my many years of Linux experience than the Docker ones; those I will leave to the young whippersnappers 😀 and just use as "apps" for a single function.


Is it possible to limit the memory usage of the LXC container? Due to a configuration error on my side, a process in my container ate all my memory and brought my Unraid server down. 😧 I am looking for a way to prevent this from happening again.

 

I have added the following to the config file for the container:

lxc.cgroup.memory.limit_in_bytes = 8192M

I am not 100% certain if this is really helping, because most tools like "free" and "htop" are still reporting the memory status of the host.

 

Is there a more reliable way to limit the memory usage, or should this work?

 

I have read that lxcfs can help to separate the container more from the host so that "free" and "htop" report only the container memory.

7 minutes ago, hansan said:

Is it possible to limit the memory usage of the LXC container?

Sure, please read this post:

 

7 minutes ago, hansan said:

I am not 100% certain if this is really helping, because most tools like "free" and "htop" are still reporting the memory status of the host.

Please read the full post above, since I explain there how it works.

 

7 minutes ago, hansan said:
cgroup

Please also note that Unraid uses cgroupv2.

 

If you change your config as I explained in that post, it will work, and the container will crash/restart instead of the host when the set RAM limit is reached.
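For reference, a hedged sketch of what a cgroup v2 memory limit could look like in the container config (the 8G value is just an example; see the post referenced above for the exact settings):

```
# cgroupv2 keys map directly to cgroup2 interface files
lxc.cgroup2.memory.max = 8G
# optionally also cap swap so the container cannot spill over
lxc.cgroup2.memory.swap.max = 0
```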

