
[Plugin] LXC Plugin


Recommended Posts

2 hours ago, juanrodgil said:

Something weird is happening to me with this plugin.

I installed the LXC plugin, went to Settings, changed the directory to /mnt/cache/lxc/, and updated.

Then I go to the LXC tab and try to create a new container based on Arch Linux. It seems to work, but when I press the "Done" button after it finishes, the container disappears.

 

In /mnt/cache/lxc/ I see that a folder named "cache" was created with the files from the Arch Linux template, and while the server was creating the container another folder, "arch-multimedia", was created, but ... when it finishes only the "cache" folder is left.

 

In the window I see this output:

Creating container, please wait until the DONE button is displayed!

Using image from local cache

Unpacking the rootfs



To connect to the console from the container, start the container and select Console from the context menu.

If you want to connect to the container console from the Unraid terminal, start the container and type in:

lxc-attach arch-multimedia

It is recommended to attach to the corresponding shell by typing in for example:

lxc-attach arch-multimedia /bin/bash

 

 

In the logs I see that the container was created:

Jul  7 12:16:04 hades-raid root: LXC: Creating container arch-multimedia
Jul  7 12:16:18 hades-raid root: LXC: Container arch-multimedia created

 

I tried other templates, several Ubuntu versions, and Debian, but in all cases the folder with the LXC container is removed and I don't see any error in the logs.

 

This happens for me too, on Unraid 6.12.2. It worked great on the latest RC, though.

 

I can see that the script is creating the LXC container with all its files in the filesystem, but then it deletes it again automatically.

Edited by Aumoenoav
Link to comment
2 hours ago, juanrodgil said:

Then I go to the LXC tab and try to create a new container based on Arch Linux. It seems to work, but when I press the "Done" button after it finishes, the container disappears.

36 minutes ago, Aumoenoav said:

I can see that the script is creating the LXC container with all its files in the filesystem, but then it deletes it again automatically.

It seems that something is wrong with the newest LXC container builds from today; I've already created an issue on GitHub over here: Click

 

There is nothing I can do about that; this is something that Linux Containers has to fix.

 

BTW, the Alpine edge image was working fine just a few hours ago.

Link to comment
1 hour ago, Aumoenoav said:

the alpine image sticks _sometimes_

What you are describing is not possible, since if it works once it will keep working.

The image index is downloaded once and is valid for about 7 days, I think; during those 7 days it is pulled from the cache, so if it worked once it will keep working for those 7 days (or at least for however long the index is valid, I'm not sure about the 7 days).

 

1 hour ago, Aumoenoav said:

but other images don't.

Yes, look at the GitHub issue linked above; two other users also reported that it's not working with images from today.

 

I've also reported that on their forums here.

 

If you have a GitHub account, maybe also make a short post here.

Link to comment
15 hours ago, ich777 said:

It seems that something is wrong with the newest LXC container builds from today; I've already created an issue on GitHub over here: Click

 

There is nothing I can do about that; this is something that Linux Containers has to fix.

 

BTW, the Alpine edge image was working fine just a few hours ago.

Just to bump this: as of 115EST, installation of Alpine 3.18 was functional. Thank god I read through a bunch of comments; I was losing my mind. Ubuntu, Fedora, and Debian creations gave the impression of success but failed. I didn't try any others.

 

Thanks,

Edited by darthkielbasa
Link to comment
1 minute ago, darthkielbasa said:

Just to bump this: as of 115EST, installation of Alpine 3.18 was functional. Thank god I read through a bunch of comments; I was losing my mind. Ubuntu, Fedora, and Debian creations gave the impression of success but failed. I didn't try any others.

Yes, in my opinion this is a bit sad, since nobody has replied yet, neither on GitHub nor on their forums, but of course it's the weekend...

 

Everything that was built before 2023-07-07 should work fine: Click

Link to comment
10 minutes ago, ich777 said:

Yes, in my opinion this is a bit sad, since nobody has replied yet, neither on GitHub nor on their forums, but of course it's the weekend...

 

Everything that was built before 2023-07-07 should work fine: Click

Thank you, sir. This is my first adventure into LXC containers, so I automatically assumed I had F'd something up, as per usual.

 

I'll investigate the syntax and whatnot for installing from the terminal. Thanks for creating and maintaining the plugin. 

Link to comment
29 minutes ago, sluggathor said:

normal lxc commands obviously don't work here.

You are talking about LXD commands and not about LXC commands.
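
For reference, here is a rough mapping from LXD-style commands to the plain LXC tools this plugin uses (a generic sketch from standard LXC usage, not plugin-specific; MyContainer is just a placeholder name):

```
lxc-ls --fancy                      # list containers with state and IPs (LXD: lxc list)
lxc-start MyContainer               # start a container in the background (LXD: lxc start MyContainer)
lxc-attach MyContainer /bin/bash    # get a shell inside it (LXD: lxc exec MyContainer bash)
lxc-stop MyContainer                # shut it down (LXD: lxc stop MyContainer)
lxc-info MyContainer                # show state, PID and IP (LXD: lxc info MyContainer)
```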

 

I don't want to use LXD because it needs Python and, of course, because it has now been ripped out of the hands of the community.

 

If you need anything feel free to post here.

Link to comment

I'm trying to create and start a container but it is failing:

 

root@trantor:/mnt/disks/data# lxc-start -F oracle
lxc-start: oracle: ../src/lxc/conf.c: lxc_setup_console: 2156 No space left on device - Failed to allocate console from container's devpts instance
lxc-start: oracle: ../src/lxc/conf.c: lxc_setup: 4471 Failed to setup console
lxc-start: oracle: ../src/lxc/start.c: do_start: 1272 Failed to setup container "oracle"
lxc-start: oracle: ../src/lxc/sync.c: sync_wait: 34 An error occurred in another process (expected sequence number 4)
lxc-start: oracle: ../src/lxc/start.c: __lxc_start: 2107 Failed to spawn container "oracle"
lxc-start: oracle: ../src/lxc/tools/lxc_start.c: main: 306 The container failed to start
lxc-start: oracle: ../src/lxc/tools/lxc_start.c: main: 311 Additional information can be obtained by setting the --logfile and --logpriority options

 

and df -H shows:

 

Filesystem           Size  Used Avail Use% Mounted on
rootfs                17G  1.3G   16G   8% /
tmpfs                 34M  5.1M   29M  15% /run
/dev/sda1             16G  800M   15G   6% /boot
overlay               17G  1.3G   16G   8% /lib/firmware
overlay               17G  1.3G   16G   8% /lib/modules
devtmpfs             8.4M     0  8.4M   0% /dev
tmpfs                 17G     0   17G   0% /dev/shm
cgroup_root          8.4M     0  8.4M   0% /sys/fs/cgroup
tmpfs                135M   15M  120M  11% /var/log
tmpfs                1.1M     0  1.1M   0% /mnt/disks
tmpfs                1.1M     0  1.1M   0% /mnt/remotes
tmpfs                1.1M     0  1.1M   0% /mnt/addons
tmpfs                1.1M     0  1.1M   0% /mnt/rootshare
/dev/sdx             800G  451G  348G  57% /mnt/disks/scratch
/dev/md1              12T   12T   69G 100% /mnt/disk1
/dev/md2             8.0T  8.0T   35G 100% /mnt/disk2
/dev/md3              12T   12T  161G  99% /mnt/disk3
/dev/md4              12T   12T   41G 100% /mnt/disk4
/dev/md5             8.0T  7.9T  116G  99% /mnt/disk5
/dev/md6              12T   11T  1.8T  86% /mnt/disk6
/dev/md7              12T   12T   11G 100% /mnt/disk7
/dev/md8              10T  9.9T  122G  99% /mnt/disk8
/dev/md9             8.0T  7.9T  106G  99% /mnt/disk9
/dev/md10            8.0T  8.0T   36G 100% /mnt/disk10
/dev/md11             10T  9.7T  351G  97% /mnt/disk11
/dev/md12            8.0T  7.9T  166G  98% /mnt/disk12
/dev/md13             10T  9.9T  121G  99% /mnt/disk13
/dev/md14             12T   12T   28G 100% /mnt/disk14
/dev/md15             12T  2.1T   10T  17% /mnt/disk15
/dev/sdb1            250G  119G  132G  48% /mnt/disks/data
/dev/sdp1            4.0T  963G  3.1T  25% /mnt/disks/backup
/dev/sds1            5.0T  2.0T  3.1T  40% /mnt/disks/transfer
/dev/sdu1            4.0T  161G  3.9T   5% /mnt/disks/nvr
/mnt/disks/data      250G  119G  132G  48% /share-ro/data
/mnt/disks/scratch   800G  451G  348G  57% /share-ro/scratch
/mnt/disks/transfer  5.0T  2.0T  3.1T  40% /share-ro/transfer
/mnt/disks/backup    4.0T  963G  3.1T  25% /share-ro/backup
/mnt/disk2           8.0T  8.0T   35G 100% /share-ro/17
/mnt/disk4            12T   12T   41G 100% /share-ro/29
/mnt/disk8            10T  9.9T  122G  99% /share-ro/a01
/mnt/disk11           10T  9.7T  351G  97% /share-ro/a02
/mnt/disk1            12T   12T   69G 100% /share-ro/a03
/mnt/disk5           8.0T  7.9T  116G  99% /share-ro/23
/mnt/disk6            12T   11T  1.8T  86% /share-ro/27
/mnt/disk7            12T   12T   11G 100% /share-ro/a05
/mnt/disk9           8.0T  7.9T  106G  99% /share-ro/30
/mnt/disk13           10T  9.9T  121G  99% /share-ro/a01
/mnt/disk12          8.0T  7.9T  166G  98% /share-ro/31
/mnt/disk15           12T  2.1T   10T  17% /share-ro/a07
/mnt/disk10          8.0T  8.0T   36G 100% /share-ro/20
/mnt/disk3            12T   12T  161G  99% /share-ro/28
/mnt/disk14           12T   12T   28G 100% /share-ro/a06
/dev/loop3           1.1G  5.0M  948M   1% /etc/libvirt
/dev/loop2            43G   19G   24G  44% /var/lib/docker

 

Link to comment
3 hours ago, srirams said:

Attached!

I would strongly recommend that you upgrade to Unraid 6.12.2; since you are not on 6.12.x, I can't tell from the Diagnostics whether everything in terms of LXC is working correctly.

 

Going through the Diagnostics I noticed that you also have some commands in the go file; are all of them needed? Especially the ones with mount and chmod?

I also noticed in the syslog that something is constantly spinning up the disks, which may shorten the lifespan of the drives; maybe that's also worth investigating.

 

Also, it seems that you have installed LXC on a UD disk. What filesystem is this disk formatted with? Are you sure that it is not mounted as read-only? Because from here it looks like it is.
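
A quick way to check this (a generic sketch, using the /mnt/disks/data mount point from your df output):

```
# show the filesystem type and mount options for the disk that holds LXC
findmnt -o TARGET,FSTYPE,OPTIONS /mnt/disks/data

# "ro" in the OPTIONS column means the disk is mounted read-only;
# the kernel log will usually say why it was remounted read-only
dmesg | grep -iE 'remount|read-only'
```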

 

I have now tried to install Oracle 9 on my machine, and it works perfectly fine and also starts fine.

Link to comment
  • 3 weeks later...

I started using LXC containers. It's fast!

But how do we back up the LXC containers? Maybe once a day or weekly?

Because they're stored on the SSD, not the array.

 

I see the snapshot function. Is that the only way to do it?

Also, if the first LXC container fails somehow, do we just stop it and fire up the snapshot backup? All settings should be the same, right?

Edited by safiedin
Link to comment
14 hours ago, safiedin said:

I see the snapshot function. Is that the only way to do it?

For now yes.

This is also only a combination of commands executed in the background.

 

I might add a backup script that you can fire from a User Script, maybe with xz compression, and another script that will allow you to take snapshots from a User Script.

 

As you can read in the first post, it's still in development and I currently don't have much spare time, but these are all things which are on the "features to do" list.

 

14 hours ago, safiedin said:

Because they're stored on the SSD, not the array.

But you can also put LXC on a mirrored pool.

 

14 hours ago, safiedin said:

Also, if the first LXC container fails somehow, do we just stop it and fire up the snapshot backup? All settings should be the same, right?

I don't fully understand… You can create a "new" container from a snapshot.

Just for clarification: the snapshot function saves the whole container, including the configuration.

 

 

EDIT: With the next plugin release that I push, I will integrate a command that is available system-wide and allows you to take snapshots of containers, where you can specify how many old snapshots you want to keep; this can then be easily integrated into a User Script.

 

Please update the plugin to the latest version 2023.07.29, and you will have a new command:

lxc-autosnapshot

 

This command lets you snapshot a container and also specify how many snapshots you want to keep. The usage would be as follows for an LXC container named DebianLXC where you want to keep the last 5 snapshots:

lxc-autosnapshot DebianLXC 5

This can easily be integrated into a User Script if you want to have scheduled snapshots (the output will be written to the syslog).
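
A minimal User Scripts sketch for scheduled snapshots (DebianLXC is just the example name from above; the schedule itself is set in the User Scripts plugin):

```
#!/bin/bash
# snapshot the container and keep only the 5 most recent snapshots;
# the output goes to the syslog as described above
lxc-autosnapshot DebianLXC 5
```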

Link to comment

Good day, I've got Nextcloud All-in-One in an LXC container. I'm trying to set up Nextcloud backups and failing.

According to their docs I need to do this:

 

```

Failure of the backup container in LXC containers

If you are running AIO in a LXC container, you need to make sure that FUSE is enabled in the LXC container settings. Otherwise the backup container will not be able to start as FUSE is required for it to work.

```

 

Is anyone aware of what I'd need to add to the conf file to 'enable FUSE'?

 

Thank you

Link to comment
21 minutes ago, juan11perez said:

Is anyone aware of what command I'd need to add to the conf file to 'enable FUSE' ?

Have you tried the following yet:
https://github.com/containers/podman/issues/6961#issuecomment-657929781

 

Please note that you may have to create the path /dev/fuse inside the container.
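
For reference, exposing /dev/fuse to an LXC container usually comes down to config entries along these lines (a generic sketch of standard LXC configuration, not copied from the linked comment; use the devices.allow line that matches your cgroup version):

```
# allow the fuse character device (major 10, minor 229) inside the container
lxc.cgroup2.devices.allow = c 10:229 rwm    # cgroup v2 hosts (Unraid 6.12+)
# lxc.cgroup.devices.allow = c 10:229 rwm   # cgroup v1 hosts

# bind-mount the host's /dev/fuse into the container, creating the node if it is missing
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file 0 0
```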

 

EDIT: Are you on the latest LXC plugin version yet? I've implemented a few nifty scripts (lxc-autobackup & lxc-autosnapshot) which can easily be run from the command line.

If you configure the global backup config in the plugin settings itself, you will even see the backups on the LXC page. If you have any further questions, let me know.

Link to comment

Thank you very much for the prompt reply. 

I have been using the snapshots manually and it works fine. The reason I looked at the Nextcloud 'native' backup is that when I did have to recover the container from a snapshot, I had to change the container name to xxxnew or something else.

 

I don't know if I was doing it wrong or if there's a way to sort of replace the old container with the recovery container to keep my OCD at bay.

 

I have been making backups with rsync, but what I found was that if I delete the container from LXC and then restore it from the backup via rsync, Nextcloud just doesn't work. I think it fails to bring back some files or something like that.

 

I just updated to the latest plugin version (thank you) and I configured global backups. Would restoring from these backups prevent the issue I mentioned when using rsync?

 

Thank you again

 

Link to comment
39 minutes ago, juan11perez said:

I have been making backups with rsync, but what I found was that if I delete the container from LXC and then restore it from the backup via rsync, Nextcloud just doesn't work. I think it fails to bring back some files or something like that.

Hmmm, this is really strange.

 

39 minutes ago, juan11perez said:

I just updated to the latest plugin version (thank you) and I configured global backups. Would restoring from these backups prevent the issue I mentioned when using rsync?

I would suggest that you try it with the existing container; you can even replace the container if you specify the exact same name.

 

A word of warning: if you are using a Chromium-based browser (Edge, Chrome, ...) and you are creating a backup from the GUI, please leave the tab in the foreground, because if you switch to another tab Chrome will pause it, and if the backup finishes in the meantime and you come back to the tab after some time it will never display the DONE button <- this will not happen in Firefox and there is nothing I can do about it.

 

Please also use the settings with caution: if you use compression ratio 9 (the backups will of course be well compressed and small in terms of size), it will take a huge amount of RAM to create the backup, about 12GB to be precise.

 

If you configure it to use all cores on your server, the WebGUI can get quite slow and unresponsive, because it is then using all of your cores at full blast.

 

You can also set Use Snapshot to Yes: this will take a temporary snapshot of the container, start the container again right after the snapshot finishes, back up the snapshot, and finally delete the snapshot <- this is handy if you have containers that need to be back up and running quickly and you want to use a high compression ratio.

 

If you want to take a backup from the command line when the global configuration is enabled, do this:

lxc-autobackup --name=<CONTAINERNAME>

 

when global configuration is disabled:

lxc-autobackup --name=<CONTAINERNAME> --path=<PATHTOBACKUPLOCATION> --compression=9 --threads=all

 

when global configuration is disabled with a temporary snapshot:

lxc-autobackup --from-snapshot --name=<CONTAINERNAME> --path=<PATHTOBACKUPLOCATION> --compression=9 --threads=all

 

All of the above commands can also be used in User Scripts to schedule backups (or even snapshots with lxc-autosnapshot).
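
For example, a minimal User Scripts sketch for a scheduled backup (DebianLXC and the backup path are placeholders; the --path/--compression/--threads options are only needed when the global configuration is disabled):

```
#!/bin/bash
# with the global backup configuration enabled this is all that is needed:
lxc-autobackup --name=DebianLXC

# without the global configuration, specify everything explicitly:
# lxc-autobackup --name=DebianLXC --path=/mnt/user/backups/lxc --compression=9 --threads=all
```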

 

 

To restore a container from the command line when the global configuration is enabled, do this:

lxc-autobackup --restore --name=<CONTAINERNAME> --newname=<NEWCONTAINERNAME>

 

when global configuration is disabled:

lxc-autobackup --restore --name=<CONTAINERNAME> --path=<PATHTOBACKUPLOCATION> --newname=<NEWCONTAINERNAME>

 

For a restore you can also use the existing container name to overwrite it <- this is specific to this script that I've written.

 

Please feel free to try it with an existing container and let me know how it goes. Again, if you have configured the global backup settings, you can do everything from the GUI too.

Maybe try it with your Nextcloud container and restore it to a different name (this can also be done from the GUI if the global backup settings are enabled).

 

It should even be possible to specify a remotely mounted share through Unassigned Devices for your backups.

 

 

If you find a bug or anything that doesn't work please let me know.

 

Hope that helps

Link to comment

I have problems starting LXC since I downgraded from 6.12 back to 6.11; LXC containers won't start anymore.

 

This is the message I get:

lxc-start -F DNS
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted
[!!!!!!] Failed to mount API filesystems.
Exiting PID 1...

 

In the downgrade notes I found the following:

Quote

If you revert back from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading. This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.

 

Docker is working fine after a force update; the command docker info shows:

Cgroup Driver: cgroupfs
 Cgroup Version: 1
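
Docker reports cgroup v1 there; a quick way to confirm what the kernel itself has mounted (a generic check, not Docker output):

```
# prints "cgroup2fs" on a cgroup v2 (unified) host and "tmpfs" on a cgroup v1/hybrid host
stat -fc %T /sys/fs/cgroup/
```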

 

Not sure why LXC won't start anymore. Any suggestions?

Link to comment
2 minutes ago, SirLupus said:

Not sure why LXC won't start anymore.

Which containers are we talking about?

 

Have you read the note about cgroupv2 on the first page?

 

Why did you even downgrade to 6.11.5? This won't solve issues, and I have to mark some of my plugins as incompatible with Unraid versions below 6.12.x.

Link to comment
