Joly0
Posts posted by Joly0
-
-
Just a technical question: does the plugin constantly check whether the lxc folder exists and create it if not? If so, I might have found a bug. I had LXC disabled and my cache drive empty (the drive where lxc would store its folder, /mnt/cache/lxc). Even though I had deleted everything from my cache drive, this folder structure randomly appeared on my cache pool: /mnt/cache/lxc/cache
This made me unable to reformat my cache drive (which I was trying to do). It took me some time to notice the folder on the cache drive.
-
16 minutes ago, Niklas said:
?
-
4 minutes ago, Niklas said:
Did you remove it like my instructions said? You can't just delete the docker folder manually. It will not remove the datasets. Those are two different things. If you deleted the docker folder only, you'll have to remove all datasets manually. It is mentioned here before too, so read back some posts.
Yes, I stopped docker, deleted the docker directory through the docker settings, and removed the folder in the system share afterward as well.
-
19 minutes ago, JorgeB said:
Yes, as long as docker is no longer using those.
And how can I efficiently remove them? There are hundreds of them. I can't go through them one by one and delete them manually.
-
9 minutes ago, JorgeB said:
These will be created if you used docker in folder mode; the only option is to delete them manually.
Is it safe to delete them? Can I somehow "look into them" to make sure there is no important data in them?
And how would I delete them? There are hundreds of them.
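A hedged sketch of how bulk deletion could look with the zfs command line, assuming the leftover image-layer datasets live under a parent dataset such as cache/system/docker (substitute your own pool and path, and dry-run the listing first):

```shell
# Dry run: list every child dataset under the assumed parent, without the parent itself.
zfs list -H -o name -r cache/system/docker | tail -n +2

# To peek inside one dataset before deleting, look up its mountpoint and list it:
# zfs get -H -o value mountpoint cache/system/docker/<hash>

# Destroy all children in one go (keeps the parent dataset itself):
zfs list -H -o name -r cache/system/docker | tail -n +2 | xargs -n 1 zfs destroy -r
```

The `tail -n +2` drops the parent from the recursive listing so only the child datasets are fed to `zfs destroy`.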
-
Hey guys, I have recreated my btrfs cache pool as zfs and set everything back up, but I have noticed that I have tons of datasets randomly being created with random hash names.
I have already tried deleting and recreating my docker directory on that cache pool, but that didn't help. So where are those datasets coming from, and how can I remove them? It's quite annoying with the ZFS Master plugin, as I can't hide them.
-
Hey guys, as I have written above already: I have a ton of datasets or snapshots (I am not sure which) and I don't know where they come from.
Others said it's because of the docker folder, but it isn't; I removed it and they are still there...
-
2 hours ago, Niklas said:
Running 6.12.x?
You would have to exclude /cache/ and that's usually a no go for obvious reasons.
To fix it you will probably have to create a new share with the name docker and switch over to /mnt/user/docker in docker settings, like this:
https://docs.unraid.net/unraid-os/release-notes/6.12.0/#docker
Before doing that, you need to do this:
"Bring up the Docker settings page and set Enable docker to No and click Apply. After docker has shut down click the Delete directory checkbox and then click Delete. This will result in deleting not only the various files and directories, but also all layers stored as datasets."
You can then go to the Apps-tab -> Previous Apps, select and re-install all of your containers with retained config.
Create flash backup before anything.
Edit
After all this, you should be able to exclude /docker/.*
Thank you, that's a great tip. I will do that. Though isn't it possible to copy everything from system/docker/ over to docker/ without losing or having to recreate all the docker containers?
-
-
1 minute ago, Niklas said:
Try like
/docker/.*
or
/system/docker/.* maybe. Hm
Nope, neither of those helped.
-
Just now, Niklas said:
Seen people use the wrong directory before when using the docker dir. What's the path set in Settings - Docker? The "Docker directory:"-path.
-
-
Is it safe to change the backing storage type from default to zfs when there are already existing containers running? Will it make a difference? Can I convert existing containers to the new backing storage type? What do I have to do for that?
Btw, some tooltips (I mean those help texts) for the various options would be helpful, like for the setting "Default LXC backing storage type:". I have some idea what that means, but others might not know what it involves, what it does, etc. Just an idea for the future.
-
And another thing: are the Dockerfile and the scripts that were used to create this docker image available somewhere? I would like to take a look at them but can't find anything anywhere.
-
I am curious whether it is possible to run this with an AMD GPU instead of an NVIDIA one. For example, InvokeAI supports running with ROCm instead of CUDA, but I don't see any possibility to get this running with ROCm, only CUDA. How would I do that?
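For context, a hedged sketch of what a ROCm-capable container invocation generally looks like: ROCm containers need the kernel's KFD device and the DRI render nodes passed through, plus membership in the video group. The image name invokeai-rocm, the port, and the appdata path below are all assumptions for illustration; the project would have to ship a ROCm build for this to actually work.

```shell
# Hypothetical sketch - image tag, port, and volume path are assumptions:
docker run -d \
  --device /dev/kfd \
  --device /dev/dri \
  --group-add video \
  -p 9090:9090 \
  -v /mnt/user/appdata/invokeai:/data \
  invokeai-rocm
```

The `--device` passthroughs are the ROCm-specific part; everything else is an ordinary container setup.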
-
Ok, I could help with testing. If you tell me what to do, I can try that.
-
Btw, just another thing: afaik InvokeAI supports ROCm. Is it possible to get this working in this image as well?
-
Btw, just to let you and others know, there is an issue/typo in your XML file. In the line where you specify the port, there is a missing > right before the actual port number.
Edit: I mean the second XML, not the first one; the first one is correct. The second one is missing the >.
-
-
I have literally no idea how this route thing works to get container-to-host communication working...
-
Ok, I was able to get host-to-container networking working by running these commands:
ip link add mac0 link eth0 type macvlan mode bridge
ip addr add 192.168.1.150/24 dev mac0
ip link set mac0 up
But I am not quite sure how to add container-to-host networking now.
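One way this is commonly completed (a hedged sketch, building on the mac0 shim above): host-originated traffic to a macvlan container is dropped on the parent eth0, so a host route for the container's address via the shim is needed. The container address 192.168.1.151 and the subnet 192.168.2.0/24 below are assumed examples.

```shell
# Send host traffic for one container's address (assumed example) through the
# shim interface, where macvlan bridge mode does forward it:
ip route add 192.168.1.151/32 dev mac0

# If the containers sit on their own subnet, a single subnet route via the shim
# replaces the per-container routes:
# ip route add 192.168.2.0/24 dev mac0
```

The /32 route wins over the broader eth0 subnet route by longest-prefix match, so only container-bound traffic is redirected through mac0.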
-
Hey guys, I am trying to set up a docker environment on Ubuntu Server and would really like my host and my macvlan containers to talk to each other.
Does anyone know how Limetech implemented that feature and how I could reproduce it on Ubuntu?
I would highly appreciate any answer that helps me solve this issue.
-
I have already done this, or to be precise, have been doing this all along.
But ok, I did this and it worked, so I guess the update broke the setting. It was still shown as enabled, but it seems like it was effectively disabled. Disabling and re-enabling it helped, thx.
-
-
Hey guys, I think the recent Unraid update (6.12.4) broke something regarding LXC networking. I have updated and switched to the macvtap driver (disabled bridging), and now my containers are unable to ping my LXC container and vice versa. The LXC container is also unable to ping the host. I have tested eth0 and virhost0 as network interfaces for LXC, but neither is working. Is there anything I can do?
-
Ok, can someone explain to me in a bit more detail how I can actually implement this?
I have a Fritzbox and would like to offer my Pi-hole over IPv6 on the internal network only, not externally. I am on Unraid 6.12.4 and already use the new custom: eth0 network for Pi-hole, for example, but only with IPv4. So how do I implement this properly?
It would be nice if someone could explain this to me in more detail; as far as I understand the last messages, it should somehow be possible without a GUA.
Posted in [Plugin] LXC Plugin, in Plugin Support
Yes, exactly. I disabled docker, LXC, and VMs, deleted or moved everything from my cache pool to my array, and tried to format my pool, but couldn't because that path existed. After deleting/moving I made sure no files were left on my cache pool.