ich777

Community Developer
Everything posted by ich777

  1. You will see the container memory only with cgroup v1. I've already looked into this and there is no easy fix for this on Unraid, but it's on my to-do list. Certain file paths are perhaps too long, but it will create the tar file anyway. Have you tried the integrated snapshot feature yet?
  2. Please post your Diagnostics.
  3. What exactly do you mean by that? Do you have nvidia-persistenced enabled? If so, you can kill it by running: kill $(pidof nvidia-persistenced) from an Unraid terminal.
     Can you please double check that the UUID of the GPU matches the one in the template? Can you also post a screenshot of the container template so that I can see which parameters you've added for the Nvidia driver to work? Please note that on most newer driver versions the value "all" in the GPU UUID field causes issues and you should always put in the UUID of your card. Also please add a Variable, as described in the second post of this thread, with the Key: "NVIDIA_CAPABILITIES" and the Value: "all" - this should fix the issue.
     If the above doesn't help, please try to click on the Docker page (with Advanced View turned on - don't forget to turn it off again). If that all doesn't help, please try to delete the container (only the container on the Docker page) -> go to the Docker page and at the bottom click Add Container -> from the drop-down select your Unmanic template (by that you ensure that all the paths and settings you already had in the old template are preserved) -> click Apply to install the container again.
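     To double check the UUID, a quick sketch (assuming the Nvidia Driver plugin is installed) is to list the detected GPUs from an Unraid terminal:
       # print the name and UUID of every detected Nvidia GPU
       nvidia-smi --query-gpu=gpu_name,gpu_uuid --format=csv
     The UUID printed there (it starts with GPU-) is what belongs in the template instead of "all".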
  4. Thank you for the report. I pushed a fix for this; please update the plugin to version 2023.01.26.
  5. How have you configured your appdata share and where is your Docker image located? Please post your Diagnostics, then we can maybe see the problem.
  6. I also can't tell why it's not working on your machine. Just look one post above yours: even if I start a container in the foreground it works fine with cgroup v2 but not with cgroup v1. BTW, I've gone through the new Diagnostics that you've sent and you still haven't enabled cgroup v2 on your Unraid installation. Can you maybe post a screenshot of where you've enabled cgroup v2?
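     If you want to verify which cgroup version is actually active, a quick check (just a generic sketch, nothing plugin specific) from an Unraid terminal is:
       # prints "cgroup2fs" when cgroup v2 is active, "tmpfs" on the old cgroup v1 hierarchy
       stat -fc %T /sys/fs/cgroup/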
  7. I don't understand. How do you check that it always creates a Debian one? Have you read the first post, where it says that you have to enable cgroup v2 on Unraid to be able to run newer containers which use systemd?
     EDIT: I've tried it now on my system with cgroup v1 (I'm not really a Ubuntu person so I had to try it myself): I created the LXC container, watched the screen where it pulled the image, went back to the LXC tab, checked the configuration, and after that I tried to start the container from the Unraid terminal. It was not starting on cgroup v1; after I switched to cgroup v2 it looked completely different and started fine. So it seems you are using cgroup v1 on your system, but I still don't know what you mean by it failing to create a Ubuntu LXC container...?
     BTW, cgroup v2 will be the default on Unraid 6.12.0.
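     If you want to see for yourself why a container refuses to start, you can start it in the foreground from an Unraid terminal so the error is printed directly (the container name "Ubuntu" below is just a placeholder, use the name of your container):
       # start the container in the foreground and show the error output
       lxc-start -n Ubuntu -F
       # for more detail, write a debug log as well
       lxc-start -n Ubuntu -F --logfile=/tmp/Ubuntu.log --logpriority=DEBUG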
  8. Are we talking about the Java version or the Bedrock version? If the Bedrock Edition, set GAME_V to your preferred game version. If the Java version, set GAME_V to "custom" (without double quotes), download the server.jar manually, place it in the Minecraft directory on your server and make sure that JAR_NAME is set to "server" (without double quotes). JAR_NAME needs to be the exact name of the downloaded file.
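     For the Java version, the setup would roughly look like this (the path below is only an example - use the server directory from your template):
       # place the manually downloaded server.jar in the server directory
       cd /mnt/user/appdata/minecraft-server   # example path, adjust to your template
       wget -O server.jar "<download link from minecraft.net>"
       # in the template: GAME_V=custom and JAR_NAME=server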
  9. Open up a console from the container and enter these exact commands:
     su $USER
     ${STEAMCMD_DIR}/steamcmd.sh \
       +force_install_dir ${SERVER_DIR} \
       +login anonymous \
       +app_update ${GAME_ID} \
       +quit
     I think you want to install other game content, correct? If so, please change ${GAME_ID} to the game ID of the other game. An easier solution would be to copy over the game contents from your local computer to the GarrysMod directory on your server.
  10. Did you add your Steam credentials? If yes, please remove them and start over. Are all paths in the template set correctly, and is your appdata share set to cache Only or Prefer in the share settings?
  11. Good that you've run it in both terminals, but I actually meant that you should run it in a container terminal. The first screenshot that you've posted clearly shows that the wget command can pull the version number, but in the second window you see that it outputs nothing and even the apt-get update command fails. This is a clear indication that the container has no Internet connection, or at least that DNS resolution isn't working properly (apt-get update even tells you that it couldn't resolve the host name).
     I would suggest that you put the --dns parameter in the Extra Parameters again (with Advanced View turned on), but this time try a Google or Cloudflare DNS server and see if that changes anything and whether you can then execute the commands from a container terminal. This would be the Cloudflare DNS: --dns=1.1.1.1
     If that also doesn't help: I noticed that you haven't specified an IP for the LANCache-Prefill container on br0, can you try to set one and see if that changes something? I really don't know what is going on on your system, because this has nothing to do with my container and it rather seems that something is misconfigured. If you have already pointed your DHCP server to your LANCache-DNS, please remember that the container will then get the DNS server from your LANCache container assigned (when not explicitly specifying another one, like above with the Cloudflare DNS), but also keep in mind that your local computer may still have the old DNS server assigned. Just in case the LANCache-DNS isn't working properly.
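     To verify from inside the container whether DNS is the problem, a quick sketch you can run in a container terminal (these are generic checks, nothing specific to my container):
       # show which DNS server the container actually got assigned
       cat /etc/resolv.conf
       # try to resolve the GitHub API host - no output means name resolution is broken
       getent hosts api.github.com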
  12. My containers work a little differently, at least most of them, because they pull the applications on start and check for updates on start/restart from within the container itself - so to speak, no app is shipped with my containers by default. The container asks the GitHub API what the latest version is and then downloads it; in your case it seems the container can't connect to the GitHub API whatsoever. BTW, once the app is installed, the container will still start just fine the next time even if it can't communicate with the API.
     Open up a terminal and see if this command is working or what the output is:
     apt-get update
     and also this command:
     wget -qO- https://api.github.com/repos/tpill90/battlenet-lancache-prefill/releases/latest | jq -r '.tag_name'
     I'm really not that familiar with UniFi, but these should be the incoming rules and not the outgoing ones, if I'm not mistaken? Do you have some kind of ad blocking on your network somewhere?
  13. I can't imagine that your board doesn't support Above 4G Decoding, maybe look for something like Extended Address Space in the PCI section from the BIOS and enable it.
  14. Please double check whether the container has access to the Internet - are you sure that your LANCache-DNS container is resolving DNS queries correctly? My assumption is that it doesn't resolve DNS requests properly, because it can't even grab the latest version number for the prefills and therefore fails to download the applications themselves. You can try not setting the --dns parameter in the LANCache-Prefill container and see if it downloads the applications properly; if yes, then LANCache-DNS is not resolving DNS queries properly.
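     To check whether LANCache-DNS is intercepting requests at all, a quick sketch (the IP below is a placeholder - use the IP of your LANCache-DNS container; lancache.steamcontent.com is, as far as I know, the hostname the Steam client uses to discover a cache):
       # should answer with the IP of your LANCache server, not a public CDN address
       nslookup lancache.steamcontent.com 192.168.1.5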
  15. Usually the page refreshes once to update the container status. The LXC plugin is confirmed to work on Chrome, Brave and Firefox. Anyway, the browser shouldn't affect the behaviour of the plugin whatsoever.
  16. I really can't see anything obvious in the logs that explains why it isn't working on your system. If you start the container from the GUI, can you open up a terminal too, or does this fail as well? Have you tried, after uninstalling the plugin, to also remove the lxc directory that was created on your cache and then install the plugin again, to see if that makes any difference? Otherwise I really don't know what to do next, because this is something I've never seen before.
     Is it somehow possible that the LXC container image that was downloaded (Debian in your case) got corrupted? Have you tried any other distribution than Debian yet? Please try removing the download folder, which is located in your lxc directory on your cache, and try to install Debian again - maybe it can't pull the image correctly, that's my best guess.
  17. I really can't tell what's happening on your system and why it doesn't want to work. I have now contacted someone else where I know LXC is running and he confirmed that he is also on LXC-5.0.2 and, like me, has no issue whatsoever. Can you maybe post your Diagnostics again so that I can go through them? Maybe some other left-over package on your system causes that. The lxc share on your system is set to Only use cache, correct?
  18. This is tough, since AV1 isn't even working properly on Windows (stutters, artifacts and even crashing the whole system), and on Linux some serious work needs to be done too to get HW transcoding working. The next thing is that on Linux you also need the Intel Media Driver package, which needs to be integrated into the containers themselves to even support encoding/decoding in the container. I've seen some serious progress recently in the Intel Media Driver GitHub, but it's a long way until those things work as reliably (on Windows too) as they do for the already existing Intel iGPUs with QuickSync for h264 (AVC) or h265 (HEVC).
     EDIT: I'm also not sure how many devices can decode AV1 currently; at least I think Apple devices, especially iOS, are lacking AV1 support at the moment, but that might change in the near future.
  19. What do you want to do with the card on Unraid?
  20. May I ask, was the card working before or did you just install it? May I also ask what you want to use the card for? If you want to use it for transcoding, your Skylake iGPU should do the job just fine up to h265 (HEVC), see: here. Please make sure that you've enabled Above 4G Decoding and Resizable BAR support (if you have that option) in your BIOS. If that doesn't help, try to boot with Legacy Boot (CSM) instead of UEFI.
  21. Please upload your Diagnostics.
  22. Can you please post a screenshot of your settings? Where is your LXC directory located? Please double check that your LXC directory is not on a share which is moved by the Mover. Can you please open up an Unraid terminal and post the output of:
     ls -la /boot/config/plugins/lxc/packages
     I've never seen such an issue before. Do you have any custom scripts in place which are actually messing with the cgroups?
  23. Nice, but keep in mind this is possibly necessary every time you cold boot or restart Unraid... Exactly, usually like I wrote above: IP:PORT/dns-query
  24. As said above, there is no issue over here and I can start or restart any LXC container that I want. I would strongly recommend that you remove the packages from /boot/extra and check whether the issue is the same without the packages; if not, one of the packages is causing the issue.
  25. You have a ton of packages in /boot/extra, do you really need all of them? It is possible that some of the packages are messing with the cgroups. Was LXC working before, or is this the first time you've installed LXC? Have you tried to reboot yet?