Everything posted by ich777

  1. But this is something you have to tell the manufacturer so that they post the drivers upstream to the Kernel. No worries, but I also have to say that if the driver breaks because of a Kernel update, I don't think it should be Limetech's problem to fix that because it's a third party driver. It's also the same over here: if the driver breaks, I can only create an issue in the maintainer's repo and then wait for a fix (or fix it myself if it's a small issue). I think you get my point here. Again, there is no upstream driver that manages to read or write all values, at least not without risking damage to your hardware. That's a slippery slope from the manufacturer in my opinion.
  2. I edited my post above. Please run the command from all connected VPN containers.
  3. Have you disabled IPv6 for this container? With IPv6 it is possible that it leaks your IP. May I ask how this test detects your real IP? Do you create a link on your local PC or do you download something from your local PC that you then put into your downloader (btw, do you use a VPN too to create the link/file that tests your VPN in the container)? Can you try to execute this from within the container: curl https://raw.githubusercontent.com/macvk/dnsleaktest/master/dnsleaktest.sh -o dnsleaktest.sh
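     A minimal way to do that could look like this from the Unraid terminal (the container name is just a placeholder for your VPN container, and this assumes bash and curl are available in the image):
       docker exec -it <your-vpn-container> bash
       # inside the container:
       curl https://raw.githubusercontent.com/macvk/dnsleaktest/master/dnsleaktest.sh -o dnsleaktest.sh
       chmod +x dnsleaktest.sh
       ./dnsleaktest.sh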
  4. Keep in mind this is a third party driver (there is no in-tree Kernel driver) and I've already got users with (let's call them) exotic motherboards which had issues with this driver, because the NCT drivers are all over the place and the manufacturer doesn't seem to be interested in supporting Linux at all (it may also be the case that they only have a small Linux team and don't have the time/resources). Sad to say, but because of the above this is such an edge case. It's the same story with the 2.5 Gbit/s Realtek network drivers.
  5. Why should it be related to Unraid? You are using a Docker container which is isolated from the host, you are using a browser which is not running on Unraid, and you are using the Plex web client. Hope that explains it a bit better. 😉 Please report that on the Plex forums if you want a fix for that.
  6. I would recommend that you first try it with the official Plex container. How are you trying to transcode? Please note that the web client from Plex has issues when enabling transcoding. The driver is working properly from what I can see. The logs from Plex, especially the transcoding log, would be helpful, but that is something for the Docker container thread. Again, I would first recommend that you try the official Plex container.
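     If you want to double check that the iGPU is actually visible inside the container, something like this should list the render node (the container name is only a placeholder for whatever your Plex container is called):
       docker exec -it plex ls -la /dev/dri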
  7. Can you maybe try to start over by deleting the container, deleting the directory for Icarus from your appdata and pulling a fresh copy from the CA App? Maybe something went wrong at the first startup, which can take quite some time and has to complete successfully, otherwise it can cause issues.
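     Roughly, the steps could look like this from the Unraid terminal (the container name and the appdata path are only assumptions, double check them against your setup before running anything):
       docker stop Icarus && docker rm Icarus      # stop and remove the existing container
       rm -rf /mnt/user/appdata/icarus             # delete the old Icarus directory (check the path twice!)
       # then pull a fresh copy from the CA App and let the first start finish completely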
  8. Something seems different in your log, the last line should be something like this: [2023.08.26-18.48.34:956][344]LogIcarusGameModeSurvival: -------- Server is now empty -------- I attach my log here from a startup after an update; I tried it just now and it's working fine. What Unraid version are you on? Make sure that you are on something recent like 6.12.3 (I'm on 6.12.4-rc19 <- next branch). It should also automatically be listed under local (even if you haven't opened the ports yet): icarus.log Here is also a screenshot:
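     To check quickly, something like this shows the end of the server log (the path is only an assumption, adjust it to wherever your Icarus appdata and log file actually live):
       tail -n 20 /mnt/user/appdata/icarus/icarus.log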
  9. Please see this specific comment on Reddit on how to disable the check (please don't forget to stop the container first before editing any file): I really don't want to change much in that container since this is clearly an issue with the application itself and nothing I can fix. Sure, it's a cool game and game idea, but I really can't do much about this specific issue and the workaround from above is not my favorite…
  10. This happens from time to time and from what I can tell this is most certainly caused by WINE (it also happens on my system, but really rarely). Simply restart the container and it should work. I just tried it and everything is working fine over here; the first attempt failed after installing the container: sotf.log
  11. I think I'm not following... Just create another path inside the container template and use the path where the NFS share is mounted on your Unraid host as the Host path.
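     As a sketch (both paths are placeholders; /mnt/remotes/... is just where remote shares are commonly mounted, for example by Unassigned Devices):
       Host Path:      /mnt/remotes/my-nfs-share   <- where the NFS share is mounted on the Unraid host
       Container Path: /data                       <- where it will show up inside the container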
  12. No, I don't think so. Please try again and see if it's the same.
  13. This needs to be fixed by the Kernel team and not by Unraid. Please remember, Unraid is based on Linux and uses the drivers that ship with the Kernel.
  14. Restart the container and see if the same happens again. This is the first time that I hear of that issue, and as long as the megasync appdata directory stays where it is and is not moved, it should work fine. It seems that in your case everything is configured correctly. Is the path to your cache drive /mnt/cache/...?
  15. I think this is most certainly a configuration issue. Can you please post a screenshot of your Docker template and also of your share settings for the appdata share? I'm assuming that you have configured your appdata share to be moved from the Cache to the Array and that you use the path /mnt/cache/appdata/... in the Docker template, correct? I'm also assuming that CA Backup was running in the meantime, so that the container was stopped at least once (containers are by default stopped when CA Backup kicks in). So this means in your current configuration it will happen again.
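     To illustrate what I mean (the container name in the path is only an example): if the appdata share is allowed to be moved to the Array, the first mapping breaks as soon as the Mover runs, while the second one resolves through the user share no matter where the data currently lives:
       Host Path: /mnt/cache/appdata/mycontainer   <- only works while the data is on the cache
       Host Path: /mnt/user/appdata/mycontainer    <- works regardless of where the Mover put it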
  16. You can't update the template alone. You can either note down the settings, remove the container that you currently have installed and pull a fresh copy from the CA App where you fill in the settings that you've written down in the new template, or you can simply add this one variable.
  17. This has nothing to do with the container itself and no, container templates are not updated automatically.
  18. That's not possible and the container was not designed for that. Have you considered configuring the built-in sm_admin menu?
  19. Then you have a really old template. Go to the CA App and pull a new template with the new settings. You can of course create the variable manually in your template too.
  20. Set this to 'true' and BepInEx will be installed: Please note that I really don't know if this is still working because I usually don't support modding, but nobody has complained that it is not working. However, if this variable is set to true, the game will also be started with BepInEx and you can add whichever mod you want to the container.
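     For reference, the template entry would look something like this (the variable name below is only a placeholder for the one shown in the screenshot/template):
       Variable name: INSTALL_BEPINEX   <- placeholder, use the name from the template
       Value:         true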
  21. Maybe, but this is most certainly not compatible with my container since I've never designed my containers for such things. It can be compatible, but I also have to say that modding my containers is always up to the users (like @Cyd did very well with his cluster setup) because I can't know every mod and so on...
  22. Isn't this usually an indication of some incompatible mods? I'm really not sure since I'm not familiar enough with clusters and ARK. Have you looked at @Cyd's GitHub for this over here?
  23. I just tested it and got the same error. I would recommend that you report that to the developers since it seems to me there is a bug in the code:
      Assertion failed: Ptr [File:Runtime/Core/Public\Misc/LazySingleton.h] [Line: 109]
      libc++abi: Pure virtual function called!
      Signal 6 caught.
      libc++abi: Pure virtual function called!
      Signal 6 caught.
      libc++abi: __cxa_guard_acquire detected recursive initialization
      Signal 6 caught.
  24. Oh, this is not my container. Sorry, can't help with that. Please click on the container icon on the Docker page and select "Support", this will take you to the appropriate thread/site (I don't know if the other maintainer uses the Unraid forums to give support). If the container does not have such an entry, then you have to go to the GitHub/Repository of the maintainer and report it there.