xorinzor

Everything posted by xorinzor

  1. If that's the only solution, then I'll have to do that, but it really feels more like a hack than a fix. I've never experienced such issues in any of my Minecraft containers; AFAIK the service inside the container is none the wiser about the mapped ports.
  2. I am aware of this, but it is exposed on the host. From a security standpoint it really is a no-go to expose services that shouldn't (have to) be exposed, even if it's only on the local network.

     I'm not using either br0 or host; I have custom bridge docker networks. Ports from the containers themselves are available to other containers within the same network, but as long as I do not have a port mapping defined, the port will not be exposed on the host or reachable from the network that my unraid server is in. For example, I have a network called "webhost" in which I have my nginx, php and mysql containers. I only have to expose nginx on the host, and do not want my mysql service to be exposed (see the sketch below).

     I'm confused about this, because these two statements seem contradictory. I use the Auto Update Applications plugin to automatically update the containers, and have had a couple of occasions now where some docker containers were unable to start due to conflicting port mappings that had been re-added after they got updated.

     I'm not really familiar with how the template system works, but I was under the impression that Unraid keeps track of these fields in the XML on a per-container basis. Perhaps a solution would be, instead of removing the field, marking it as "disabled" or leaving its value empty? This way, if the template updates, the field still exists, but is just "disabled".
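     Something like this, simplified (plain docker commands for illustration; the names are just examples, not my exact setup):

        # custom bridge network; containers on it can reach each other by name
        docker network create webhost

        # only nginx gets a -p mapping, so only nginx is exposed on the host
        docker run -d --name nginx --network webhost -p 80:80 nginx

        # mysql joins the same network but publishes no ports: reachable
        # from nginx/php inside "webhost", invisible to the rest of the LAN
        docker run -d --name mysql --network webhost -e MYSQL_ROOT_PASSWORD=example mysql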
  3. That could theoretically be every container, with the current way the templates work in Unraid. Especially with new fields potentially being added, it'd be more of a hassle to figure out why a container suddenly breaks because a new field got added that I then don't have, versus having to re-remove a port (although the downtime is annoying).

     If a specific field already existed before and I chose to remove it, that should take precedence over any future updates for that specific field. That's why I think they should be keeping track of this and process it when the template updates, so that every field marked as removed stays removed. Obviously new variables and ports should still be added, but existing ones should conform to whatever the user configures them to be (including a removed state).

     Because I use a lot of networks between my containers to further isolate them from each other, and they generally use internal communication, exposing the port on the host is of no benefit to me. And I do not want to add an additional firewall to my unraid server to isolate it as a host; rather, I'd just prevent any ports from opening up in the first place, and only open up ports (if required) to my WAN in the router.
  4. I've had a few occurrences lately where some of my docker containers would be stopped, because after they were automatically updated, some of the port assignments that I removed came back. I found that this apparently has been discussed before, but no real fix seems to have been implemented; rather, a workaround was discussed. Are there any plans for an actual fix? I don't want to create unnecessary port assignments on my docker containers just so they don't keep re-appearing and causing issues for my containers.

     Perhaps if a user removes a port, it should be marked as "deleted" in the template, and thus not re-appear even if there is an update for the template (this would require actually processing the template file and comparing changes during an update); see the sketch below for what I mean. If an actual solution (not a workaround) exists already, I'd be happy to hear that too.
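     Roughly like this, based on how port entries in the template XML look today (the Removed attribute is hypothetical, it doesn't exist in the current schema):

        <!-- hypothetical: keep the field in the user's copy of the template,
             but flag it so a template update doesn't re-add the mapping -->
        <Config Name="WebUI" Type="Port" Target="8080" Default="8080" Mode="tcp" Removed="true">8080</Config>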
  5. Would it somehow be possible to reserve RAM for this? I have 64GB of RAM in my server and it'd be nice to allocate a few GB to it, but I'd also like to guarantee that no other processes use that RAM (there's way more than enough, so I can spare a bunch). EDIT: nvm, forgot about the head_size being 60MB (or even more).
  6. Does the TRIM plugin also work for unassigned devices?
  7. It's less than ideal, but with the proper security precautions it's a risk I'm willing to take. Hopefully Valve will release their new SteamOS soon and won't make use of this same system. Either way, this functionality is a requirement for my setup, so I don't really have much of a choice 😅
  8. Please add a feature for local flash backup, rather than online flash backup. I'm aware the CA Appdata Backup / Restore plugin is currently available, but it has also been deprecated since 6.9.
  9. Yup, that fixed it! So now we know that Proton requires privileged rights when run in a docker container.
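     EDIT: for anyone else running into this, the quickest way to reproduce the fix outside Unraid's UI (where it's the "Privileged" toggle on the container) is something like the following; the image name is just a placeholder:

        # extended privileges let bubblewrap create the user namespaces
        # that Proton's pressure-vessel runtime needs
        docker run -d --privileged --name steam-test some/steam-image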
  10. Seems like there's something else going on:

        /bin/sh\0-c\0/debian/.steam/debian-installation/ubuntu12_32/reaper SteamLaunch AppId=292030 -- '/debian/.steam/debian-installation/steamapps/common/SteamLinuxRuntime_soldier'/_v2-entry-point --verb=waitforexitandrun -- '/debian/.steam/debian-installation/steamapps/common/Proton - Experimental'/proton waitforexitandrun '/debian/.steam/debian-installation/steamapps/common/The Witcher 3/bin/x64/witcher3.exe'\0
        Game process added : AppID 292030 "/debian/.steam/debian-installation/ubuntu12_32/reaper SteamLaunch AppId=292030 -- '/debian/.steam/debian-installation/steamapps/common/SteamLinuxRuntime_soldier'/_v2-entry-point --verb=waitforexitandrun -- '/debian/.steam/debian-installation/steamapps/common/Proton - Experimental'/proton waitforexitandrun '/debian/.steam/debian-installation/steamapps/common/The Witcher 3/bin/x64/witcher3.exe'", ProcID 1336, IP 0.0.0.0:0
        ERROR: ld.so: object '/debian/.steam/debian-installation/ubuntu12_32/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS32): ignored.
        GameAction [AppID 292030, ActionID 1] : LaunchApp changed task to WaitingGameWindow with ""
        ERROR: ld.so: object '/debian/.steam/debian-installation/ubuntu12_64/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS64): ignored.
        GameAction [AppID 292030, ActionID 1] : LaunchApp changed task to Completed with ""
        ERROR: ld.so: object '/debian/.steam/debian-installation/ubuntu12_32/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS32): ignored.
        ERROR: ld.so: object '/debian/.steam/debian-installation/ubuntu12_32/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS32): ignored.
        ERROR: ld.so: object '/debian/.steam/debian-installation/ubuntu12_32/gameoverlayrenderer.so' from LD_PRELOAD cannot be preloaded (wrong ELF class: ELFCLASS32): ignored.
        pid 1341 != 1338, skipping destruction (fork without exec?)
        pressure-vessel-wrap[1338]: E: Cannot run /debian/.steam/debian-installation/steamapps/common/SteamLinuxRuntime_soldier/pressure-vessel/bin/pv-bwrap: wait status 256
        pressure-vessel-wrap[1338]: E: Diagnostic output:
        bwrap: No permissions to creating new namespace, likely because the kernel does not allow non-privileged user namespaces. On e.g. debian this can be enabled with 'sysctl kernel.unprivileged_userns_clone=1'.

      When I run that sysctl command it just returns:

        sysctl: cannot stat /proc/sys/kernel/unprivileged_userns_clone: No such file or directory

      My guess is this has something to do with the fact that it's running inside a docker container?
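      EDIT: unprivileged_userns_clone turns out to be a Debian/Ubuntu-specific kernel knob, so it doesn't exist on Unraid's stock kernel; that's why sysctl can't stat it. If you want to check whether unprivileged user namespaces work at all, these generic checks should do it (treat them as a sketch):

        # upper bound on user namespaces; 0 effectively disables them
        cat /proc/sys/user/max_user_namespaces

        # functional test: prints OK only if an unprivileged user
        # namespace can actually be created
        unshare --user --map-root-user true && echo "userns OK"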
  11. Do you experience any issues using Proton in the DebianBuster-Nvidia container? Games such as Fallout 4 & The Witcher 3, which are (according to protondb.com) very compatible, just won't launch. Bastion is one of the games that I did manage to launch, so I know the container is at least capable of running games.
  12. Ended up updating the BIOS and disabling Thunderbolt, after which it suddenly worked. Not sure which one (or possibly both) was the solution. Either way, I'm happy it all works again!
  13. Been trying a lot of values, with reboots in between. Unfortunately that didn't seem to fix it. I did notice that no unlock script exists yet for the latest nvidia driver, but switching back to the latest driver that does have one didn't fix it either. Really at a loss here. Will have a look at the BIOS tomorrow to see if anything else stands out. Pics or it didn't happen
  14. Hm okay, will have to do some more digging then. It's indeed the only GPU in the system. I have not yet tried different DFP_NR values, since the error really seems to be specifically about what display it's trying to use (i.e. the DISPLAY variable). AMD could very well be a bit behind in virtualization, but the old Xeon CPU is from 2009, whereas the 5900X is from 2020; I sincerely hope they managed to catch up on the difference over all those years 😅 I'll do some more digging, also with other docker containers, to check whether it's limited to this container or if others are affected too. Thanks
  15. Before: H55M-E33 with a Xeon L3426
      Now: Asus ProArt X570-Creator Wifi with an AMD 5900X

      I tried multiple values, but none seem to work. Is there any command to figure out what displays (if any) are available?
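      EDIT: for anyone finding this later, these are the checks I came across for listing displays from inside the container (a sketch, assuming the usual X11 socket layout):

        # one socket per running X display: X0 -> DISPLAY=:0, X1 -> DISPLAY=:1, ...
        ls /tmp/.X11-unix/

        # if a display exists, list its outputs and screen modes
        DISPLAY=:0 xrandr --query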
  16. I have your debianbuster-nvidia container installed. This worked perfectly fine before I swapped out my motherboard, CPU & flash device; after the upgrade it keeps complaining about not being able to find display ":0". I double checked the Nvidia GUID, and also checked my plex docker container, which still works with hardware acceleration during transcoding, so passing through the GPU still seems to work. At first I thought it might be because I had no monitor connected to the GPU, but after adding a HDMI dummy (which I also used before the upgrade) and rebooting the server, it is still throwing the same error. Any ideas? The CPU virtualization features are only used for VMs, right? I don't have VMs enabled in my Unraid installation.

      This is a bit from the container log (please note I tried different values for the display, i.e. ":0", ":0.0", ":1"):

        WebSocket server settings:
        - Listen on :8080
        - Flash security policy server
        - Web server. Web root: /usr/share/novnc
        - No SSL/TLS support (no cert file)
        - Backgrounding (daemon)
        ---Starting Pulseaudio server---
        E: [pulseaudio] client-conf-x11.c: xcb_connection_has_error() returned true
        Can't open display :0.0
        ----------------------------------------------------------------------------------------------------
        Listing possible outputs and screen modes: ''
        ----------------------------------------------------------------------------------------------------
        Can't open display :0.0
        Can't open display :0.0
        ---Looks like your highest possible output on: '' is: ''---
  17. Trying to figure out how this script works. How does plex know to load the initial data from RAM, and the rest of it from the disk? I would like to adapt the script to use an SSD instead of RAM for the first bit of data (since that could just be persistent, and has A LOT more space to store all of it).

      EDIT: looking more at the script, it looks like the head and tail commands are used for printing the beginning and end of each file to /dev/null. So I guess that by doing so, the system automatically caches that bit of the file?
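      EDIT 2: that guess seems right; reads go through the Linux page cache, so anything read once stays in RAM until something evicts it. The core of the preload then boils down to something like this (paths and sizes are illustrative, not the plugin's exact code):

        #!/bin/bash
        # warm the page cache with the first and last chunk of every video,
        # so playback can start from RAM while the array disk spins up
        head_size=60M
        tail_size=1M

        find /mnt/user/media -type f -name '*.mkv' | while read -r f; do
            head -c "$head_size" "$f" > /dev/null   # beginning of the file
            tail -c "$tail_size" "$f" > /dev/null   # end (often holds the seek index)
        done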
  18. Disabling and re-enabling doesn't work for me; I've done that in the past for some other reasons, but this container has always had the same problem. Still unresolved. I don't believe it has anything to do with the custom networks, since all of my other containers work just fine with custom networks.
  19. Can you post the output of the commands that have been mentioned in previous replies? It could help establish a baseline; I think everyone here is having roughly the same problem.
  20. I recently got a managed switch and have since been playing around with it, configuring different vlans to make my network more secure. For example, the management interfaces of the router and managed switch can only be accessed via the management vlan. This vlan doesn't have internet access, however (by design). If I configure the untagged vlan for unraid to be this management vlan, and have another vlan for docker, I'd be able to restrict access to the unraid interface too.

      If I make the management vlan the untagged vlan for unraid (i.e. its main eth0 interface), will unraid still be able to perform updates? And if not, is it possible to make unraid use a separate vlan for updating (i.e. the regular vlan other devices on my network use for generic internet access)?

      Additional information: I'm using unraid 6.8.2
  21. This is a support topic for WireGuard; I'm afraid your issue lies somewhere else. I don't even think it's related to Unraid. You could create a topic here, or maybe on some other forum.
  22. Hm, even those settings are the same as mine. Did you get to test a connection to WireGuard via the local network?
  23. What output do you get from the command below?

        sysctl -a | grep -e "ipv4.ip_" -e "wg0"

      Don't be surprised, the output is quite long.
  24. Can you connect from the local network to the server? This would help narrow down whether it's related to the port forwarding or to something else. EDIT: Can you also post a screenshot of your routing table here? (viewable at /Settings/NetworkSettings)