

  1. Have there been any updates on this?
  2. Just gonna add this here for others to find if they're googling for it. I tried playing Halo: The Master Chief Collection (MCC) on Steam-Headless, but the game would randomly freeze in the main menu, often when trying to change the settings, while the audio kept playing. After a bit of debugging it turned out that the game was trying to open too many files. Adding the following to the extra parameters in the Docker container configuration fixed this: --ulimit nofile=100000:200000. Disclaimer: I just picked arbitrarily high values for both the soft and hard limits because I wanted to test whether it would resolve my issue. I don't know what implications this may have, so use at your own risk.
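     For context, a sketch of how that extra parameter maps onto a plain docker run invocation (the container name and image tag here are illustrative, not necessarily the exact Steam-Headless setup):

     ```shell
     # Raise the per-process open-file limits for the container:
     # soft limit 100000, hard limit 200000 -- the values from the post
     # above, chosen arbitrarily high; tune to your own needs.
     docker run -d \
       --name steam-headless \
       --ulimit nofile=100000:200000 \
       josh5/steam-headless:latest
     ```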
  3. Found some info on that back then, but wasn't able to find it again. I did, however, stumble upon another solution using KeePassXC (tutorial here, if others are interested). This solution can easily be set up via the init.d script and doesn't require any modifications to the Docker container. It would still be nice to have a variable for the Docker container to add a noVNC password or (preferably) disable noVNC entirely.
  4. "You can add your own container init scripts to install any keyring software that you need" — yes, but it's a real hassle to unlock them automatically due to the auto-login. There's a way to do it, but it requires changing config files, which won't be persistent. Additionally, it would be a nice addition to have a boolean variable in the Docker config that enables password-protected access to noVNC (as well as another option to disable noVNC entirely).
  5. @Josh.5 Any chance you could also implement a password keyring that unlocks automatically? The reason being, I also want to install the Minecraft Launcher, but it saves its login via the GNOME keyring. Or if there's a better way, I'd be happy to hear that too.
  6. Ah okay, so once my server has updated to 6.10 this should no longer be happening? That's great news, thanks. For now I'll manually edit the XMLs that are problematic.
  7. I tried this method, but after enabling Template Authoring mode and not finding the TemplateURL field, I continued reading the thread only to learn that the field had been removed (does Template Authoring mode even serve a purpose now?). It would be really great if you could re-implement some way for us to take manual control, or just allow empty fields (which are then ignored entirely).
  8. @Squid Have you been able to think of anything to facilitate / fix this issue?
  9. I agree with @Squid here. "Well, for docker containers, my use case isn't really all that weird" — it might not be all that common among Unraid users, but this is quite a normal way to use Docker containers. Which isn't an issue, but I do think there's room for improvement in this particular case. Overall my experience with managing Docker containers in Unraid has been really positive.
  10. If that's the only solution, then I'll have to do that, but it really feels more like a hack than a fix. I've never experienced such issues in any of my Minecraft containers. AFAIK the service inside the container is none the wiser about the mapped ports.
  11. I am aware of this, but it is exposed on the host. From a security standpoint it really is a no-go to expose services that shouldn't (have to) be exposed, even if it's only on the local network.

     I'm not using either br0 or host; I have custom bridge Docker networks. Ports of the containers themselves are available to other containers within the same network, but as long as I do not have a port mapping defined, they will not be exposed on the host or reachable from the network my Unraid server is in. For example, I have a network called "webhost" containing my nginx, PHP, and MySQL containers. I only have to expose nginx on the host, and I do not want my MySQL service to be exposed.

     I'm confused about this; these two statements seem contradictory. I use the Auto Update Applications plugin to automatically update the containers, and I've now had a couple of cases where some Docker containers were unable to start due to conflicting port mappings that were re-added after they got updated.

     I'm not really familiar with how the template system works, but I was under the impression that Unraid keeps track of these fields in the XML on a per-container basis. Perhaps a solution would be, instead of removing the field, to mark it as "disabled" or leave its value empty? This way, if the template updates, the field still exists but is just "disabled".
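     As a rough sketch of the setup described above (names and image tags are illustrative): containers on a user-defined bridge network can reach each other without any ports being published on the host, and only containers started with -p are exposed.

     ```shell
     # Create a user-defined bridge network for the web stack.
     docker network create webhost

     # MySQL joins the network but publishes nothing: other containers on
     # "webhost" can reach it at mysql:3306, but the host/LAN cannot.
     docker run -d --name mysql --network webhost mysql:8.0

     # Only nginx is published on the host (port 80).
     docker run -d --name nginx --network webhost -p 80:80 nginx:latest
     ```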
  12. That could theoretically be every container, given the current way templates work in Unraid. Especially with new fields potentially being added, it would be more of a hassle to figure out why a container suddenly breaks because a new field got added that I then don't have, versus having to re-remove a port (although the downtime is annoying). If a specific field already existed before and I chose to remove it, that choice should take precedence over any future updates to that specific field. That's why I think they should keep track of this and apply it when the template updates (so every field that's marked as removed stays removed). Obviously new variables and ports should still be added, but existing ones should conform to whatever the user configured them to be (including a removed state).

     Because I use a lot of networks between my containers to further isolate them from each other, they generally use internal communication. Exposing the port on the host is of no benefit to me, and I do not want to add an additional firewall to my Unraid server to isolate it as a host. I'd rather just prevent any ports from opening up, and only open ports (if required) to my WAN in the router.
  13. I've had a few occurrences lately where some of my Docker containers would be stopped because, after they were automatically updated, some of the port assignments that I had removed came back. I found that this has apparently been discussed before, but no actual fix seems to have been implemented; rather, a workaround was discussed. Are there any plans to create an actual fix for this? I don't want to create unnecessary port assignments for my Docker containers just so they don't keep re-appearing and causing issues. Perhaps if a user removes a port, it should be marked as "deleted" in the template and thus should not re-appear even if there is an update for the template (this would require actually processing the template file and comparing changes during an update). If an actual solution (not a workaround) already exists, I'd be happy to hear that too.
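     The merge behaviour proposed above could be sketched roughly like this (a hypothetical illustration, not Unraid's actual template code — all names here are made up): a field the user removed keeps a "removed" marker in the stored template, so a template update neither re-adds it nor overrides the user's other values, while genuinely new fields still come in.

     ```python
     def merge_template(stored: dict, update: dict) -> dict:
         """Merge an updated template into the user's stored template.

         Both arguments map field names to config dicts. A stored field
         marked {"removed": True} stays removed even if the update still
         defines it; a user-configured value wins over the update; fields
         only present in the update are genuinely new and get added.
         """
         merged = {}
         for name, field in update.items():
             if stored.get(name, {}).get("removed"):
                 merged[name] = {"removed": True}   # user deletion wins
             elif name in stored:
                 merged[name] = stored[name]        # user's value wins
             else:
                 merged[name] = field               # new field is added
         return merged

     # Illustrative data: the user removed one port and changed a variable.
     stored = {
         "Port 25565": {"removed": True},
         "Variable EULA": {"value": "true"},
     }
     update = {
         "Port 25565": {"value": "25565"},      # update tries to re-add it
         "Variable EULA": {"value": "false"},
         "Port 25575": {"value": "25575"},      # new field in the update
     }
     result = merge_template(stored, update)
     # The removed port stays removed, the user's value is kept,
     # and the new port from the update is added.
     ```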
  14. Would it somehow be possible to reserve RAM for this? I have 64GB of RAM in my server, and it would be nice to allocate a few GB for this, but I'd also like to guarantee that no other processes use that RAM (there's way more than enough, so I can spare a bunch). EDIT: nvm, forgot about the head_size being 60MB (or even more).