OdinEidolon

Everything posted by OdinEidolon

  1. Still doubtful on how I'd do that. There must be a simple way! Creating a custom `/etc/netdata/override/netdata.conf` does not work, and modifying the default file is overwritten on every container update. I guess one could add a path to the docker container template, but it does not seem like a proper solution to me.
  2. Hi and thanks for this docker template! I am trying to add settings to the `/override` folder, correctly mounted to `/etc/netdata/override` as per the template's default variables. I'd like to add simple settings that apply to `netdata.conf`, such as `hostname = hello-world`, without having to modify the `netdata.conf` file manually. Is that possible, or are `override` configs only available for alarms and such? I can't find any documentation about that.
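For reference, this is the kind of fragment I'd like to drop into the override folder (the `[global]` section and the `hostname` option are from netdata's stock `netdata.conf`; whether the container actually merges override files into `netdata.conf` like this is exactly what I'm unsure about):

```ini
# netdata.conf fragment I'd like to apply via /etc/netdata/override
# (assumption: the container merges these into the main netdata.conf)
[global]
    hostname = hello-world
```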
  3. Hi @ich777, I am testing out the PhotoPrism docker, thanks for that. I see you mention that one could use an external MariaDB docker as the DB. Why should I? Would I gain stability/speed by doing so, rather than using the SQLite you built in?
  4. Same. I followed the suggestions from the last few pages: 1) loading a different docker image, then reloading `:latest`; 2) deleting the MariaDB log files. But my NC is still unable to start.
  5. Does anybody have any hint about what's going on here? I do not understand if this is an issue on DuckDNS's side or some configuration mishap.
  6. SWAG stopped working for me, using duckdns. It worked OK for the last several months. I did not do any config change. Here's the docker log. Any idea?

```
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
using keys found in /config/keys
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=Europe/Berlin
URL=mydomain.duckdns.org
SUBDOMAINS=wildcard
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
VALIDATION=duckdns
CERTPROVIDER=
DNSPLUGIN=
EMAIL=mymail@mail.com
STAGING=false

grep: /config/nginx/resolver.conf: No such file or directory
Setting resolver to 127.0.0.11
grep: /config/nginx/worker_processes.conf: No such file or directory
Setting worker_processes to 4
Using Let's Encrypt as the cert provider
SUBDOMAINS entered, processing
Wildcard cert for only the subdomains of mydomain.duckdns.org will be requested
E-mail address entered: mymail@mail.com
duckdns validation is selected
the resulting certificate will only cover the subdomains due to a limitation of duckdns, so it is advised to set the root location to use www.subdomain.duckdns.org
Different validation parameters entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
Saving debug log to /var/log/letsencrypt/letsencrypt.log
No match found for cert-path /config/etc/letsencrypt/live/mydomain.duckdns.org/fullchain.pem!
Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /var/log/letsencrypt/letsencrypt.log or re-run Certbot with -v for more details.
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Account registered.
Requesting a certificate for *.mydomain.duckdns.org
Hook '--manual-auth-hook' for mydomain.duckdns.org ran with output:
 OK
sleeping 60
Hook '--manual-auth-hook' for mydomain.duckdns.org ran with error output:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
  (curl progress meter trimmed)

Certbot failed to authenticate some domains (authenticator: manual). The Certificate Authority reported these problems:
  Domain: mydomain.duckdns.org
  Type:   dns
  Detail: DNS problem: SERVFAIL looking up TXT for _acme-challenge.mydomain.duckdns.org - the domain's nameservers may be malfunctioning
```

Has anybody had any problem with duckdns recently? Of course I checked that all the settings, including the token, are correct.
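In case it helps anyone hitting the same SERVFAIL: here is roughly how I checked DuckDNS by hand. The domain and token below are placeholders; the update-API parameters (`domains`, `token`, `txt`, `verbose`) are from DuckDNS's own spec page, the rest is just my guesswork at a diagnosis.

```shell
#!/bin/sh
# 1) Ask a resolver for the exact record certbot failed on:
#      dig +short TXT _acme-challenge.mydomain.duckdns.org
#    If this also SERVFAILs while the token is known-good, the problem is
#    on DuckDNS's nameservers, not in the SWAG config.
# 2) Push a TXT record by hand through DuckDNS's update API and re-check:
DOMAIN="mydomain"    # subdomain part only, as DuckDNS expects
TOKEN="YOUR_TOKEN"   # placeholder: the token from the DuckDNS account page
URL="https://www.duckdns.org/update?domains=${DOMAIN}&token=${TOKEN}&txt=test&verbose=true"
echo "$URL"          # fetch it with: curl -s "$URL"  (DuckDNS answers OK or KO)
```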
  7. Was it just a matter of opening more ports?
  8. I personally have not tested it yet... Here is a script to deploy the client and the server: https://github.com/GrimKriegor/TES3MP-deploy
Here is a guide; it says it should be rather simple, but it does not mention Linux: https://steamcommunity.com/groups/mwmulti/discussions/1/133258593388999187/
Here is an older guide, but one specific to Linux: https://steamcommunity.com/groups/mwmulti/discussions/1/133258092238983950/
Hope it helps a little bit. Once again, thank you very much.
  9. Hey @ich777, would you be able to provide a TES3MP server? Here's the github repo: https://github.com/TES3MP/openmw-tes3mp
  10. Just FYI: the NWN server is broken when using the 'latest' NWN, because the new 'latest' download is no longer https://github.com/nwnxee/unified/releases/download/buildlatest/NWNX-EE.zip but https://github.com/nwnxee/unified/releases/download/latest/NWNX-EE.zip
  11. Done! Thank you so much! Had to manually mount user.sh this way in the advanced docker options in Unraid:

```
--mount type=bind,source=/my/path/to/user.sh,target=/opt/scripts/user.sh
```

user.sh does:

```
#!/bin/bash
# Should be located in /opt/scripts/user.sh
SSHD_CONFIG="Port 22\nPermitRootLogin no\nChallengeResponseAuthentication no\nUsePAM yes\nX11Forwarding no\nPrintMotd no\nAcceptEnv LANG LC_*\n"
PASSWORD="myveryownpassword" # setting up keys is too much hassle ;)

if [ ! -f /usr/sbin/sshd ]; then
    echo "### Installing missing packages ###"
    apt update && apt-get -y install ssh nano
    echo -e $SSHD_CONFIG > /etc/ssh/sshd_config
    echo "nwnee:$PASSWORD" | chpasswd
fi
echo "### Starting ssh ###"
service ssh start
```
  12. Correct me if I am wrong. I would do this by:
    1. Configuring the container to open port 22 to the Unraid server (mapped to port 22222 or something)
    2. Entering the container with `docker exec -it NWN /bin/bash`
    3. `apt update; apt install ssh`
    4. Editing the ssh config in `/etc/ssh/sshd_config`
    5. `service ssh start`
    6. Setting the ssh service to start on container boot (I do not know how to do this)
    7. Setting a new password for the nwnee user inside the container
    8. `ssh nwnee@172.18.0.12` (or the exact container IP)
    9. `screen -xS nwnee`
Am I missing anything? The only thing I have no idea how to do is step 6: I have tried both creating `/etc/init/ssh.conf` and using `update-rc.d`. Any suggestion?
  13. OK, I see. But then how can I access the container controls (via screen) without being root? I need to give access to the container to some friends who are not allowed to access my server as root.
  14. Hi @ich777, I am trying to give access to the NWN server to somebody outside my house (but inside my VPN). To do so I:
    1. Created a new user in Unraid called nwnee (uid:gid 1002:1002), to which my friends can ssh
    2. Ran the docker with the additional option `--user nwnee`
    3. Chowned -R the NWN files to user nwnee
    4. Also tried setting the docker UID and GID env vars to 1002:1002 instead of 99:100
However, I still get this error when running the docker:

```
/usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: "/opt/scripts/start.sh": stat /opt/scripts/start.sh: permission denied": unknown.
```

The docker run line is:

```
/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='NWN' --net='lsio' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'LOG_LVL'='7' -e 'MOD_NAME'='World of Greyhawk CEP 2_65' -e 'NWNEE_V'='latest' -e 'MAX_CLIENTS'='10' -e 'MINLEVEL'='1' -e 'MAXLEVEL'='40' -e 'PAUSEAPLAY'='1' -e 'PVP'='2' -e 'SERVERVAULT'='1' -e 'ELC'='0' -e 'ILR'='0' -e 'ONEPARTY'='0' -e 'DIFF'='4' -e 'AUTO_SAV_I'='60' -e 'SRV_NAME'='giofonchio' -e 'PPW'='m' -e 'APWD'='mm' -e 'PUBLIC_SRV'='0' -e 'RLD_W_E'='0' -e 'GAME_PARAMS'='-dmpassword mmm' -e 'UID'='1002' -e 'GID'='1002' -e 'UMASK'='000' -p '5121:5121/udp' -v '/mnt/cache/appdata/nwnee':'/nwnee':'rw' -v '/mnt/user/Storage/Games/nwnee/wog':'/nwnee/Neverwinter Nights':'rw' -dit --restart=unless-stopped --user nwnee 'ich777/nwnee-server'
```

And of course, if I do not specify `--user` in the docker run and instead try to access the docker as user nwnee, I get permission denied:

```
nwnee@giofonchio> docker exec -u nwnee -ti NWN screen -xS nwnee
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/NWN/json: dial unix /var/run/docker.sock: connect: permission denied
```

I'm not sure how to fix this. Can you help?
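One idea I'm considering for the socket-permission part (my own untested assumption, not something from the template): rather than adding nwnee to the docker group, which is effectively root access, whitelist the single attach command through sudo. Container name (NWN) and screen session (nwnee) are from my setup above; the sudoers rule itself is hypothetical.

```shell
#!/bin/sh
# Hypothetical sudoers rule: let user nwnee run exactly one docker command.
# The docker-group shortcut would also work but is root-equivalent, so this
# scopes access down to the screen-attach only.
RULE='nwnee ALL=(root) NOPASSWD: /usr/bin/docker exec -u nwnee -ti NWN screen -xS nwnee'
echo "$RULE"
# For real use (as root):
#   echo "$RULE" > /etc/sudoers.d/nwnee-nwn && chmod 0440 /etc/sudoers.d/nwnee-nwn
# after which my friends would run:
#   sudo /usr/bin/docker exec -u nwnee -ti NWN screen -xS nwnee
```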
  15. It's a good solution, yes. Thank you, will test in a few days.
  16. Sorry, I somehow missed your comment. My bad! So, basically, how the server works is that you launch it with a module and it JustWorksTM. However, there are a number of things you can do if you can access the server directly. Options can be changed in-game, you can load and save games (very useful if you and your party want to retry a game section, or to save before a major boss), you can kick users whose NWN has crashed (which happens often, unfortunately) so that they can log back in quickly, etc. This might not be useful when hosting an always-on persistent world, but it is essential when playing a module with a group of friends. As described in Beamdog's docker (link), this is accomplished by running the docker with `-dit` and then attaching to it (with `docker attach`). Once you are attached, write `help` and you'll be able to read on STDOUT all the available commands (load, save, kick, etc.), which you can execute. For example:

```
save 77 test_name   ### saves the savegame "test_name" on slot 77
load 77             ### loads slot 77
difficulty 4        ### sets difficulty to max
oneparty 1          ### allow only one party
kick 6              ### kick player 6 from the server
### many more commands are available
```

With the modification currently detaching STDIN this does not work, and the server is effectively useful only for playing persistent worlds.
  17. Hi @ich777, I might have found a bug in the new NWN docker. In the latest commits you introduced a SIGTERM handler in start.sh. However, to do so you background the started server, which detaches STDIN. The NWN server requires an attached STDIN to give it commands (load, save, set options, etc.); this is detailed in the Beamdog docker as well. Thus, with this update, one cannot control the server directly anymore. Would you be able to fix it? Reverting the addition of the handler would be enough. Thanks!
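For what it's worth, here is a sketch of what I mean; this is my own guess at a compatible pattern, not the actual start.sh. POSIX sh points a background job's stdin at /dev/null unless it is explicitly redirected, so the server can be backgrounded for the trap while still inheriting the script's STDIN. `run_with_term_handler` is a name I made up; the real nwserver invocation would go in place of `"$@"`.

```shell
#!/bin/sh
# Sketch: forward SIGTERM to a backgrounded server without losing its console.
run_with_term_handler() {
    # Relay SIGTERM (what docker stop sends) to the child process.
    trap 'kill -TERM "$pid" 2>/dev/null' TERM
    # Background the command so the trap can fire, but hand it our own
    # stdin with <&0 so console commands (load, save, ...) still reach it.
    "$@" <&0 &
    pid=$!
    wait "$pid"
}
```

A quick check that the child still sees the script's stdin: `echo 'save 77 test_name' | run_with_term_handler cat` echoes the command back, where a plain `cat &` would read /dev/null.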
  18. Is there anybody that can answer my question? Because my drives are all off right now, but the fan is still spinning 100%, for some unfathomable reason. Is there any log I can access?
  19. I have a question regarding System AutoFan. Does the plugin take into account the temperature of the drives, that of the CPU, or both? Because I'd like to control both, but clearly an identical threshold for both would make no sense. It'd be nice to have two different threshold groups, and apply the fastest fan speed coming from both calculations.
  20. Hi and thanks for this docker! Would you be able to suggest how to make Netdata automatically recognise HDDtemp's data coming from Atribe's HDDtemp docker image? (support here: )
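In case the answer is simply "point netdata's hddtemp collector at the container", this is the kind of fragment I'd expect to need. The file path and the `host`/`port` option names are from netdata's python.d hddtemp collector as I understand it; the host value is a placeholder for wherever Atribe's container actually listens, and 7634 is hddtemp's default daemon port.

```yaml
# /etc/netdata/python.d/hddtemp.conf  (assumed path inside the netdata container)
local:
  name: 'hddtemp'
  host: '192.168.1.10'   # placeholder: IP of the host running the hddtemp docker
  port: 7634             # hddtemp's default TCP port
```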
  21. Thanks for the reply! Yes, I believe you can save in-game if you log in as admin, but I'm not sure that's true; I can't find how to do it anyway. What we did when using the Beamdog official docker was use `docker attach`, which would give us a console in which to type commands. Now I found out that we could do that thanks to the `-dit` docker options (I'm a docker noob, sorry), so adding them to your container works as well. However, I still think the GAME_PARAMS variable does nothing: I pass `-load 9` as the value (without quotes) and still game number 9 is not loaded (and nothing to that effect is logged).