aterfax

Members
  • Posts: 61
  • Joined
  • Last visited

Everything posted by aterfax

  1. Was this an issue for other users? Did this get resolved in the container?
  2. Likewise, reverting now. If the docker image has been updated / rebased on Debian, a cursory Google search suggests this may be related: https://github.com/matrix-org/synapse/issues/4431 Edit: Reverting to the previous image works fine. Filed an issue on GitHub for this.
  3. By default the apt mirroring seems to be missing the cnf directory. This caused my Ubuntu machines to throw the 'Failed to fetch' error from my original post. I added the cnf.sh script linked above to the config directory, edited it to support "jammy", then ran it from inside the running container to download the missing files. Script as follows:

```bash
#!/bin/bash
cd /ubuntu-mirror/data/mirror/
for p in "${1:-jammy}"{,-{security,updates}}/{main,restricted,universe,multiverse}; do
    >&2 echo "${p}"
    wget -q -c -r -np -R "index.html*" \
        "http://archive.ubuntu.com/ubuntu/dists/${p}/cnf/Commands-amd64.xz"
    wget -q -c -r -np -R "index.html*" \
        "http://archive.ubuntu.com/ubuntu/dists/${p}/cnf/Commands-i386.xz"
done
```

This resolved the 'Failed to fetch' problem. This could be added to the container in one way or another, I suspect, but according to the link above it does not appear to be something that needs to be run regularly.
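For anyone puzzling over the loop header: the brace expansion does the heavy lifting, generating every suite/component path the mirror needs. A quick illustration of what it produces (bash only; "jammy" is simply the script's default suite):

```shell
# Expands to 12 suite/component paths:
# jammy/main, jammy/restricted, ..., jammy-updates/multiverse
printf '%s\n' jammy{,-{security,updates}}/{main,restricted,universe,multiverse}
```

Pass a different release codename as `$1` to the script to mirror another suite.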
  4. Is the ich777/ubuntu-mirror docker working correctly? Seeing errors:

```
E: Failed to fetch https://ubuntu.mirrors.domain.com/ubuntu/dists/jammy/main/cnf/Commands-amd64  404  Not Found [IP: 192.168.1.3 443]
```

Which may relate to this note from https://www.linuxtechi.com/setup-local-apt-repository-server-ubuntu/ - "Note: If we don't sync the cnf directory then on client machines we will get the following errors, so to resolve these errors we have to create and execute the above script." Edit: Following the instructions above to grab the cnf directory seems to have resolved the issue.
  5. A general warning from me: I seem to be getting horrible memory leaks with Readarr. After a few hours it consumes all RAM / CPU and grinds my Gen 8 MicroServer to a halt.
  6. To be explicit, these are my volume mounts for working SSL:

```
/data/ssl/server.crt → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/cert.pem
/data/ssl/ca.crt     → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/chain.pem
/data/ssl/server.key → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem
```

I do not recall the exact details of why the above is optimal, but I suspect that Poste makes its own full-chain cert, which results in some cert mangling if you give it your fullchain cert rather than each file separately (various internal services inside the docker need different formats). I believe that without the mounts as above, the administration portal will be unable to log you in. @brucejobs You might want to check if this is working for you / Poste may have fixed the above.

----------------------------------------------------------------

To move back to the Swag docker itself.
My own nginx reverse proxy config for the Swag docker looks like:

```nginx
# mail
server {
    listen 443 ssl http2;
    server_name mailonlycert.DOMAIN.com;

    ssl_certificate /etc/letsencrypt/live/mailonlycert.DOMAIN.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://10.0.0.1:444;
        proxy_read_timeout 90;
        proxy_redirect https://10.0.0.1:444 https://mailonlycert.DOMAIN.com;
    }
}
```

Some adjusting would be needed if you have multiple SSL certs, and you should take care if using specific per-domain certs, as per the documentation here: https://certbot.eff.org/docs/using.html#where-are-my-certificates

The SSL configuration is effectively duplicated from /config/nginx/ssl.conf, so if you are only using one certificate file it could be simplified with:

include /config/nginx/ssl.conf;

Likewise for the proxy configuration: if you are content with the options in /config/nginx/proxy.conf, you can simplify with:

include /config/nginx/proxy.conf;
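Putting those two includes together, a trimmed-down variant of the server block could look like the following (a sketch only, assuming the stock SWAG ssl.conf / proxy.conf contents suit your setup and you only need one certificate):

```nginx
# mail - simplified variant relying on SWAG's bundled include files
server {
    listen 443 ssl http2;
    server_name mailonlycert.DOMAIN.com;

    # Pulls in the certificate / cipher settings instead of repeating them.
    include /config/nginx/ssl.conf;

    location / {
        # Pulls in the proxy_set_header / timeout defaults.
        include /config/nginx/proxy.conf;
        proxy_pass https://10.0.0.1:444;
        proxy_redirect https://10.0.0.1:444 https://mailonlycert.DOMAIN.com;
    }
}
```

Test with `nginx -t` inside the container before reloading, since the include files change between SWAG releases.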
----------------------------------------------------------------

When using includes, just be sure that the included file has what you need. E.g. the option:

proxy_http_version 1.1;

is particularly important if you are using websockets on the internal service.

In some cases (Jellyfin perhaps) you may also want additional statements like the following for connection rate limiting.

Outside your server block:

limit_conn_zone $binary_remote_addr zone=barfoo:10m;

Inside your server block:

```nginx
location ~ Items\/.*\/Download.* {
    proxy_buffering on;
    limit_rate_after 5M;
    limit_rate 1050k;
    limit_conn barfoo 1;
    limit_conn_status 429;
}
```

----------------------------------------------------------------

Cheers, (hope there's no typos!)
  7. The SWAG docker uses certbot. See: https://github.com/linuxserver/docker-swag/blob/master/Dockerfile

/mnt/usr/appdata/swag/KEYS/letsencrypt/ is a symlink. Due to the way docker mounts things, you are better off avoiding mounting anything symlinked, as the appropriate file paths must exist inside the container. If you mount the symlink, it will point at things that do not exist within a given container, which is why method 1 below required you to mount the entire config folder - a bad practice. Mounting a symlinked cert directly will work, but mounting directly from /mnt/usr/appdata/swag/KEYS/letsencrypt/ will almost certainly break the minute you attempt to get certs for more than one domain.

You appear to be talking about generating a single certificate with multiple subdomains; this is going to generate only one certificate and one folder for it to sit in. Feel free to make an issue on their GitHub if you are so convinced that you know their container better than they do: https://github.com/linuxserver/docker-swag/issues You can argue the toss about how you should correctly mount things, but you really should be doing method 2. https://hub.docker.com/r/linuxserver/swag

I am not using the letsencrypt docker, I am using SWAG, which is a meaningless distinction since they are the same project under a different name due to copyright issues. You do not really appear to be reading anything linked properly nor understanding anything fully. I'm not continuing with this dialogue.
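The symlink point is easy to demonstrate outside docker. Certbot's live/ entries are relative symlinks into archive/, so if only the folder holding the symlinks is made visible (which is what a bind mount does), the link targets no longer exist. A toy sketch with made-up paths:

```shell
# Recreate the certbot layout: live/ holds relative symlinks into archive/.
cd "$(mktemp -d)"
mkdir -p host/archive/example.com host/live/example.com
echo "CERTDATA" > host/archive/example.com/cert1.pem
ln -s ../../archive/example.com/cert1.pem host/live/example.com/cert.pem

# Resolves fine while the whole tree is visible:
cat host/live/example.com/cert.pem

# Simulate mounting only live/ into a container: the symlink survives the
# copy, but its target path does not exist in the new namespace.
cp -R host/live standalone-live
cat standalone-live/example.com/cert.pem || echo "dangling symlink - mount the real file instead"
```

This is why mounting the underlying files individually (as in the /data/ssl mounts elsewhere in this thread) is the more robust approach.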
  8. You are familiar with the concept of the past I assume? Edit: Here are the docs you need, your certificate files are not in the keys folder: https://certbot.eff.org/docs/using.html#where-are-my-certificates
  9. 1 - Yes, these are paths.
     2 - I added mail.domain.com - naturally, you need to have set up your CNAME / MX records and forwarded the ports in your router/network gateway to the Poste.io server. If you want the Poste.io web GUI to be accessible externally as well, you will also need to set up the correct reverse proxy with SWAG. I don't know what you mean by .config.
     3 - I blanked the folder in the screenshot as I do not want to share my domain name, but yes, the subfolder with your domain name / subdomain name is where the PEM files should be if you have set up SWAG correctly to get SSL certificates. If you are lacking certificates, you have probably got it set up incorrectly.
  10. EDIT: UPDATED MAY 2021! I ended up mounting the default certificate files in the docker directly to the certificates from my letsencrypt docker. To be explicit, these are my volume mounts for working SSL:

```
/data/ssl/server.crt → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/cert.pem
/data/ssl/ca.crt     → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/chain.pem
/data/ssl/server.key → /mnt/user/appdata/letsencrypt/etc/letsencrypt/live/mailonlycert.DOMAIN.com/privkey.pem
```

I do not recall the exact details of why the above is optimal, but I suspect that Poste makes its own full-chain cert, which results in some cert mangling if you give it your fullchain cert rather than each file separately (various internal services inside the docker need different formats). I believe that without the mounts as above, the administration portal will be unable to log you in.

As I mentioned further up in this thread, you can alternatively mount .well-known folders between your Poste.io and letsencrypt dockers - this will not work if your domain has HSTS turned on with redirects to HTTPS (or this was the case with the version of letsencrypt in the docker a while ago, as reported here: https://bitbucket.org/analogic/mailserver/issues/749/lets-encrypt-errors-with-caprover )
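Since Poste wants the leaf cert and the chain as separate files while certbot also writes a combined fullchain.pem, the relationship is worth knowing: fullchain.pem is simply cert.pem followed by chain.pem. A toy sketch splitting one back apart (hypothetical file names, dummy data - with certbot you would normally just mount the real cert.pem and chain.pem as above):

```shell
cd "$(mktemp -d)"

# Build a dummy two-block "fullchain": leaf certificate first, chain after.
printf -- '-----BEGIN CERTIFICATE-----\nLEAF\n-----END CERTIFICATE-----\n' > fullchain.pem
printf -- '-----BEGIN CERTIFICATE-----\nCHAIN\n-----END CERTIFICATE-----\n' >> fullchain.pem

# First PEM block goes to cert.pem, everything after it to chain.pem.
awk '/BEGIN CERTIFICATE/{n++} {print > (n==1 ? "cert.pem" : "chain.pem")}' fullchain.pem

cat cert.pem    # contains the LEAF block
cat chain.pem   # contains the CHAIN block
```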
  11. You have advanced view turned on, you can toggle this at the top right.
  12. I too would like these to be added however I have solved the problem elsewhere before: https://forums.unraid.net/topic/71259-solved-docker-swarm-not-working/?do=findComment&comment=760934
  13. Many thanks for the explanation - I was not aware that certain things would have issues with the virtual paths, as it were. The script for L4D you sent me works great and starts everything in a screen, which is super. In terms of screen / tmux support for game servers - I have gotten quite used to https://linuxgsm.com/ which keeps each server console in its own tmux and has a robust logging setup plus support for many game servers. Accessing the console via SSH - or, if in a docker, via the Unraid or portainer console - is certainly my preference over RCON, given RCON's shortcomings. The developer (Dan Gibbs) has actually been looking for someone able to maintain / make a docker from his project for some time - so you might find that working with his project is easier to maintain and supports more game server types? His solution as it stands is excellent, supporting automatic install of plugins etc... I'd love to see a collab!
  14. I do mean over SSL - you can access your docker consoles from the docker page inside Unraid, after all, or in setups like mine, via a portainer instance hosted with SSL protection that has access to all running dockers. I was using /mnt/user/appdata/l4d2 before getting it working on the cache drive - is there a reason why you cannot run this from an appdata dir like all my other dockers? I really have no idea what could be causing a segfault in /mnt/user/appdata/l4d2, unless the file system is doing something wacky in the background with the cache. E.g. below:
  15. Very odd - I now seem to have it working OK, but only when on the cache drive. Edit - the ask for access to the server console via screen is kind of agnostic of the server type - some things will not support RCON, and RCON itself is an insecure protocol - I'd rather just log in over SSL and deal with things via portainer or the Unraid docker console.
  16. Just purged the folder and reinstalled the app from the Community Apps plugin again today, same issue. I have been installing to /mnt/user/appdata/l4d2 rather than the cache from the start. I am on Unraid 6.8.3. I can try using my cache drive - pretty sure my appdata folder is set to prefer mode, so I will swap that now and see if that helps. See logs:

```
Update state (0x101) committing, progress: 58.67 (4812218723 / 8202656842)
Update state (0x101) committing, progress: 58.68 (4813019123 / 8202656842)
Update state (0x101) committing, progress: 58.69 (4814223871 / 8202656842)
Update state (0x101) committing, progress: 58.70 (4815004139 / 8202656842)
Update state (0x101) committing, progress: 58.71 (4815629318 / 8202656842)
Update state (0x101) committing, progress: 58.72 (4816202362 / 8202656842)
Update state (0x101) committing, progress: 58.72 (4816703629 / 8202656842)
Update state (0x101) committing, progress: 58.73 (4817122866 / 8202656842)
Update state (0x101) committing, progress: 58.74 (4817962729 / 8202656842)
Update state (0x101) committing, progress: 58.75 (4818946359 / 8202656842)
Update state (0x101) committing, progress: 58.76 (4819688920 / 8202656842)
Update state (0x101) committing, progress: 58.76 (4820238981 / 8202656842)
Update state (0x101) committing, progress: 58.77 (4820707914 / 8202656842)
Update state (0x101) committing, progress: 62.82 (5152699145 / 8202656842)
Update state (0x101) committing, progress: 62.84 (5154537738 / 8202656842)
Update state (0x101) committing, progress: 73.26 (6009561231 / 8202656842)
Update state (0x101) committing, progress: 73.27 (6010194234 / 8202656842)
Update state (0x101) committing, progress: 73.28 (6010693799 / 8202656842)
Update state (0x101) committing, progress: 73.29 (6011708900 / 8202656842)
Update state (0x101) committing, progress: 73.30 (6012476650 / 8202656842)
Update state (0x101) committing, progress: 73.31 (6013542086 / 8202656842)
Update state (0x101) committing, progress: 73.32 (6014328143 / 8202656842)
Update state (0x101) committing, progress: 73.39 (6019990404 / 8202656842)
Update state (0x101) committing, progress: 73.40 (6020942346 / 8202656842)
Update state (0x101) committing, progress: 73.41 (6021653635 / 8202656842)
Update state (0x101) committing, progress: 92.92 (7622095264 / 8202656842)
Update state (0x101) committing, progress: 92.93 (7623095284 / 8202656842)
Update state (0x101) committing, progress: 92.95 (7624106147 / 8202656842)
Update state (0x101) committing, progress: 92.96 (7625166870 / 8202656842)
Update state (0x101) committing, progress: 93.03 (7630646624 / 8202656842)
Update state (0x101) committing, progress: 93.04 (7631921404 / 8202656842)
Update state (0x101) committing, progress: 97.85 (8026301170 / 8202656842)
Success! App '222860' fully installed.
---Prepare Server---
---Server ready---
---Start Server---
Server will auto-restart if there is a crash.
Setting breakpad minidump AppID = 222860
Using breakpad crash handler
Forcing breakpad minidump interfaces to load
Looking up breakpad interfaces from steamclient
Calling BreakpadMiniDumpSystemInit
Segmentation fault
Add "-debug" to the /serverdata/serverfiles/srcds_run command line to generate a debug.log to help with solving this problem
Fri 14 Aug 2020 10:30:46 PM BST: Server restart in 10 seconds
Setting breakpad minidump AppID = 222860
Using breakpad crash handler
Forcing breakpad minidump interfaces to load
Looking up breakpad interfaces from steamclient
Calling BreakpadMiniDumpSystemInit
Segmentation fault
```
  17. More debug - further debug log output:

```
/serverdata/serverfiles/srcds_run: 1: /serverdata/serverfiles/srcds_run: gdb: not found
WARNING: Please install gdb first.
goto http://www.gnu.org/software/gdb/
Server will auto-restart if there is a crash.
Setting breakpad minidump AppID = 222860
Using breakpad crash handler
Forcing breakpad minidump interfaces to load
Looking up breakpad interfaces from steamclient
Calling BreakpadMiniDumpSystemInit
Segmentation fault
Add "-debug" to the /serverdata/serverfiles/srcds_run command line to generate a debug.log to help with solving this problem
Thu 13 Aug 2020 11:59:30 PM BST: Server restart in 10 seconds
---Server ready---
---Start Server---
Enabling debug mode
/serverdata/serverfiles/srcds_run: 178: ulimit: error setting limit (Operation not permitted)
/serverdata/serverfiles/srcds_run: 1: /serverdata/serverfiles/srcds_run: gdb: not found
WARNING: Please install gdb first.
goto http://www.gnu.org/software/gdb/
Server will auto-restart if there is a crash.
```
  18. Ah right - could I recommend launching all servers inside a screen or a tmux as an enhancement? As for the error with left4dead - I purged and reinstalled twice, leaving it to validate the second time, and it resulted in the following (part of why I wanted the console):

```
Add "-debug" to the /serverdata/serverfiles/srcds_run command line to generate a debug.log to help with solving this problem
Thu 13 Aug 2020 10:48:24 PM BST: Server restart in 10 seconds
Setting breakpad minidump AppID = 222860
Using breakpad crash handler
Forcing breakpad minidump interfaces to load
Looking up breakpad interfaces from steamclient
Calling BreakpadMiniDumpSystemInit
Segmentation fault
```
  19. Any of the Source servers, e.g. left4dead2, which now appears to be stuck in a boot loop.
  20. Does this docker have any support for accessing the running console of the game through screen or tmux? I cannot see any documented way to access the running console to run server commands.
  21. I might be wrong, but when I have set a different UID and GID, this does not seem to be respected. Edit: Looking at the files, the UID and GID are currently hardcoded in the Dockerfile, and I do not see anything in the start scripting to change them from the default values of 99 and 100. So if you need to change this, you currently need to take the Dockerfile and build the image yourself. It might be an improvement to add something to the built-in start script to account for this. Many thanks @ich777
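For reference, the usual pattern for making this configurable at container start (a generic sketch, not ich777's actual script - the PUID/PGID variable names and the `steam` account are assumptions) looks something like:

```shell
#!/bin/bash
# Generic entrypoint sketch: read the desired IDs from environment
# variables, falling back to the defaults of 99 / 100 mentioned above.
PUID="${PUID:-99}"
PGID="${PGID:-100}"
echo "Running with UID=${PUID} GID=${PGID}"
# Inside a real image you would then remap the service account, e.g.:
# groupmod -o -g "$PGID" steam
# usermod  -o -u "$PUID" steam
# chown -R "$PUID":"$PGID" /serverdata
```

The commented remap calls would need root inside the container, which is why this normally lives in the entrypoint before privileges are dropped.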
  22. Looks like this is related to the VM XML config:

```xml
<hostdev type="pci" mode="subsystem" managed="yes">
  <driver name="vfio"/>
  <source>
    <address domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
  </source>
  <rom file="/boot/vbios/gtx 1060 dump.rom"/>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x0"/>
</hostdev>
<hostdev type="pci" mode="subsystem" managed="yes">
  <driver name="vfio"/>
  <source>
    <address domain="0x0000" bus="0x04" slot="0x00" function="0x1"/>
  </source>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0"/>
</hostdev>
```

Assuming these are the GPU and GPU audio devices, you have the audio in a different slot in the VM. These need to match the physical layout of the source hardware, i.e. same slot, different function. Correct the second address to the same slot and one function higher, i.e.:

<address type="pci" domain="0x0000" bus="0x00" slot="0x05" function="0x1"/>

which will probably fix the issue. This is assuming that the passthrough is actually working and the VM can actually see the GPU hardware.
  23. You should read what I said, then - "I still get an RMRR warning on an Ethernet passthrough on my server" - i.e. I get the warning on a device that is passing through correctly. Ergo, a red herring.
  24. I still get an RMRR warning on an Ethernet passthrough on my server - given it is working in the VM, I assume you are getting a red herring of a warning. Edit: I suppose you can test by setting up an Ubuntu / Linux VM and taking a look for the device.
  25. The IOMMU RMRR warning is probably a red herring. I suspect you need to follow the link I sent above to attempt to avoid error 43: https://mathiashueber.com/fighting-error-43-nvidia-gpu-virtual-machine/ Specifically - "KVM hidden" via the libvirt XML - you will need to edit your VM in XML mode (top-right toggle in the VM editor). Another user fixed it here: You should probably search the forums first?