Ding Dong Del

Everything posted by Ding Dong Del

  1. +1 from me - I know most of the time you guys hear about what isn't working. @binhex your work is a key part of my homelab / selfhosted setup, thank you.
  2. Got the same error on first run. It was a permissions error for me - I fixed it by:
     - in the Docker configuration, switching to Advanced view and adding "--user 99:100" to the Extra Parameters
     - changing the logs directory (and the content under it) to be owned by nobody.users (99:100) in my setup - I assume these were created as root when I installed the app and it tried to run the first time as root.
     I've not gone any further yet other than to confirm it now starts and I can get to the startup (config) screen; a rough sketch of both steps is below. @FoxxMD - thank you for putting this together, looks cool to play with. Allowing users to specify PUID and PGID variables to set the user/group they want this to run as seems to be a familiar pattern (at least on the LSIO containers) that would help with this, I think. Edit: a quick test using SQLite was able to run up with the test data OK - FWIW.
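     A minimal sketch of those two steps, assuming an appdata path of /mnt/user/appdata/<app> (the real path and container name depend on your install):
       # fix ownership of the logs created during the first (root) run
       chown -R 99:100 /mnt/user/appdata/<app>/logs
       # then, in the container's Docker settings (Advanced view), add the following
       # to "Extra Parameters" so the app runs as nobody:users on the next start:
       #   --user 99:100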
  3. Awesome @bonienl, worked a treat, thank you! And a general thank you for the great work you, the community, and Limetech do - really awesome product!
  4. Hi all, please sing out if this is the wrong thread and/or more info is needed to troubleshoot. I am following along with @ljm42 's writeup (thank you @ljm42). I've come across what I think is an error in the page validation logic when attempting to configure a tunnel via UNRAID -> Settings -> VPN Manager. I have my own domain where the top level is .management. When attempting to enter the domain name I get an error (it looks like a javascript validation error - including it here in case someone else is searching for the same error message). The error message (Chrome) is: "Please match the format requested." "IP Adress or FQDN" The error message (Safari) is: "Match the requested format." From what I can tell, the page is expecting that the top-level domain (TLD) will be 8 characters or less. Some (not very scientific) examples / tests:
     fred.management - ERROR
     management.fred - saves fine
     fred.fredfred - saves fine
     fred.fredfredf - ERROR (note one additional character in the TLD)
     A quick way to sanity-check that theory is sketched below. I've tried clearing my browser cache in case I had some js validation file/library cached. Not really sure who to address this to as I don't know who the author of the WireGuard VPN Manager page is.
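     The pattern below is only my guess at roughly what the page enforces (a TLD capped at 8 characters), not the actual validation it uses, but it reproduces the results above:
       pattern='^([A-Za-z0-9-]+\.)+[A-Za-z]{2,8}$'
       for host in fred.management management.fred fred.fredfred fred.fredfredf; do
         echo "$host" | grep -Eq "$pattern" && echo "$host: saves fine" || echo "$host: ERROR"
       done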
  5. I haven't been able to find any more info. Things I have tried: changed logging to debug, and then trace - I have seen the backend disconnection error messages, but nothing is captured in either of these files. I have Lidarr set up behind an nginx reverse proxy, and get the same error regardless of whether I go direct via a local address or via the external address. Will keep digging.
  6. +1. I have tried adding a TZ variable, and separately a path to /etc/localtime, but am still getting the same error. (https://hub.docker.com/r/linuxserver/ombi/ suggests exceptions will be thrown without one set.) Roughly what I mean by both approaches is sketched below.
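     For reference, a rough sketch of the two approaches (image name and port are per the linuxserver/ombi docs; PUID/PGID, TZ and paths are from my setup - adjust as needed):
       # approach 1: pass a TZ environment variable
       docker run -d --name=ombi \
         -e PUID=99 -e PGID=100 -e TZ=Pacific/Auckland \
         -p 3579:3579 \
         -v /mnt/user/appdata/ombi:/config \
         linuxserver/ombi
       # approach 2: map the host's localtime into the container instead of (or as well as) TZ
       #   -v /etc/localtime:/etc/localtime:ro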
  7. FWIW I have been running the volkion container for months now, and have just switched to the linuxserver.io container (because they rock). I am seeing the same error as reported above while configuring the application (and wasn't seeing it on the volkion container). I assume that the version of the app within each container is not necessarily the same. Off to get some dinner, but I will dig a bit more and see if I can find some info that is a bit more helpful.
  8. So I've tried any number of things and am clearly missing something. Details of my latest specific attempt are at the bottom of this post - I would appreciate any insight. I've tried any number of combinations of things:
     - I have confirmed that port 80 is not blocked, by temporarily forwarding external port 80 in my router to my Krusader container and being able to access it successfully via my dynamic domain name.
     - I have tried combinations of IPv4 and IPv6, host networking, and bridge networking. I am currently using eth0 networking so that my LE container has a dedicated IPv4 address on my local 192.168.1.0/24 network.
     - In bridged mode, I confirmed there were no port conflicts by using 88 and 440 (respectively). I also used 80 and 443 on the host, after having moved the Unraid UI to 81 and 444 respectively - a config I've used successfully on other hosts.
     - I've added --cap-add=NET_ADMIN as per the linuxserver page for the LE container.
     Host netstat output (80 and 443 in this instance are the Unraid UI's - I don't think this is relevant given that the container has its own dedicated local IP, 192.168.1.1, vs. 192.168.1.10 for the host, cerberus):
       root@cerberus:~# netstat -tulpn | grep -i listen
       tcp    0  0 0.0.0.0:37      0.0.0.0:*  LISTEN  1597/inetd
       tcp    0  0 0.0.0.0:139     0.0.0.0:*  LISTEN  1643/smbd
       tcp    0  0 0.0.0.0:111     0.0.0.0:*  LISTEN  1583/rpcbind
       tcp    0  0 0.0.0.0:80      0.0.0.0:*  LISTEN  8610/nginx: master
       tcp    0  0 0.0.0.0:21      0.0.0.0:*  LISTEN  1597/inetd
       tcp    0  0 0.0.0.0:22      0.0.0.0:*  LISTEN  1605/sshd
       tcp    0  0 0.0.0.0:23      0.0.0.0:*  LISTEN  1597/inetd
       tcp    0  0 0.0.0.0:443     0.0.0.0:*  LISTEN  8610/nginx: master
       tcp    0  0 0.0.0.0:445     0.0.0.0:*  LISTEN  1643/smbd
       tcp    0  0 0.0.0.0:59331   0.0.0.0:*  LISTEN  1587/rpc.statd
       tcp6   0  0 :::58185        :::*       LISTEN  1587/rpc.statd
       tcp6   0  0 :::139          :::*       LISTEN  1643/smbd
       tcp6   0  0 :::111          :::*       LISTEN  1583/rpcbind
       tcp6   0  0 :::8080         :::*       LISTEN  29105/docker-proxy
       tcp6   0  0 :::80           :::*       LISTEN  8610/nginx: master
       tcp6   0  0 :::22           :::*       LISTEN  1605/sshd
       tcp6   0  0 :::443          :::*       LISTEN  8610/nginx: master
       tcp6   0  0 :::445          :::*       LISTEN  1643/smbd
     I've also attached my letsencrypt log file from the LE container - note it still talks about the bind to :80 using IPv4 failing. letsencrypt.log
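     A couple of extra checks along those lines - a sketch only (192.168.1.1 is the container's dedicated address and 192.168.1.10 the host, per my setup above; adjust IPs for your own):
       # from another machine on the LAN, hit the LE container's dedicated IP directly,
       # bypassing the router and the host's own nginx on :80/:443
       curl -v  http://192.168.1.1/
       curl -vk https://192.168.1.1/
       # and double-check the router forwards external 80/443 to 192.168.1.1, not to the host (192.168.1.10)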
  9. @Nem, it was probably equal parts ignorance and equal parts stubbornness. I was playing with / had set up my reverse proxy virtual directories first, before deciding to set up a VPN. Even though, once I used a VPN to connect home, I didn't really need anything exposed, I was pretty pleased with myself for having set up the virtual dirs, so I didn't want to take them down..... Silly, I know.... Same thing with the web GUI - I don't use the webgui of OVPN, I always use a VPN client. By the time I realised I could use the port-share feature of OVPN I was too lazy to undo my work (even though it probably ended up being more work setting up the stream approach). So I set them both up to be available on 443 because I didn't want to open too many ports on my firewall, and because I wanted to be able to get to either from behind e.g. a corporate firewall, which I am sure will have outbound 443 open (but probably couldn't rely on much else being open).
  10. I only use duckdns because I only have a dynamic IP address through my ISP (both domain names point to the same IP address). You can use your own dynamic DNS provider or hostname(s), assuming you can set up the relevant DNS entries. A minimal example of keeping a duckdns name updated is below.
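     A minimal sketch of keeping a duckdns hostname pointed at a dynamic IP, using duckdns' documented update URL ("fred" and the token here are placeholders):
       curl -s "https://www.duckdns.org/update?domains=fred&token=YOUR_TOKEN&ip="
       # run from cron, e.g. every 5 minutes:
       # */5 * * * * curl -s "https://www.duckdns.org/update?domains=fred&token=YOUR_TOKEN&ip=" >/dev/null 2>&1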
  11. My bad - I left out a key piece! I am using hostnames to get to either OpenVPN (via a VPN client) or my nginx reverse proxies. So, using the example above, I have two duckdns domains set up (fred and barney), and if I want to get to Sonarr, for example, I use: https://fred.duckdns.org/sonarr. When I want to connect to my VPN, I point my VPN client at barney.duckdns.org. I've done it this way on purpose so that both the reverse proxy and OpenVPN are listening on 443 externally, so that if I'm behind a firewall I can still use my VPN. That is a slightly different use case than the one you described.
  12. Hi @aptalca, I can't get to that installation until the end of the week - from memory there was a docker-proxy process listening on the host on the ports specified in the container config (e.g. :80), as in the screenshot above. The container is in bridge mode at the moment, and was mapping port 80 (host) to 80 (container), so I was thinking that a docker-proxy process listening on :80 on the host "didn't look wrong". A quick check along those lines is sketched below.
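     When I'm back at that box, this is the sort of check I had in mind (container name "letsencrypt" is an assumption - use whatever yours is called):
       # the docker-proxy listeners on the host should line up with the container's published ports
       netstat -tulpn | grep docker-proxy
       docker port letsencrypt    # e.g. 80/tcp -> 0.0.0.0:80 and 443/tcp -> 0.0.0.0:443 when bridged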
  13. Hi Nem, I've been able to do it by using the stream directive in nginx, which uses SNI to direct the ssl stream to the right service. Add the following to your LE's nginx.conf:
     stream {
         map $ssl_preread_server_name $name {
             fred.duckdns.org fred;
             default barney;
         }
         upstream barney {
             # openvpn container
             server 192.168.2.37:9443;
         }
         upstream fred {
             # upstream nginx virtual hosts such as sonarr, radarr, nzbget, etc.
             server 192.168.2.37:4430;
         }
         server {
             listen 443 so_keepalive=on;
             proxy_pass $name;
             ssl_preread on;
         }
     }
     and then something like this in your LE's site-confs/default:
     # main server block
     server {
         listen 4430 ssl default_server;
         root /config/www;
         index index.html index.htm index.php;
         server_name fred.duckdns.org;
         ssl_certificate /config/keys/letsencrypt/fullchain.pem;
         ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
         ssl_dhparam /config/nginx/dhparams.pem;
         ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
         ssl_prefer_server_ciphers on;
         client_max_body_size 0;
         location / {
             try_files $uri $uri/ /index.html /index.php?$args =404;
         }
         location /lidarr {
             include /config/nginx/proxy.conf;
             proxy_pass http://192.168.2.37:8686/lidarr;
         }
         location /nzbget/ {
             include /config/nginx/proxy.conf;
             proxy_pass http://192.168.2.37:6789;
         }
         location /nzbhydra/ {
             include /config/nginx/proxy.conf;
             proxy_pass http://192.168.2.37:5075;
         }
         location /radarr {
             include /config/nginx/proxy.conf;
             proxy_pass http://192.168.2.37:7878/radarr;
         }
         location /sonarr {
             include /config/nginx/proxy.conf;
             proxy_pass http://192.168.2.37:8989/sonarr;
         }
     }
     The first (stream) block basically uses the requested host name to direct the request either to openvpn, or on to nginx (same instance) for processing by your virtual host (not sure if virtual host is the right name here). Ports / IP addresses are specific to my installation - adjust for your own setup.
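     If it helps, one way to sanity-check the SNI routing once the stream block is in place (hostnames are the examples from the config above; "yourpublicip" is a placeholder):
       openssl s_client -connect yourpublicip:443 -servername fred.duckdns.org </dev/null
       # should present the LE certificate served by the nginx virtual host on 4430;
       # with -servername barney.duckdns.org the stream is handed to the openvpn container
       # instead (the handshake there may not complete, since openvpn speaks its own
       # protocol, but the traffic no longer hits the fred block).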
  14. @fmp4m when you talk about upnp forcing a separate config, how did you check / determine that? I've been using this LE container fine for months, until the tls method was disabled. I've been pulling the little hair I have left out trying to work out why HTTP validation is failing for me. I've tracked it down to this: within the LE docker container, when I look at the debug log, I see it throwing an error that it can't bind to the port that I have said to use for HTTP. My next step was to try and track down why the bind (for the LE webserver that is spun up for validation when --standalone is being used) is failing - I wonder if you are on to something. (I've attached a screenshot of the error - I've had to fly out of town this morning so can't get to more log detail at this time, sorry.) Based on having had this working previously (and "admin'ing" an Unraid setup at a friend's house, where it is working fine - I *haven't* upgraded their LE container just yet.....), I am very confident that I have my configs set up correctly, so I must be doing something wrong/different. There is absolutely nothing listening on *any* port within the container itself (as you could also see from the screenshot below - well, you could if I hadn't snipped it in my rush to get out the door - but trust me, there was NOTHING returned from the command below).
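     For reference, the kind of commands I ran / will run when I'm back (container name "letsencrypt" and the log path are from my setup - adjust for yours):
       # nothing listening on any port inside the container
       docker exec letsencrypt netstat -tlnp    # or ss -tlnp if netstat isn't in the image
       # watch the standalone validator's bind attempt during the next renewal try
       docker exec letsencrypt tail -f /config/log/letsencrypt/letsencrypt.log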
  15. Hi jsbroks, read back over the last few pages of this thread - aptalca, chbmb, and others have provided a fair bit of detail on what steps to take.