ablaine

Members
  • Posts: 4
  1. Seems like the latest update (qBittorrent 4.2.2) broke WebUI authentication? I'm unable to log in with either my normal WebUI user/pass or the default (admin/adminadmin). I'm getting:

     WebAPI login failure. Reason: invalid credentials, attempt count: 1

     and after 5 attempts:

     WebAPI login failure. Reason: IP has been banned

     I've looked in the config files and tried removing/recreating them -- nothing is working. It was fine as of 12 hours ago.
  2. First of all: thank you for this! I love how easy it was to set up and get going. Second: is there any way to pass the VPN functionality through to another container? I have a standalone Firefox container, but it's using my normal (non-VPN) network/IP. I need that Firefox container to use the same IP the qBittorrent container is using, with both behind the VPN, but I have no idea how to get that to work. I tried adding network_mode: "service:qbittorrentvpn" to the compose entry for the Firefox container (rough sketch below), but it gave me an error about ports not being compatible while using network_mode. Thanks in advance for the help!
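     Here's roughly what I was attempting, in case it helps clarify. The image names and the 5800 Firefox port are just placeholders from my own setup, and I'm assuming (based on the error) that any published ports have to move onto the VPN container, since a container that joins another container's network namespace can't publish its own ports:

     qbittorrentvpn:
       image: some/qbittorrentvpn-image        # placeholder image name
       ports:
         - "8080:8080"                         # qBittorrent WebUI
         - "5800:5800"                         # Firefox UI, published here instead of on the firefox service
       # ...vpn config, volumes, etc...

     firefox:
       image: some/firefox-image               # placeholder image name
       network_mode: "service:qbittorrentvpn"  # share the VPN container's network stack
       depends_on:
         - qbittorrentvpn
       # no "ports:" section here -- publishing ports isn't allowed together with network_mode: service:<name>

     If that's the right idea, the Firefox UI would then be reachable on the host through the VPN container's published port, and all of its traffic would go out over the VPN.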
  3. Thanks for the response. Re: the proxy conf file -- I made those changes after the default wasn't working, based on some example versions of that file I found online; I was worried that the $upstream_sonarr value wasn't resolving properly (rough sketch of the bit I mean below). I've since reverted my changes (deleted my conf and renamed the clean sample version back), but the issue still exists.

     Letsencrypt certification does appear to be working correctly. Here's a log from the le container viewed within Kitematic (with email/domain edited out):

     -------------------------------------
     [linuxserver.io ASCII banner]
     Brought to you by linuxserver.io
     We gratefully accept donations at:
     https://www.linuxserver.io/donations/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 1000
     User gid: 1000
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     using keys found in /config/keys
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 50-config: executing...
     Variables set:
     PUID=1000
     PGID=1000
     TZ=America/Los_Angeles
     URL=domain.net
     SUBDOMAINS=tv,movies,downloads,requests,ombi,transmission,radarr,sonarr,jackett
     EXTRA_DOMAINS=
     ONLY_SUBDOMAINS=false
     DHLEVEL=4096
     VALIDATION=dns
     DNSPLUGIN=cloudflare
     [email protected]
     STAGING=
     Backwards compatibility check. . .
     No compatibility action needed
     4096 bit DH parameters present
     SUBDOMAINS entered, processing
     Sub-domains processed are: -d tv.domain.net -d movies.domain.net -d downloads.domain.net -d requests.domain.net -d ombi.domain.net -d transmission.domain.net -d radarr.domain.net -d sonarr.domain.net -d jackett.domain.net
     E-mail address entered: [email protected]
     dns validation via cloudflare plugin is selected
     Certificate exists; parameters unchanged; attempting renewal
     <------------------------------------------------->
     <------------------------------------------------->
     cronjob running on Sun Jun 17 21:50:48 PDT 2018
     Running certbot renew
     Saving debug log to /var/log/letsencrypt/letsencrypt.log
     -------------------------------------------------------------------------------
     Processing /etc/letsencrypt/renewal/domain.net.conf
     -------------------------------------------------------------------------------
     Cert not yet due for renewal
     Plugins selected: Authenticator dns-cloudflare, Installer None
     -------------------------------------------------------------------------------
     The following certs are not due for renewal yet:
       /etc/letsencrypt/live/domain.net/fullchain.pem expires on 2018-09-15 (skipped)
     No renewals were attempted. No hooks were run.
     -------------------------------------------------------------------------------
     [cont-init.d] 50-config: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
     Server ready

     Unless you were referring to another log? I'm starting to think it's not so much an issue with the reverse proxy setup as a firewall/gateway issue, but I'm not sure how to even go about testing things on that end. I've already added port forwards for 80, 443, and 8080 through the Windows firewall settings (and my router), and I don't have any other form of firewall/antivirus on my system.
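     For reference, the part of the sample subdomain conf I was worried about looks roughly like this -- I'm going from memory, so the exact lines in the current image may differ; "sonarr" here is just the container name, which nginx resolves through Docker's embedded DNS at 127.0.0.11:

     location / {
         include /config/nginx/proxy.conf;
         resolver 127.0.0.11 valid=30s;        # Docker's embedded DNS inside the container network
         set $upstream_sonarr sonarr;          # container name used as the upstream host
         proxy_pass http://$upstream_sonarr:8989;
     }

     My assumption is that this only works if the le and sonarr containers are on the same Docker network, so that the name actually resolves.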
  4. Hey all, I'm new here, but I'm at the point where I really need to stop bashing my head against the wall and seek help for this. I'm doing my best to set up an automated media server on my home PC, and I've gotten it to the point where it works pretty much perfectly... internally. I have containers for Transmission-VPN, Sonarr, Radarr, Jackett, Ombi, etc. However, I really want to be able to access some of these containers externally as well (Ombi), or view the status of my downloads in an Android app like nzb360 (which supports Sonarr, Radarr, and Transmission).

     I was really excited when I came across the linuxserver/letsencrypt image (I'm on a Win10 PC and am unable to use alternatives like Traefik because I can't chmod the permissions on the SSL key file -- but that's another topic), and the setup/config for it seemed pretty straightforward.

     In terms of the domain itself, I purchased a domain name from Google Domains and transferred it to Cloudflare DNS. There I set up some A records (www.*, *.domain.net) and CNAME records for the subdomains of each container I want to make available externally. I have also forwarded both ports 80 and 443 on my dd-wrt router.

     I'm using docker-compose to make it a lot easier to test changes and bring the containers up/down as I go. Here is the compose entry for letsencrypt (minus sensitive info [email, domain name, etc.]):

     letsencrypt:
       image: linuxserver/letsencrypt
       container_name: le
       ports:
         - "80:80"
         - "443:443"
       volumes:
         - ${CONFIG}/letsencrypt:/config
       restart: always
       depends_on:
         - transmission-vpn
         - sonarr
         - radarr
         - ombi
         - jackett
       environment:
         - PUID=${PUID}
         - PGID=${PGID}
         - [email protected]
         - URL=domain.net
         - SUBDOMAINS=tv,movies,downloads,requests,ombi,transmission,radarr,sonarr,jackett
         - ONLY_SUBDOMAINS=false
         - VALIDATION=dns
         - DNSPLUGIN=cloudflare
         - DHLEVEL=4096
         - TZ=America/Los_Angeles

     My \letsencrypt\nginx\site-confs\default file looks like this:

     # main server block
     server {
         listen 443 ssl default_server;

         root /config/www;
         index index.html index.htm index.php;

         server_name domain.net;

         # enable subfolder method reverse proxy confs
         include /config/nginx/proxy-confs/*.subfolder.conf;

         # all ssl related config moved to ssl.conf
         include /config/nginx/ssl.conf;

         client_max_body_size 0;

         location / {
             try_files $uri $uri/ /index.html /index.php?$args =404;
         }

         location ~ \.php$ {
             fastcgi_split_path_info ^(.+\.php)(/.+)$;
             # With php7-cgi alone:
             fastcgi_pass 127.0.0.1:9000;
             # With php7-fpm:
             #fastcgi_pass unix:/var/run/php7-fpm.sock;
             fastcgi_index index.php;
             include /etc/nginx/fastcgi_params;
         }

         # sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
         # notice this is within the same server block as the base
         # don't forget to generate the .htpasswd file as described on docker hub
         # location ^~ /cp {
         #     auth_basic "Restricted";
         #     auth_basic_user_file /config/nginx/.htpasswd;
         #     include /config/nginx/proxy.conf;
         #     proxy_pass http://192.168.1.50:5050/cp;
         # }
     }

     # sample reverse proxy config without url base, but as a subdomain "cp", ip and port same as above
     # notice this is a new server block, you need a new server block for each subdomain
     #server {
     #    listen 443 ssl;
     #
     #    root /config/www;
     #    index index.html index.htm index.php;
     #
     #    server_name cp.*;
     #
     #    include /config/nginx/ssl.conf;
     #
     #    client_max_body_size 0;
     #
     #    location / {
     #        auth_basic "Restricted";
     #        auth_basic_user_file /config/nginx/.htpasswd;
     #        include /config/nginx/proxy.conf;
     #        proxy_pass http://192.168.1.50:5050;
     #    }
     #}

     # enable subdomain method reverse proxy confs
     include /config/nginx/proxy-confs/*.subdomain.conf;

     And I've renamed the subdomain files I want to use under \proxy-confs\; they look like this (sonarr example):

     # make sure that your dns has a cname set for sonarr
     # and that your sonarr container is not using a base url
     # to enable password access, uncomment the two auth_basic lines

     server {
         listen 80;
         server_name sonarr.domain.net;
         return 301 https://$host$request_uri;
     }

     server {
         listen 443 ssl;

         server_name sonarr.domain.net;

         access_log /var/log/nginx/sonarr.domain.net.log;

         location / {
             proxy_pass http://127.0.0.1:8989;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto $scheme;
             proxy_redirect off;
             proxy_buffering off;
         }
     }

     To my eye, all of that looks like it *should* be working and allowing me to access Sonarr at sonarr.domain.net -- but instead I get an ERR_CONNECTION_TIMED_OUT page. I can ping sonarr.domain.net and it returns a reply with my valid WAN IP, yet I can't reach it in a browser window, and I have no idea what the cause of the issue is (one guess about the proxy_pass target is sketched below). If anyone can help me figure this out, I would be eternally grateful. I've spent the past week or two staying up late trying to get all of this set up correctly, and I feel like I'm *SO CLOSE*! Thanks in advance! -Adam
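     One thing I'm unsure about is the proxy_pass target in that sonarr conf: inside the le container, 127.0.0.1 is the le container's own loopback, not my host or the Sonarr container, so I'm wondering whether it should point at the Sonarr container by name instead. Something like this, assuming the le and sonarr containers are on the same compose network and "sonarr" is the container name (both assumptions on my part):

     location / {
         # point nginx at the sonarr container over the shared docker network,
         # instead of at 127.0.0.1 (the le container's own loopback)
         proxy_pass http://sonarr:8989;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
     }

     I haven't been able to confirm whether that's actually my problem, though, since the timeout suggests I'm not even reaching nginx from outside in the first place.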