Everything posted by Kaizac

  1. Does it still work when you put the CDN back on and the pings return their IP? You probably don't want to hear this, but the way you've configured your subdomains now, opening them all to the internet, is really asking for trouble. The advised approach is to use a VPN (which you can easily set up since you are on pfSense) and access dockers like SABnzbd and Radarr through that. Only dockers like Nextcloud should be opened to the internet. Please make sure you have an idea what you are doing, because right now it seems to me like you are just following some guides and not really understanding what is going on. I also wonder whether you'll run into problems with your Nextcloud, since you put MariaDB on bridge and your Nextcloud on proxynet. I'd expect them to have problems connecting, but maybe they work fine?
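If they do have trouble connecting, a quick test is to attach both containers to the same user-defined network. A minimal sketch, assuming your containers are named mariadb and nextcloud and the network is proxynet:

    # attach MariaDB to the same user-defined network Nextcloud is on
    docker network connect proxynet mariadb
    # verify both containers now share the network
    docker network inspect proxynet --format '{{range .Containers}}{{.Name}} {{end}}'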
  2. No, that's not correct. I have LE running with Cloudflare through their CDN network. Your configuration is not correct. First you have your docker on 180/443, yet in pfSense you open up 80 and 443?? That should be 80 forwarding to 180 and 443 to 443. But then in your Nextcloud config you hard-redirect to port 444. So if I were you I would walk through your config from the beginning; it seems like you skipped some steps. And for your LE validation you can use the cloudflare.ini file, if you aren't doing that already.
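For reference, DNS validation through Cloudflare only needs two lines in the ini file the LE docker reads (path and key names as used by certbot's dns-cloudflare plugin; the values here are placeholders):

    # /config/dns-conf/cloudflare.ini
    dns_cloudflare_email = you@example.com
    dns_cloudflare_api_key = YOUR_GLOBAL_API_KEY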
  3. You don't want the rclone cache backend since you use VFS. The rclone cache is just a temporary storage folder from which files get uploaded to your Gdrive; VFS is superior. What you can do, though, is use an SSD as the transcode location for your Plex. But if you are mostly direct streaming this won't help, so first check what kind of streams you are serving. And like I said, you're probably better off reducing the buffer settings in your mount command. You can look up the separate options and see whether lowering them helps your RAM usage.
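The options I mean are along these lines. A sketch, assuming a remote named gdrive_media_vfs and the mount point from the tutorial; lower values mean less RAM per stream:

    rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs \
      --allow-other \
      --buffer-size 64M \
      --vfs-read-chunk-size 32M \
      --vfs-read-chunk-size-limit 2G \
      --dir-cache-time 72h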
  4. The error you get is probably because you maxed out your 750GB daily upload to Gdrive. You are writing directly to the Gdrive, so your local drives are not being hit or bottlenecking you. With your setup you are not using the rclone cache. When you stream, however, it puts your buffer in your RAM. Also, depending on your Plex setup, it might be transcoding to RAM as well. So if you have enough RAM there is no problem. If you are running short you can play with the mount settings to put less in the buffer.
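To see whether Plex is also transcoding to RAM, check the transcode directory mapping of your Plex docker. A common setup maps it to the RAM-backed /tmp, shown here as an illustration for the official plexinc/pms-docker image; your container name and parameters may differ:

    # map the Plex transcode directory to RAM-backed /tmp on the host
    docker run -d --name plex \
      -v /tmp:/transcode \
      ...   # plus your usual Plex parameters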
  5. Did you have a mount running? If so, your rclone is busy. The best way to make sure is just a reboot of your server with no mounts on. So if you have scripts for mounts, disable those.
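If you'd rather not reboot, you can also try releasing the mount by hand first. A sketch, assuming the mount point from the tutorial:

    # lazy-unmount the rclone mount so the binary is no longer busy
    fusermount -uz /mnt/user/mount_rclone/google_vfs
    # confirm nothing is mounted there anymore
    mount | grep google_vfs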
  6. You made me remember: it's not the IP/CDN protection, it's a setting in Cloudflare. Someone else in this topic mentioned it. You have to disable the HTTPS rewrites. With that I got most of my subdomains working. Two aren't, though, or not as desired (Nextcloud and OnlyOffice), both of which require a more specific configuration. What I can do is put my older NGINX config in, but then it has includes which it can't find. I see that the standard configs include files like block-exploits.conf. Are those accessible and editable somewhere? I can't find them, so I wonder if they are hardcoded or hidden somewhere.
  7. Ok, so I changed this and it gives the error below. So then I disabled the Cloudflare CDN protection, and it works. So do you think it's possible to get this working with the Cloudflare CDN/protection on?

    Failed authorization procedure. bitwarden.mydomain (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from https://bitwarden.mydomain/.well-known/acme-challenge/Z6vJRYrurz18JbcCPEeexbC1IhmWJoxfOFIY3jVRatw [2606:4700:30::681b:80cc]: "<!DOCTYPE html>\n<!--[if lt IE 7]> <html class=\"no-js ie6 oldie\" lang=\"en-US\"> <![endif]-->\n<!--[if IE 7]> <html class=\"no-js "
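The usual way around this is to let the LE docker validate over DNS instead of http-01, so the challenge never has to pass through Cloudflare's proxy. A sketch of the relevant container variables, assuming the linuxserver/letsencrypt image:

    # in the container template / docker run:
    -e VALIDATION=dns \
    -e DNSPLUGIN=cloudflare \
    # then put your Cloudflare credentials in /config/dns-conf/cloudflare.ini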
  8. But why? It's incredibly inefficient, straining your server needlessly, and you have to configure 2 dockers. You can have both local and WAN access to the same docker; you just need to configure it well. Your DuckDNS doesn't need to be on the docker network. It can just be in host mode on your Unraid box. For your LE docker I would also give that docker its own IP and make sure you redirect your router to that IP (I assume this is what you also did for your current setup?). Then in your nginx config you use the IP of your Plex docker, and both WAN and LAN access should work.
  9. Try giving Plex its own IP first by putting it on br0 or something. That will put it on your LAN. If you can access it locally then, you know that's the issue.
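On Unraid you'd do this from the docker template (Network Type: br0, fixed IP). The plain docker equivalent looks roughly like this, with the IP as a placeholder:

    # run Plex on the br0 (macvlan) network with its own LAN IP
    docker run -d --name plex --network br0 --ip 192.168.1.50 plexinc/pms-docker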
  10. Separate network? What does that mean? If you mean a VLAN and you haven't enabled access from your LAN to that VLAN, your router/firewall is blocking your local access.
  11. Did you also enable access from outside your network in Plex and open port 32400 in your router to your docker? If so, disable all of that. Your Plex docker should only be accessible through your LE setup. And what mode is Plex on? Its own IP, bridge, host, or something else?
  12. Well, you have an nginx config for your Plex set up already, right? The one you posted an image of? Can't you just copy-paste my code there? Make a backup of your own file before testing, though.
  13. Yep. Try my config if you want. My subdomain is plex.MYDOMAIN, so if that's the same in your case you only need to change IPDOCKER to your Plex docker's IP.

    #Must be set in the global scope, see: https://forum.nginx.org/read.php?2,152294,152294
    #Why this is important, especially with Plex, as it makes a lot of requests:
    #http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html / https://www.peterbe.com/plog/ssl_session_cache-ab
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    #Upstream to Plex
    upstream plex_backend {
        server IPDOCKER:32400;
        keepalive 32;
    }

    server {
        listen 80;
        #Enabling http2 can cause some issues with some devices, see #29 - disable it if you experience issues.
        #http2 can provide a substantial improvement for streaming: https://blog.cloudflare.com/introducing-http2/
        listen 443 ssl http2;
        server_name plex.*;

        #Some players don't reopen a socket and playback stops totally instead of resuming after an extended pause (e.g. Chrome).
        send_timeout 100m;

        #Faster resolving, improves stapling time. Timeout and nameservers may need to be adjusted for your location; Cloudflare's are used here.
        resolver 1.1.1.1 1.0.0.1 valid=300s;
        resolver_timeout 10s;

        #Use letsencrypt.org to get a free and trusted ssl certificate.
        ssl_certificate /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

        ssl_protocols TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        #Intentionally not hardened, for player support, and because encrypting video streams has a lot of overhead with something like AES-256-GCM-SHA384.
        ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

        #Why this is important: https://blog.cloudflare.com/ocsp-stapling-how-cloudflare-just-made-ssl-30/
        ssl_stapling on;
        ssl_stapling_verify on;
        #For letsencrypt.org you can get your chain like this: https://esham.io/2016/01/ocsp-stapling
        ssl_trusted_certificate /config/keys/letsencrypt/chain.pem;

        #Reuse ssl sessions, avoids unnecessary handshakes.
        #Turning this on will increase performance, but at the cost of security. Read below before making a choice.
        #https://github.com/mozilla/server-side-tls/issues/135
        #https://wiki.mozilla.org/Security/Server_Side_TLS#TLS_tickets_.28RFC_5077.29
        #ssl_session_tickets on;
        ssl_session_tickets off;

        #Use: openssl dhparam -out dhparam.pem 2048 - 4096 is better, but for overhead reasons 2048 is enough for Plex.
        ssl_dhparam /config/nginx/dhparams.pem;
        ssl_ecdh_curve secp384r1;

        #Will ensure https is always used by supported browsers, which prevents any server-side http > https redirects, as the browser will internally correct any request to https.
        #Recommended to submit your domain to https://hstspreload.org as well.
        #!WARNING! Only enable this if you intend to only serve Plex over https; until this rule expires in your browser it WON'T BE POSSIBLE to access Plex via http. Remove 'includeSubDomains;' if you only want it to affect your Plex (sub-)domain.
        #This is disabled by default as it could cause issues with some playback devices. It's advisable to test it with a small max-age and only enable it if you don't encounter issues. (Haven't encountered any yet.)
        #add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

        #Plex has A LOT of javascript, xml and html. This helps a lot, but if it causes playback issues with devices turn it off. (Haven't encountered any yet.)
        gzip on;
        gzip_vary on;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript image/svg+xml;
        gzip_disable "MSIE [1-6]\.";

        #Nginx's default client_max_body_size is 1MB, which breaks the Camera Upload feature from the phones.
        #Increasing the limit fixes the issue. If 4K videos are expected to be uploaded, the size might need to be increased even more.
        client_max_body_size 0;

        #Forward real ip and host to Plex.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        #When using ngx_http_realip_module change $proxy_add_x_forwarded_for to '$http_x_forwarded_for,$realip_remote_addr'.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        #Websockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        #Disables compression between Plex and Nginx, required if using sub_filter below.
        #May also improve loading time by a very marginal amount, as nginx will compress anyway.
        #proxy_set_header Accept-Encoding "";

        #Buffering off: send to the client as soon as the data is received from Plex.
        proxy_redirect off;
        proxy_buffering off;

        # add_header Content-Security-Policy "default-src https: 'unsafe-eval' 'unsafe-inline'; object-src 'none'";
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-Content-Type-Options nosniff;
        add_header Referrer-Policy "same-origin";
        add_header Cache-Control "max-age=2592000";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;

        location / {
            #Example of using sub_filter to alter what Plex displays; this disables Plex News.
            sub_filter ',news,' ',';
            sub_filter_once on;
            sub_filter_types text/xml;
            proxy_pass http://plex_backend;
        }

        #PlexPy forward example, works the same for other services.
        #location /plexpy {
        #    proxy_pass http://127.0.0.1:8181;
        #}
    }
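If you drop this in, you can check the syntax before reloading. Assuming your LE container is named letsencrypt:

    docker exec letsencrypt nginx -t         # test the config
    docker exec letsencrypt nginx -s reload  # reload without restarting the container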
  14. I think it helps if you post your nginx config for Plex. Might be that you disabled local resolving there.
  15. I have the NginxProxyManager docker on its own IP in the same VLAN as my other dockers. All the other dockers also have their own IP in this VLAN. So I put NginxProxyManager on ports 80 and 443, and I opened and forwarded these ports on my router to the IP of the NginxProxyManager. Then, when I add my proxy hosts and request the certificates, I always get the error "Internal Error". When I look in my log it says the following:

    Failed authorization procedure. bitwarden.mydomain.com (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://bitwarden.mydomain.com/.well-known/acme-challenge/As3xDn2mZgCJzRpsFyGtlXKog3UZBRzrsHVaActeN6s: Connection refused
  16. I want to make sure I understand correctly before I sink any more time into this... If I give all my dockers their own IP (from ranges of the VLANs I configured) and I also give NginxProxyManager its own IP (so it is able to see the other dockers through the LAN), it still doesn't work? If so, I don't understand why NginxProxyManager doesn't work while the LE docker does.
  17. Did someone get OnlyOffice working with Nextcloud? I've searched all over the internet, but can't find any solution. The only thing I found was the line I had to add to Nextcloud's config.php:

    'onlyoffice' => array (
        'verify_peer_off' => TRUE,
    ),

That makes OnlyOffice connect, but it fails when opening a file, giving me an unknown error.
  18. Did someone get this working with Nextcloud? I've searched all over the internet, but can't find any solution. The only thing I found was the 'verify_peer_off' => TRUE line under 'onlyoffice' in config.php (see the previous post), which makes OnlyOffice connect but fails when opening a file, giving me an unknown error.
  19. When I give the OnlyOffice Community Edition its own IP I can't access it anymore. What can I do to fix this? I don't want to host it on the same IP as my Unraid server. EDIT: never mind, Unraid kept going to the 8081 port instead of the 80/443 port. Got it working now.
  20. No, it will not. In your union you define which folder is read-only (RO) and which one is read-write (RW). So if you followed the tutorial you will have your local folder as your RW folder and your remote as RO. When you then move files to the union folder, they get written to your local folder. So you can point all your dockers like Sab to your union folder, since through the union it knows the files have to go to the rclone_upload folder. In this setup files only get uploaded by running a script. If you want to write directly to your mount, you will have to do that through the mount_rclone folder, through which you are directly accessing your cloud folders. Makes sense? There's a sketch of the union mount below.
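For reference, this is roughly what the union looks like in the tutorial's mount script (folder names assumed from that guide), with the RW local folder on the left and the RO cloud mount on the right:

    # local RW folder first, cloud mount RO second; writes land in rclone_upload
    unionfs -o cow,allow_other \
      /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
      /mnt/user/mount_unionfs/google_vfs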
  21. I had/have the same problem, which I'm still trying to work out. But the .unionfs folder is sort of like a recycling bin. The errors you get are caused by not having permission to the folder. The only way to get this fixed is running the safe permissions tool on the share. Run it on the right share though, where the problem is; in your case this is probably the rclone_upload/google_vfs folder. The union folder shows both your cloud files and your local files (which are in your upload folder), so nothing is moved to the union itself, since the union does not contain any files of its own. Upload of your local files is done through the upload script. So if that one runs, you know it's uploading. And if you want to see how many files are still stored locally you can just check the rclone_upload folder.
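If you'd rather fix it from the command line than through the Unraid tool, the equivalent is roughly this (share path assumed; nobody:users is Unraid's default owner):

    # give the Unraid default user ownership and sane permissions again
    chown -R nobody:users /mnt/user/rclone_upload/google_vfs
    chmod -R u+rwX,g+rwX /mnt/user/rclone_upload/google_vfs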
  22. Like nuhll said, try a reboot after you unmount again, and then run the mount commands through "run in background". If that doesn't work, PM me your contact info, like Discord or something, and I'll get in touch.
  23. Yeah, the mount didn't work, otherwise you would see 1 PiB in Krusader. During rclone config, did you configure the gdrive remotes as a team drive and select the right one for each remote?
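An easy way to verify the remotes themselves before mounting (remote name assumed from the tutorial):

    # list the top-level folders of the remote; errors here mean the remote config is wrong
    rclone lsd gdrive_media_vfs:
    # show how the remote is configured, including the team_drive setting
    rclone config show gdrive_media_vfs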