Posts posted by Kaizac

  1. 5 minutes ago, ofthethorn said:

     

    On it. Is this what you mean? https://imgur.com/a/NDRj9Sn

    Yep. Try my config if you want. My subdomain is plex.MYDOMAIN, so if that's the same in your case you only need to change IPDOCKER to your Plex docker's IP.

     

    #Must be set in the global scope see: https://forum.nginx.org/read.php?2,152294,152294
    #Why this is important especially with Plex as it makes a lot of requests http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html / https://www.peterbe.com/plog/ssl_session_cache-ab
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    
    #Upstream to Plex
    upstream plex_backend {
        server IPDOCKER:32400;
        keepalive 32;
    }
    
    server {
    	listen 80;
    	#Enabling http2 can cause some issues with some devices, see #29 - Disable it if you experience issues
    	listen 443 ssl http2; #http2 can provide a substantial improvement for streaming: https://blog.cloudflare.com/introducing-http2/
    	server_name plex.*;
    
    	send_timeout 100m; #Some players don't reopen a socket and playback stops totally instead of resuming after an extended pause (e.g. Chrome)
    
    	#Faster resolving, improves stapling time. Timeout and nameservers may need to be adjusted for your location; Cloudflare's are used here.
    	resolver 1.1.1.1 1.0.0.1 valid=300s;
    	resolver_timeout 10s;
    
    	#Use letsencrypt.org to get a free and trusted ssl certificate
    	ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    	ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    
    	ssl_protocols TLSv1.1 TLSv1.2;
    	ssl_prefer_server_ciphers on;
    	#Intentionally not hardened for security, for the sake of player support; encrypting video streams has a lot of overhead with something like AES-256-GCM-SHA384.
    	ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    	
    	#Why this is important: https://blog.cloudflare.com/ocsp-stapling-how-cloudflare-just-made-ssl-30/
    	ssl_stapling on;
    	ssl_stapling_verify on;
    	#For letsencrypt.org you can get your chain like this: https://esham.io/2016/01/ocsp-stapling
    	ssl_trusted_certificate /config/keys/letsencrypt/chain.pem;
    
    	#Reuse ssl sessions, avoids unnecessary handshakes
    	#Turning this on will increase performance, but at the cost of security. Read below before making a choice.
    	#https://github.com/mozilla/server-side-tls/issues/135
    	#https://wiki.mozilla.org/Security/Server_Side_TLS#TLS_tickets_.28RFC_5077.29
    	#ssl_session_tickets on;
    	ssl_session_tickets off;
    
    	#Use: openssl dhparam -out /config/nginx/dhparams.pem 2048 (4096 is better, but for overhead reasons 2048 is enough for Plex).
    	ssl_dhparam /config/nginx/dhparams.pem;
    	ssl_ecdh_curve secp384r1;
    
    	#Will ensure https is always used by supported browsers which prevents any server-side http > https redirects, as the browser will internally correct any request to https.
    	#Recommended to submit to your domain to https://hstspreload.org as well.
    	#!WARNING! Only enable this if you intend to serve Plex over https only; until this rule expires in your browser it WON'T BE POSSIBLE to access Plex via http. Remove 'includeSubDomains;' if you only want it to affect your Plex (sub-)domain.
    	#This is disabled by default as it could cause issues with some playback devices; it's advisable to test it with a small max-age and only enable it if you don't encounter issues. (Haven't encountered any yet)
    	#add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    
    	#Plex has A LOT of javascript, xml and html. This helps a lot, but if it causes playback issues with devices turn it off. (Haven't encountered any yet)
    	gzip on;
    	gzip_vary on;
    	gzip_min_length 1000;
    	gzip_proxied any;
    	gzip_types text/plain text/css text/xml application/xml text/javascript application/x-javascript image/svg+xml;
    	gzip_disable "MSIE [1-6]\.";
    
    	#Nginx's default client_max_body_size is 1MB, which breaks the Camera Upload feature from phones.
    	#Increasing the limit fixes the issue. If 4K videos are expected to be uploaded, the size might need to be increased even more.
    	client_max_body_size 0;
    
    	#Forward real ip and host to Plex
    	proxy_set_header Host $host;
    	proxy_set_header X-Real-IP $remote_addr;
    	#When using ngx_http_realip_module change $proxy_add_x_forwarded_for to '$http_x_forwarded_for,$realip_remote_addr'
    	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    	proxy_set_header X-Forwarded-Proto $scheme;
    
    	#Websockets
    	proxy_http_version 1.1;
    	proxy_set_header Upgrade $http_upgrade;
    	proxy_set_header Connection "upgrade";
    
    	#Disables compression between Plex and Nginx, required if using sub_filter below.
    	#May also improve loading time by a very marginal amount, as nginx will compress anyway.
    	#proxy_set_header Accept-Encoding "";
    
    	#Buffering off: send to the client as soon as the data is received from Plex.
    	proxy_redirect off;
    	proxy_buffering off;
    	
    #	add_header Content-Security-Policy "default-src https: 'unsafe-eval' 'unsafe-inline'; object-src 'none'";
    	add_header X-Frame-Options "SAMEORIGIN";
    	add_header X-Content-Type-Options nosniff;
    	add_header Referrer-Policy "same-origin";
    	add_header Cache-Control "max-age=2592000";
    	add_header X-XSS-Protection "1; mode=block";
    	add_header X-Robots-Tag none;
    	add_header X-Download-Options noopen;
    	add_header X-Permitted-Cross-Domain-Policies none;
    
    	location / {
    		#Example of using sub_filter to alter what Plex displays, this disables Plex News.
    		sub_filter ',news,' ',';
    		sub_filter_once on;
    		sub_filter_types text/xml;
    		proxy_pass http://plex_backend;
    	}
    
    	#PlexPy forward example, works the same for other services.
    	#location /plexpy {
    	#	proxy_pass http://127.0.0.1:8181;
    	#}
    }
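
    If you adapt this, it's worth a syntax check before reloading. A minimal sketch, assuming the config lives in the linuxserver LE container (adjust the container name to yours):

    docker exec letsencrypt nginx -t         # validate the full nginx config
    docker exec letsencrypt nginx -s reload  # apply it without restarting the container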

     

  2. 6 hours ago, Djoss said:

    If it works with LE, it should work with this one also. How are you configured exactly? Your containers have their own IPs on different VLANs?

    I have the NginxProxyManager docker on its own IP in the same VLAN as my other dockers. All other dockers also have their own IP in this VLAN. So I put NginxProxyManager on ports 80 and 443, and I opened and forwarded these ports on my router to the IP of NginxProxyManager.

     

    Then when I add my proxy hosts and request the certificates I always get the error "Internal Error". When I look in my log it says the following:

    Failed authorization procedure. bitwarden.mydomain.com (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://bitwarden.mydomain.com/.well-known/acme-challenge/As3xDn2mZgCJzRpsFyGtlXKog3UZBRzrsHVaActeN6s: Connection refused
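
    A quick way to narrow this down is to request a bogus challenge file from outside your network; the file name here is hypothetical, it's the response that matters:

    # Run from OUTSIDE your LAN (e.g. a phone on 4G):
    curl -v http://bitwarden.mydomain.com/.well-known/acme-challenge/test
    # "Connection refused" again means port 80 never reaches NginxProxyManager
    # (router/VLAN forwarding issue); any HTTP response (e.g. a 404) means the
    # forwarding works and the problem is inside the container.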

     

  3. I want to make sure I understand correctly before I sink any more time into this... If I give all my dockers their own IPs (from the ranges of the VLANs I configured) and I also give NginxProxyManager its own IP (so it is able to see the other dockers through the LAN), it still doesn't work?

     

    If so, I don't understand why NginxProxyManager doesn't work and the LE docker does.

  4. Did anyone get OnlyOffice working with Nextcloud? I've searched all over the internet, but can't find any solution, only the line I had to add:

     

      'onlyoffice' =>
        array (
          'verify_peer_off' => TRUE,
        ),

     

    This makes OnlyOffice connect, but it fails when opening a file, giving me an unknown error.
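
    For reference, the same flag can also be set with occ instead of editing config.php by hand; a sketch assuming the linuxserver Nextcloud container, which ships an occ wrapper (adjust the container name to yours):

    docker exec -it nextcloud occ config:system:set onlyoffice verify_peer_off --type=boolean --value=true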

  5. Did anyone get this working with Nextcloud? I've searched all over the internet, but can't find any solution, only the line I had to add:

     

      'onlyoffice' =>
        array (
          'verify_peer_off' => TRUE,
        ),

     

    This makes OnlyOffice connect, but it fails when opening a file, giving me an unknown error.

  6. When I give the OnlyOffice Community Edition its own IP I can't access it anymore. What can I do to fix this? I don't want to host it on the same IP as my Unraid server.

     

    EDIT: never mind, Unraid kept going to port 8081 instead of ports 80/443. Got it working now.

  7. 34 minutes ago, mestep said:

     

    I don't mean to sound stupid or anything. If I put stuff in the union folder instead of the upload folder, it will still get uploaded to the cloud, right? Right now I have sabnzbd set to download to the upload folder, can/should I have it set to download to union?

    No, it will not. In your union you define which folder is read-only (RO) and which is read-write (RW). If you followed the tutorial, your local folder is the RW folder and your remote is RO. When you move files to the union folder, they get written to your local folder. So you can point all your dockers, like Sab, to the union folder; anything written through the union lands in the rclone_upload folder.

     

    In this setup, files only get uploaded by running a script. If you want to write directly to your mount, you have to do that through the mount_rclone folder, through which you access your cloud folders directly.
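
    For context, this is roughly what such a union looks like when mounted with unionfs-fuse; a sketch only, with the folder names this thread's tutorial uses:

    # RW branch first (local upload folder), RO branch second (cloud mount):
    unionfs -o cow,allow_other \
        /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
        /mnt/user/mount_unionfs/google_vfs
    # Writes into mount_unionfs land in rclone_upload; reads fall through to
    # the cloud mount when a file isn't present locally.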

     

    Makes sense?

  8. 6 hours ago, shimlapinks said:

     I spent two nights trying to get this to work and I'm finally just about there, thank you for this.

     Just 1 thing: when I delete files, they get put back where I deleted them from.

    I.e.:

    • If I delete a file directly (this is how you are meant to delete media files, right?) from mount_unionfs>google_vfs/tvshows/Fawlty Towers....mkv
    • It goes into the mount_unionfs>google_vfs>.unionfs>tvshows> folder (which I don't have access to unless I reset the permissions in Unraid on that folder). (The .mkv is also now in the rclone_upload folder.)

    • So if I run the unionfs cleanup script, it gives the below output and then puts the file right back where I deleted it from. Any ideas, please?

     

    3.02.2019 23:30:43 INFO: starting unionfs cleanup.
    rm: cannot remove '/mnt/user/mount_rclone/google_vfs/mnt/user/mount_unionfs/google_vfs/.unionfs/TVShows/Fawlty Towers - S01E01 - A Touch of Class - (1975) - h264-576p.mkv': No such file or directory
    rm: cannot remove '/mnt/user/mount_rclone/google_tdrive_vfs/mnt/user/mount_unionfs/google_vfs/.unionfs/TVShows/Fawlty Towers - S01E01 - A Touch of Class - (1975) - h264-576p.mkv': No such file or directory
    Script Finished Wed, 13 Feb 2019 23:30:43 +0000

     

     

    Edit: PS I am using teamdrive and the UnionFS cleanup script from the original post.

    I had/have the same problem, which I'm still trying to work out. The .unionfs folder works sort of like a recycle bin. The errors you get are caused by not having permission on that folder. The only way to fix this is running Safe Permissions on the share. Run it on the share where the problem is, though; in your case that is probably the rclone_upload/google_vfs folder.
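
    If you prefer the terminal over the webgui, the same reset can be done with the newperms script that stock Unraid ships (a sketch; check the path on your version):

    newperms /mnt/user/rclone_upload/google_vfs   # recursively resets owner and permissions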

     

     

    6 hours ago, mestep said:

    So it looks like things move from the upload folder instantly to the union folder. Is this the way it should be?

     

    Also, how do I know that files are getting uploaded and are off my local Unraid box?

    The union folder shows both your cloud files and your local files (which are in your upload folder), so nothing is moved to the union; the union does not contain any files itself. Uploading your local files is done by the upload script, so if that one runs, you know it's uploading. And if you want to see how much is still stored locally, you can just check the rclone_upload folder.
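
    For example, a quick look at what is still waiting locally (paths per the tutorial, adjust to yours):

    du -sh /mnt/user/rclone_upload/google_vfs                 # total size still on the array
    find /mnt/user/rclone_upload/google_vfs -type f | wc -l   # number of files queued for upload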

  9. 1 minute ago, mestep said:

    I think I got it working; I'll have to take a little time to check to make sure.

     

    So the rclone_unionfs folder is where I will point Plex/Sonarr to.

     

    Sonarr download folder and whatever else I want to upload to gdrive will be the rclone_upload folder.

     

    When it finishes syncing, the files will appear in the unionfs folder, but they are dummy files taking no space on my actual Unraid server.

     

    Do I have this correct?

    Correct!

  10. 1 minute ago, tmoran000 said:

    I am sorry that I am so confused over all of this, but the number of names it requires doesn't make sense to me. It has me name it (Gdrive_Stream); what is that, the team drive name? Then it has me make a crypt (Gdrive_StreamEN); what is this, the folder inside the team drive called Gdrive_Stream? Or is it Gdrive_StreamEN:Secure... is Secure the name of the folder inside the team drive titled Gdrive_Stream? It's a team drive named Gdrive_Stream with a folder inside called Secure where I want the files to be stored, but the config has me using 3 different names, so I am getting so confused. I looked through the tutorial at the beginning and it still does not make sense... ugh, I feel like such a bother to keep asking all these questions.

    No worries, it is confusing at first.

     

    So in your webui you created 2 team drives. Call them whatever you want; I don't like to give Google too much of an inkling of what I'm doing, so I just name them Backup and Files.

     

    Within each team drive you create the folder. This name is important, and in your case it would be Secure for both team drives. So I think if you just create 1 folder called Secure in each team drive through the webui, you've got it working. You might need to unmount first before remounting.
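
    If you'd rather not click through the webui, rclone can create the folders too; a sketch assuming your drive remotes are called Gdrive_Backup and Gdrive_Stream:

    rclone mkdir Gdrive_Backup:Secure
    rclone mkdir Gdrive_Stream:Secure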

     

  11. 1 minute ago, tmoran000 said:

    [Gdrive_Backup]
    type = drive
    scope = drive
    token = (removed)
    team_drive = (removed)

    [Gdrive_SecureEN]
    type = crypt
    remote = Gdrive_Backup:Secure
    filename_encryption = standard
    directory_name_encryption = true
    password = (removed)
    password2 = (removed)

    [Gdrive_Stream]
    type = drive
    scope = drive
    token = (removed)
    team_drive = (removed)

    [Gdrive_StreamEN]
    type = crypt
    remote = Gdrive_StreamEN:Secure
    filename_encryption = standard
    directory_name_encryption = true
    password = (removed)
    password2 = (removed)

     

     

    this is what was built from putting the info in through SSH

    You made the crypt for Streaming point at itself: you put Gdrive_StreamEN where it should be Gdrive_Stream.
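
    In other words, the stanza should read (only the remote line changes):

    [Gdrive_StreamEN]
    type = crypt
    remote = Gdrive_Stream:Secure
    filename_encryption = standard
    directory_name_encryption = true
    password = (removed)
    password2 = (removed)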

  12. 1 minute ago, tmoran000 said:

    Once I get it working and see the settings that work, going back over it will make more sense for sure.

     

    I updated the commands you fixed, but I am getting an error in the logs that I have seen before when I tried the different commands:

     

    ####

    Script Starting Wed, 13 Feb 2019 15:44:27 -0500

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt

    Script Finished Wed, 13 Feb 2019 15:44:27 -0500

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt

    2019/02/13 15:44:27 Failed to create file system for "Gdrive_StreamEN:": can't point crypt remote at itself - check the value of the remote setting

    Please post your rclone config here. Make sure you delete any sensitive info like the client ID and tokens.

  13. Try sticking to the scripts in the opening post; they are much better than the mount command you are using.

     

    So to answer your question: you only mount the 2 encrypted remotes.

     

    So you will have 2 mount commands like:

    rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 256M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off Gdrive_SecureEN: /mnt/user/rclone_mount/Gdrive_Backup &

     

    rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 256M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off Gdrive_StreamEN: /mnt/user/rclone_mount/Gdrive_Stream &

     

    These 2 you can just enter in your terminal or run in a user script (choose run in background), and they should work. After you've done this you should be able to put files on your mount.
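
    A quick way to confirm the mounts came up (paths from the commands above):

    ls /mnt/user/rclone_mount/Gdrive_Backup /mnt/user/rclone_mount/Gdrive_Stream
    # An empty listing is fine for a fresh remote; a "Transport endpoint is
    # not connected" error means the mount died - check the rclone log.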

     

    Once that works, I would really advise you to read the start post of DZIMM again to see if it makes more sense.

  14. Ok, that seems fine. Did you save the passwords for your encrypted mounts? If you lose those, you can never access your files again, so be careful with that.

     

    So now you can create the folders you need for mounting the 2 encrypted remotes. If you want to follow the tutorial of this topic, create a share called mount_rclone. Within that share you create 2 empty folders: Gdrive_Stream and Gdrive_Backup, or whatever you like.

     

    Then in your mount command you mount the remote to this folder. If it succeeded, you can copy files directly to this folder and they will show up in your Gdrive web interface.
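
    A minimal end-to-end check, assuming the share and remote names used in this and the previous post:

    mkdir -p /mnt/user/mount_rclone/Gdrive_Backup /mnt/user/mount_rclone/Gdrive_Stream
    # run the mount commands from the previous post, then:
    cp /tmp/test.txt /mnt/user/mount_rclone/Gdrive_Backup/
    rclone ls Gdrive_SecureEN:   # test.txt should be listed (it's stored encrypted on Gdrive)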

     

    When that works you can continue with making a union and setting up all the other scripts, which is the easiest part.

  15. 5 hours ago, tmoran000 said:

    So I am clear: I set up 2 team drives? 1 for encrypted backup, and 1 for new encrypted files to stream to Plex... Also, when I make the remotes, would I have something like Gdrive and Secure_backup (encrypted) to back up what I have now on team drive 1, and Gdrive2 and Secured_Files (encrypted) for team drive 2 to host new files for Plex?

    Yes, that's what I did. So you make a remote Gdrive, and then when setting up your encrypted remote in rclone you point it at Gdrive: or Gdrive:Backup if you want to use a folder within the team drive.

     

    Same for your files to stream: Gdrive2 (or Gdrive_Streaming) as the remote, and the encrypted remote would be Gdrive2: or Gdrive2:Files or whatever names you like.

     

    The important part for Gdrive is that you can only upload 750GB per user per day. So if you want to do some heavier uploading, you can create 2 APIs and use 2 different email accounts in your rclone config when creating the tokens.
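
    One way to stay under that cap without juggling accounts is throttling the upload; a sketch with assumed path and remote names:

    # 8 MB/s for a full day is roughly 700GB, safely under the 750GB/day cap:
    rclone move /mnt/user/rclone_upload/google_vfs Gdrive_SecureEN: \
        --bwlimit 8M --log-level INFO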
