
DZMM

Members
  • Posts: 2,801
  • Joined
  • Last visited
  • Days Won: 9

Everything posted by DZMM

  1. Nope. I tried to do it from scratch using the instructions, but I couldn't get it to work and ran out of time, so I had to disable auth. I'll have another go when I have time. I found the instructions confusing, but as I'm not that technical that could be me - I need to dig up the blog I followed before, as it was easy to follow and covered just the unraid docker scenario.
  2. Hmm, they used to work - I must have deleted them by accident. I'm going to do some work on the script soon, so I'll add this to the list of things to fix.
  3. Yes please. This thread was set up so we could improve the setup together. Fingers crossed we can implement a one-provider solution using rclone union.
  4. Thanks deleted. I'm going to go and read up on how the auth thing works again and implement it from scratch. I did this ages ago following an online guide and I can't remember how I did it, so I guess it's time for a refresher.
  5. Swag logs:

     Brought to you by linuxserver.io
     -------------------------------------
     To support the app dev(s) visit:
     Certbot: https://supporters.eff.org/donate/support-work-on-certbot
     To support LSIO projects visit:
     https://www.linuxserver.io/donate/
     -------------------------------------
     GID/UID
     -------------------------------------
     User uid: 99
     User gid: 100
     -------------------------------------
     [cont-init.d] 10-adduser: exited 0.
     [cont-init.d] 20-config: executing...
     [cont-init.d] 20-config: exited 0.
     [cont-init.d] 30-keygen: executing...
     using keys found in /config/keys
     [cont-init.d] 30-keygen: exited 0.
     [cont-init.d] 50-config: executing...
     Variables set:
     PUID=99
     PGID=100
     TZ=Europe/London
     URL=mydomain.com
     SUBDOMAINS=onlyoffice,ha,nextcloud,www,qbittorrent,unraid,help,synclounge
     EXTRA_DOMAINS=
     ONLY_SUBDOMAINS=false
     VALIDATION=http
     DNSPLUGIN=
     [email protected]
     STAGING=false
     SUBDOMAINS entered, processing
     SUBDOMAINS entered, processing
     Sub-domains processed are: -d onlyoffice.mydomain.com -d ha.mydomain.com -d nextcloud.mydomain.com -d www.mydomain.com -d qbittorrent.mydomain.com -d unraid.mydomain.com -d help.mydomain.com -d synclounge.mydomain.com
     E-mail address entered: [email protected]
     http validation is selected
     Certificate exists; parameters unchanged; starting nginx
     [cont-init.d] 50-config: exited 0.
     [cont-init.d] 60-renew: executing...
     The cert does not expire within the next day. Letting the cron script handle the renewal attempts overnight (2:08am).
     [cont-init.d] 60-renew: exited 0.
     [cont-init.d] 99-custom-files: executing...
     [custom-init] no custom files found exiting...
     [cont-init.d] 99-custom-files: exited 0.
     [cont-init.d] done.
     [services.d] starting services
     [services.d] done.
     Server ready
     nginx: [alert] detected a LuaJIT version which is not OpenResty's; many optimizations will be disabled and performance will be compromised (see https://github.com/openresty/luajit2 for OpenResty's LuaJIT or, even better, consider using the OpenResty releases from https://openresty.org/en/download.html)

     The nginx logs were very long, so I hope this snippet is useful:
  6. Organizr is working; I just can't access the sites that use the Organizr auth usergroups.
  7. Thanks for replying so quickly. I tried editing line 35 based on the link, but it's still not working for me:

     # make sure that your dns has a cname set for organizr
     server {
         listen 443 ssl;
         listen [::]:443 ssl;

         server_name www.*;

         include /config/nginx/ssl.conf;

         client_max_body_size 0;

         # enable for ldap auth, fill in ldap details in ldap.conf
         #include /config/nginx/ldap.conf;

         location / {
             # enable the next two lines for http auth
             #auth_basic "Restricted";
             #auth_basic_user_file /config/nginx/.htpasswd;

             # enable the next two lines for ldap auth
             #auth_request /auth;
             #error_page 401 =200 /login;

             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             proxy_pass http://192.168.50.17:80;
         }

         location ~ /auth-([0-9]+) {
             # This is used for Organizr V2
             internal;
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             # proxy_pass http://192.168.50.17:80/api/?v1/auth&group=$1;
             proxy_pass http://192.168.50.17:80/api/v2/auth?group=$1;
             proxy_set_header Content-Length "";
         }
     }
  8. Thanks, but I'm not sure what to do to get access to my sites. This is what I currently have for radarr and the others are similar. Can you tell me what to change please:

     location ^~ /radarr {
         auth_request /auth-0;

         # enable the next two lines for http auth
         #auth_basic "Restricted";
         #auth_basic_user_file /config/nginx/.htpasswd;

         # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
         #auth_request /auth;
         #error_page 401 =200 /login;

         include /config/nginx/proxy.conf;
         resolver 127.0.0.11 valid=30s;
         # set $upstream_radarr radarr;
         proxy_pass http://192.168.50.93:7878;
     }

     location ^~ /radarr/api {
         auth_request /auth-0;
         include /config/nginx/proxy.conf;
         resolver 127.0.0.11 valid=30s;
         # set $upstream_radarr radarr;
         proxy_pass http://192.168.50.93:7878;
     }
  9. Ahh, I understand what you are saying now. I'm on the move so I can't check easily, but I think the upload script has delete-empty-src-dirs set to on, which explains this behaviour. Edit: I just checked - it is on. All of this is controlled in your share settings in unraid.
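
     For anyone following along, this is roughly the shape of the move command the upload script runs - the remote name, paths and flags here are placeholders rather than anyone's actual settings:

     # placeholder remote and paths - substitute your own
     rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: \
       --min-age 15m \
       --delete-empty-src-dirs
     # --delete-empty-src-dirs is what removes the emptied folders from /local
     # once their contents have been uploaded
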
  10. Nope, I added this to the script to try and help first-timers. Do you do this before or after the mount script? It's a bad idea to do it before the script runs, as you might get mounting issues - "mountpoint isn't empty" errors. Once rclone and mergerfs are mounted, it's 100% safe to 'create' folders in mergerfs (in reality, the folder is created in /local until uploaded to gdrive) and this is what you should do. That's the whole point - radarr/sonarr/manual rips etc get added to mergerfs and are accessible to plex regardless of what stage they are at - still residing locally or already moved to gdrive. Thanks for the beer just now - if only I could go somewhere to buy one right now!
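
     A quick sketch of the order of operations - the paths are examples only and mountpoint is a standard Linux command, not something from my scripts:

     # only create folders once the mergerfs mount is actually up
     if mountpoint -q /mnt/user/mount_mergerfs/gdrive_vfs; then
         mkdir -p /mnt/user/mount_mergerfs/gdrive_vfs/downloads
         # the folder physically lands in /mnt/user/local/... until the upload
         # script moves it to gdrive, but plex sees it via mergerfs the whole time
     fi
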
  11. Thanks for the Swedish beer donation via PayPal, whoever sent it!
  12. I do - just a bit of paranoia that if something went wrong with the upload, then the streaming mount wouldn't be impacted. Probably overkill as I think I've only had 1 or 2 API bans in over 2 years and none since I started doing this.
  13. Edit the script if you want to, or let the script create the directory.

     Create another instance of the upload script and choose copy, not move or sync.

     Yes - create more instances of the mount script and disable the mergerfs mount if you don't need it. If you want the other drives in your mergerfs mount, add the extra rclone mount locations as extra local folder locations in the mount script that creates the mergerfs mount.
  14. Of course not - it supports whatever backends rclone supports. What the streaming experience is like for non-google storage? I don't know - check the rclone forums.
  15. You don't have to encrypt your files if you don't want to - just create an unencrypted rclone remote. This is very easy to do; if you need help doing this there are other threads (although you can probably work out what you need to do in this thread), as this thread is for support of my scripts.

     In my scripts:

     RcloneMountShare="/mnt/wherever_you_want/mount_rclone" - doesn't matter as these aren't actually stored anywhere
     LocalFilesShare="/mnt/ua_hdd_or_whatever_you_called_it/local" - for the files that are pending upload to gdrive
     MergerfsMountShare="/mnt/wherever_you_want/mount_mergerfs" - doesn't matter as these aren't actually stored anywhere

     I've just checked my readme, and once you've worked out how to set up your remotes - which isn't covered, but it shows what they should look like afterwards - all the information you need is there: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/README.md
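
     For reference, an unencrypted gdrive remote only needs a handful of lines in rclone.conf - the name and values below are placeholders, and the rclone docs cover the actual setup:

     [gdrive_vfs]
     type = drive
     scope = drive
     token = {"access_token":"...","refresh_token":"...","expiry":"..."}
     team_drive = <teamdrive id, if you use one>
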
  16. In your logs: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service/sa_gdrive.json: no such file or directory
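
     Quickest check is whether that file actually exists at the path rclone is looking for (the path below is taken straight from the log):

     ls -l /mnt/user/appdata/other/rclone/service/sa_gdrive.json
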
  17. @live4ever all looks ok. Look in /mnt/user/mount_rclone and /mnt/user/local and you should see the source of the weird files - maybe you did a duff upload somewhere. Either way - if you don't need them (unlikely), just delete and they should go away.
  18. There's your problem - your remotes aren't set up properly or don't exist:

     - superplex_media_vfs
     - superplex

     If you'd read your logs you would have seen this.
  19. You don't have a remote called gdrive_upload_vfs in your rclone config.
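
     Easiest way to see what remotes you actually have is rclone's own command (nothing script-specific):

     rclone listremotes
     # every remote name the scripts reference has to appear in this list,
     # spelt exactly the same
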
  20. I would:

     1. create another version of the mount script to mount your unencrypted media in another location e.g. /mnt/user/mount_rclone/2nd_remote, and remember to choose not to create a mergerfs mount
     2. then, in your main mount script, set LocalFilesShare2 to that location i.e. LocalFilesShare2="/mnt/user/mount_rclone/2nd_remote"

     Looking at your last post with your mount script, it looks like you're using an old version or you've cut bits out, as you don't have a LocalFilesSharex section. Make sure you are using the latest from https://github.com/BinsonBuzz/unraid_rclone_mount
  21. You won't get API bans if you use these scripts. Teamdrives don't need client IDs and secrets if set up correctly. I know this has become a long thread, but if you read the early posts and use the search, a lot of the questions you are asking are easy to find answers to.
  22. @animeking script looks fine and you should be seeing files from /mnt/user/local/gdrive_vfs and /mnt/user/mount_rclone/gdrive_vfs in /mnt/user/mount_mergerfs/gdrive_vfs. To test, add something manually to /mnt/user/local/gdrive_vfs and see if it appears in the mergerfs folder. Then run the upload script and see if it moves from /mnt/user/local/gdrive_vfs to /mnt/user/mount_rclone/gdrive_vfs - or just drag/copy a small file to /mnt/user/mount_rclone/gdrive_vfs and see if it still shows up in /mnt/user/mount_mergerfs/gdrive_vfs. If not, please post your mount logs after running.
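
     If it helps, the whole test is just this (paths as per the script settings above):

     # should appear in the mergerfs folder straight away
     touch /mnt/user/local/gdrive_vfs/test.txt
     ls /mnt/user/mount_mergerfs/gdrive_vfs/

     # after the upload script has run it should have moved to the rclone mount,
     # while still being visible in the mergerfs folder
     ls /mnt/user/mount_rclone/gdrive_vfs/
     ls /mnt/user/mount_mergerfs/gdrive_vfs/
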
  23. Post your mount script and rclone config please - remember to blank out your passwords etc. Can you use the Code button on the forum to post, as it's easier to read and reply to?
  24. Best thing to do is move the files within gdrive so you don't hit the 750GB/day limit. I think the answer to your config question is yes... if you've set up the SAs, you've done the hard bit.
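
     If you do the move with rclone rather than the gdrive web UI, a move between two paths on the same remote is carried out server-side, so nothing gets re-uploaded - the remote and paths below are placeholders:

     rclone move gdrive_vfs:Movies_old gdrive_vfs:Movies
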