Bjur

Everything posted by Bjur

  1. Okay, have you tried it? I just read earlier today on rclone that it was not a good idea, so I want to make sure before I try it out. I don't want to mess with the upload I'm running, since it will take a long time to finish. Thanks for the help.
  2. @watchmeexplode5: Thanks. So I think I will create a manual download folder at /mnt/local/Downloads, with /mnt/local/Downloads/temp and /mnt/local/Downloads/finished, and then let Filebot move the finished files to /mnt/local/googleFi_crypt and /mnt/local/googleSh_crypt (sketched below). What happens when you move files into the local folder while an upload is running? Will that be fine, or will it mess the upload up?
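     A rough sketch of that layout as shell commands, assuming the /mnt/local paths named above (adjust if the local share actually lives under /mnt/user/local):
        mkdir -p /mnt/local/Downloads/temp        # manual/unfinished downloads
        mkdir -p /mnt/local/Downloads/finished    # completed downloads for Filebot to pick up
        mkdir -p /mnt/local/googleFi_crypt        # Filebot moves finished movies here
        mkdir -p /mnt/local/googleSh_crypt        # Filebot moves finished series here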
  3. @watchmeexplode5 Would it make more sense for me to make a download folder at /local/download instead of /local/REMOTE/downloads, limit the local folder share to only one disk, and have the completed folders as /local/googleFi_crypt/movies and /local/googleSh_crypt/series? It should then be easy to move the completed files quickly and still have a separate download folder outside the remotes. Does that make sense?
  4. Thanks. When using Krusader nothing is there, even with show hidden files enabled. When browsing in the shell it showed the mountcheck file. Even after deleting it, it wouldn't mount correctly and show the folders/mountcheck in mergerfs. I think I have it working now, but I will report back if not. Thanks.
  5. It places a mountcheck file in /mnt/user/rclone_mount/googleSh_crypt, and if I delete it and run the script again the same error occurs.
  6. On my latest reboot I get this from the second mount script, and the mergerfs directory is empty (some checks are sketched after the log):
     Script Starting May 14, 2020 11:30.01
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_sh/log.txt
     14.05.2020 11:30:01 INFO: Creating local folders.
     14.05.2020 11:30:01 INFO: *** Starting mount of remote googleSh_crypt
     14.05.2020 11:30:01 INFO: Checking if this script is already running.
     14.05.2020 11:30:01 INFO: Script not running - proceeding.
     14.05.2020 11:30:01 INFO: Mount not running. Will now mount googleSh_crypt remote.
     14.05.2020 11:30:01 INFO: Recreating mountcheck file for googleSh_crypt remote.
     2020/05/14 11:30:01 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "googleSh_crypt:" "-vv" "--no-traverse"]
     2020/05/14 11:30:01 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
     2020/05/14 11:30:03 DEBUG : mountcheck: Modification times differ by -15m0.133405247s: 2020-05-14 11:30:01.832405247 +0200 CEST, 2020-05-14 09:15:01.699 +0000 UTC
     2020/05/14 11:30:05 INFO : mountcheck: Copied (replaced existing)
     2020/05/14 11:30:05 INFO : Transferred: 32 / 32 Bytes, 100%, 19 Bytes/s, ETA 0s
     Transferred: 1 / 1, 100%
     Elapsed time: 1.6s
     2020/05/14 11:30:05 DEBUG : 11 go routines active
     2020/05/14 11:30:05 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "googleSh_crypt:" "-vv" "--no-traverse"]
     14.05.2020 11:30:05 INFO: *** Creating mount for remote googleSh_crypt
     14.05.2020 11:30:05 INFO: sleeping for 5 seconds
     14.05.2020 11:30:10 INFO: continuing...
     14.05.2020 11:30:10 INFO: Successful mount of googleSh_crypt mount.
     14.05.2020 11:30:10 INFO: Mergerfs already installed, proceeding to create mergerfs mount
     14.05.2020 11:30:10 INFO: Creating googleSh_crypt mergerfs mount.
     14.05.2020 11:30:10 INFO: Checking if googleSh_crypt mergerfs mount created.
     14.05.2020 11:30:10 INFO: Check successful, googleSh_crypt mergerfs mount created.
     14.05.2020 11:30:10 INFO: Starting dockers.
     sonarr
     14.05.2020 11:30:10 INFO: Script complete
     Script Finished May 14, 2020 11:30.10
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_sh/log.txt

     14.05.2020 11:34:57 INFO: Creating local folders.
     14.05.2020 11:34:57 INFO: *** Starting mount of remote googleSh_crypt
     14.05.2020 11:34:57 INFO: Checking if this script is already running.
     14.05.2020 11:34:57 INFO: Script not running - proceeding.
     14.05.2020 11:34:57 INFO: Mount not running. Will now mount googleSh_crypt remote.
     14.05.2020 11:34:57 INFO: Recreating mountcheck file for googleSh_crypt remote.
     2020/05/14 11:34:57 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "googleSh_crypt:" "-vv" "--no-traverse"]
     2020/05/14 11:34:57 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
     2020/05/14 11:34:58 DEBUG : mountcheck: Modification times differ by -4m55.337592306s: 2020-05-14 11:34:57.169592306 +0200 CEST, 2020-05-14 09:30:01.832 +0000 UTC
     2020/05/14 11:35:00 INFO : mountcheck: Copied (replaced existing)
     2020/05/14 11:35:00 INFO : Transferred: 32 / 32 Bytes, 100%, 20 Bytes/s, ETA 0s
     Transferred: 1 / 1, 100%
     Elapsed time: 1.5s
     2020/05/14 11:35:00 DEBUG : 11 go routines active
     2020/05/14 11:35:00 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "googleSh_crypt:" "-vv" "--no-traverse"]
     14.05.2020 11:35:00 INFO: *** Creating mount for remote googleSh_crypt
     14.05.2020 11:35:00 INFO: sleeping for 5 seconds
     2020/05/14 11:35:01 Fatal error: Directory is not empty: /mnt/user/mount_rclone/googleSh_crypt
     If you want to mount it anyway use: --allow-non-empty option
     14.05.2020 11:35:05 INFO: continuing...
     14.05.2020 11:35:05 INFO: Successful mount of googleSh_crypt mount.
     14.05.2020 11:35:05 INFO: Check successful, googleSh_crypt mergerfs mount in place.
     14.05.2020 11:35:05 INFO: dockers already started.
     14.05.2020 11:35:05 INFO: Script complete
     Script Finished May 14, 2020 11:35.05
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_sh/log.txt
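     A few hedged checks that could narrow down the "Directory is not empty" error above (standard Linux/FUSE commands, not part of the mount script):
        ls -la /mnt/user/mount_rclone/googleSh_crypt            # what is left inside the mountpoint?
        mount | grep googleSh_crypt                             # is a stale rclone or mergerfs mount still attached?
        fusermount -uz /mnt/user/mount_rclone/googleSh_crypt    # lazy-unmount a stale FUSE mount (use with care)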
  7. Thanks @Kaizac, @watchmeexplode5 and @DZMM for shedding some light on the topic. @DZMM, sorry for bringing it up in this thread; it was my fault, not @watchmeexplode5's. I agree there should be a donate button for the great work you guys have done.
  8. I don't know, I just started, which is why I'm asking people who have more experience with this. If Google stops the unlimited service because of people encrypting, would there be a longer grace period to get the stuff back to local storage, or would they just freeze people's data? Is this a likely scenario?
  9. I tried it. I'm not that worried, I just want to find the quickest way to get my data into the cloud. If I move it directly to the mergerfs folder it gets moved to another disk, which takes some time, instead of staying on the same disk. I can't get the mergerfs share onto the same disk. How can I do that? I tried in the share settings, but it gets overruled.
  10. Okay, just so I understand before I move my precious data: the best thing for me would be to move all the data into the local folder first and upload it from there. Afterwards it will be moved off my drives entirely but still be visible in the mergerfs folder (roughly as sketched below)?
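     If I've understood the flow correctly, it looks roughly like this (paths assumed from the mount script settings quoted further down; FilmX is just a made-up example):
        /mnt/user/local/googleFi_crypt/movies/FilmX          # physically here before the upload script runs
        /mnt/user/mount_rclone/googleFi_crypt/movies/FilmX   # here (i.e. in the cloud) after the upload script moves it
        /mnt/user/mount_mergerfs/googleFi_crypt/movies/      # merged view - FilmX stays visible here the whole time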
  11. I think I got it working now. What I did was change the virtual IP number from 2 to 3, change the bind IP to 253 instead of 252, create an extra mergerfs_sh folder, and change mountcheck to mountcheck2. The last two edits made it work (the changed settings are sketched below). So the next step is to get my shares uploaded. My question now is: if I want to upload /mnt/user/videos to my Team Share drive, should I move it to /mnt/user/local/GCryptFi (name of remote) and upload it to the Team Share drive, or should I move it to /mnt/user/mergerfs/GCryptFi (name of remote) and upload it to the Team Share drive? What would be the difference in how I upload it afterwards? When testing, it's much quicker to move to the local folder, but will it give the same result in the end?
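     For reference, a sketch of the settings that end up differing between the two mount scripts (the exact name of the extra mergerfs share is assumed here, and the mountcheck-to-mountcheck2 rename is an edit inside the script body rather than a setting):
        RcloneRemoteName="googleSh_crypt"
        RCloneMountIP="192.168.1.253"                      # first script keeps 192.168.1.252
        VirtualIPNumber="3"                                # first script keeps 2
        MergerfsMountShare="/mnt/user/mount_mergerfs_sh"   # assumed path for the extra mergerfs_sh folder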
  12. Thanks both. In /mnt/user/mount_rclone/ I find both my mounts, but only the connected one, googleFi_crypt, has a mountcheck in it; the non-working googleSh_crypt does not. Removing --rc from both scripts did not make any difference. Regarding mergerfs on the disks: I didn't create a mount share manually, but the script created it and placed it on disk 3. When I go into the share settings and set the two folders to only be on disks 1+2, it still sits on disk 3. My normal local folders are what I want to upload to the cloud, without it taking a long time to move them into the folders that get uploaded. Hope it makes sense.
  13. See below, and it's not visible when I tried rclone. Follow-up question: I'm about to move from /mnt/user/videos to /mnt/user/mergerfs. The problem is the mount wants to go on disk 3 and my videos are on disks 1 & 2, so it will take a long time. How do I force the mergerfs mount onto disks 1 & 2 initially and afterwards onto disk 3? I've tried in the share settings, but it's not a normal share, and even when I include only disks 1 & 2 it still resides on disk 3 (see also the note after the two scripts below).

     Mount 1:

     # REQUIRED SETTINGS
     RcloneRemoteName="googleFi_crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
     RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
     MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
     DockerStart="sabnzbd plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
     LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
     MountFolders=\{"downloads/complete,downloads/incomplete,downloads/temp,media/movies,media/tv"\} # comma separated list of folders to create within the mount
     # Note: Again - remember to NOT use ':' in your remote name above

     # OPTIONAL SETTINGS

     # Add extra paths to mergerfs mount in addition to LocalFilesShare
     LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
     LocalFilesShare3="ignore"
     LocalFilesShare4="ignore"

     # Add extra commands or filters
     Command1="--rc"
     Command2=""
     Command3=""
     Command4=""
     Command5=""
     Command6=""
     Command7=""
     Command8=""

     CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
     RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
     NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
     VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

     Mount 2:

     #!/bin/bash
     ######################
     #### Mount Script ####
     ######################
     ### Version 0.96.6 ###
     ######################

     ####### EDIT ONLY THESE SETTINGS #######

     # INSTRUCTIONS
     # 1. Change the name of the rclone remote and shares to match your setup
     # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
     # 3. Optional: include custom command and bind mount settings
     # 4. Optional: include extra folders in mergerfs mount

     # REQUIRED SETTINGS
     RcloneRemoteName="googleSh_crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
     RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
     MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
     DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
     LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
     MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount
     # Note: Again - remember to NOT use ':' in your remote name above

     # OPTIONAL SETTINGS

     # Add extra paths to mergerfs mount in addition to LocalFilesShare
     LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
     LocalFilesShare3="ignore"
     LocalFilesShare4="ignore"

     # Add extra commands or filters
     Command1="--rc"
     Command2=""
     Command3=""
     Command4=""
     Command5=""
     Command6=""
     Command7=""
     Command8=""

     CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
     RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
     NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
     VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

     ####### END SETTINGS #######
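     One hedged note on the disk question: as far as I understand Unraid shares, the include/exclude disk settings only steer where new files are written; they don't relocate data already sitting on disk 3, and moves done through /mnt/user/... paths are copy-then-delete rather than instant renames. Moving via the disk paths keeps a same-disk move instant, for example (FilmX is a made-up name; never mix /mnt/user and /mnt/diskX paths in one command):
        mkdir -p /mnt/disk1/local/googleFi_crypt/movies
        mv /mnt/disk1/videos/movies/FilmX /mnt/disk1/local/googleFi_crypt/movies/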
  14. @watchmeexplode5 here is the log:
     Script Starting May 07, 2020 00:12.43
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_sh/log.txt
     07.05.2020 00:12:43 INFO: Creating local folders.
     07.05.2020 00:12:43 INFO: *** Starting mount of remote googleSh_crypt
     07.05.2020 00:12:43 INFO: Checking if this script is already running.
     07.05.2020 00:12:43 INFO: Script not running - proceeding.
     07.05.2020 00:12:43 INFO: Mount not running. Will now mount googleSh_crypt remote.
     07.05.2020 00:12:43 INFO: Recreating mountcheck file for googleSh_crypt remote.
     2020/05/07 00:12:43 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "googleSh_crypt:" "-vv" "--no-traverse"]
     2020/05/07 00:12:43 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
     2020/05/07 00:12:45 DEBUG : mountcheck: Need to transfer - File not found at Destination
     2020/05/07 00:12:47 INFO : mountcheck: Copied (new)
     2020/05/07 00:12:47 INFO : Transferred: 32 / 32 Bytes, 100%, 18 Bytes/s, ETA 0s
     Transferred: 1 / 1, 100%
     Elapsed time: 1.7s
     2020/05/07 00:12:47 DEBUG : 11 go routines active
     2020/05/07 00:12:47 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "googleSh_crypt:" "-vv" "--no-traverse"]
     07.05.2020 00:12:47 INFO: *** Creating mount for remote googleSh_crypt
     07.05.2020 00:12:47 INFO: sleeping for 5 seconds
     07.05.2020 00:12:52 INFO: continuing...
     07.05.2020 00:12:52 INFO: Successful mount of googleSh_crypt mount.
     07.05.2020 00:12:52 INFO: Mergerfs already installed, proceeding to create mergerfs mount
     07.05.2020 00:12:52 INFO: Creating googleSh_crypt mergerfs mount.
     07.05.2020 00:12:52 INFO: Checking if googleSh_crypt mergerfs mount created.
     07.05.2020 00:12:52 INFO: Check successful, googleSh_crypt mergerfs mount created.
     07.05.2020 00:12:52 INFO: Starting dockers.
     Error response from daemon: No such container: nzbget
     plex
     sonarr
     Error response from daemon: No such container: radarr
     Error response from daemon: No such container: ombi
     Error: failed to start containers: nzbget, radarr, ombi
     07.05.2020 00:12:52 INFO: Script complete
     Script Finished May 07, 2020 00:12.52
     Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_sh/log.txt
     PS: This is after the first mount script was started and is working (container-name check sketched below).
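     Those container errors usually just mean the names in DockerStart don't match the actual container names on the server; a quick, hedged way to check is:
        docker ps -a --format '{{.Names}}'    # list the exact container names Docker knows about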
  15. It's not easy to mount two different Team Shares, or maybe I'm doing something wrong. I have created two rclone mount scripts for my two crypts (googleFi & googleSh), but no matter what I try I can only get one mount connected, depending on which script I start first. I've tried two RcloneRemoteName entries in one mount script, but it only uses one, and I've tried creating a separate script and renaming mountcheck to mountcheck2, but still the same. How do I do it so I can have both connected?
  16. Thanks, found the error, so it's fixed. Should I just delete it then?
  17. Hi, I'm starting to get unclean shutdowns/reboots every time I shut down or reboot Unraid. I'm running on ESXi 6.7 and haven't had any problems before. What could be the cause of this?
  18. Thanks. I think I will try again with two mounts. Do you know how the mount point works? After my last reboot I couldn't get the mergerfs mounted. If I delete the mountcheck file on gdrive and in mergerfs, will the mount script recreate it the next time it runs (rough check sketched below)? I also had a problem with creating folders in mergerfs after editing the mount script, but perhaps it was because I changed movies to Movies; after changing it back it worked again.
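     Judging from the script logs earlier in this list, the mount script recreates the mountcheck file itself, so a rough manual check (the same commands the logs show) would be:
        touch mountcheck                                           # create the small test file locally
        rclone copy mountcheck googleSh_crypt: -vv --no-traverse   # push it to the remote, as the script does
        ls /mnt/user/mount_rclone/googleSh_crypt                   # it should show up here once the mount is live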
  19. @watchmeexplode5 Regarding the benefits: I can see what you mean, but if I'm going to go with two crypts, will I then only have to create two mount scripts and two upload scripts and adjust them to the different crypts? If so, it's a one-time effort to gain performance on the Team Drive, and I don't think running two extra scripts should take up many more resources on Unraid? Regarding the nfo files, you got me thinking. I made a separate Plex server to test, and it did find almost everything without nfo files. The problem right now is that if you choose a foreign language when adding a library, it won't fall back to English, so many TV shows would have empty plots. Also, when using a foreign language with local media assets enabled as an agent (lowest in the list) together with the Plex Movie agent, the genres get mixed up. What I think I will do is: move my library to the mergerfs folder, add that folder to my existing Plex library, refresh all metadata so it still uses my existing .nfo files and posters (which are customized and higher quality), remove the old folder, move all the nfo files out of my folders, remove the xbmcnfo agent from Plex, and then finally run the upload scripts. Does that sound reasonable? @DZMM Why are you creating a separate UHD mergerfs mount + drive? Plex can filter by UHD resolution, so wouldn't it be better to separate by media type (movies/shows) instead?
  20. @watchmeexplode5 Thanks again for your feedback. Ember is a fantastic and customizable tool that has really helped me a lot. Preview images: I can see I haven't enabled the video preview thumbnails you are referring to, and I'll just keep them disabled for now. File structure: what you describe looks like what I want, with the exception that I think it makes more sense to divide it into two drives, one for movies and one for TV. I can't see the benefit of having it all in one location when there is a limit on the drive. Sonarr/Radarr will just point to each drive and Sab/Filebot can do the same, so I can only see advantages in keeping them separate. Right? Regarding the mergerfs mounts: again, why is it better in my case, when the clients can already move files to the correct location from the start, instead of having to move things around after reaching the limit, and according to some users it will slow down after 150k files? Why should I not separate it? @DZMM What is the benefit for you of having movies & TV in one shared drive rather than dividing it, when Plex can use multiple folder locations, if I'm not mistaken?
  21. @watchmeexplode5: Wow, thanks again for a walk-through answer, much appreciated. And thanks to @testdasi and @DZMM for the answers as well.
     - If I start with the metadata: I use Ember to generate the metadata files, so I'm sure all the data is the way I want it, also because some of the .nfos/posters are non-English. I use XBMCnfoMoviesImporter with Plex to add them. I don't know if my use case for this is still the smartest approach, or whether I should just let Plex take care of it all in the future; much of the non-English content will be harder to match, I think.
     - Chapter/preview images: I'm not sure if I use that with Plex now. I get chapters on some titles, but perhaps that's because they are embedded in the files; where do I check that? I guess the scrubbing is more an Apple TV feature with their remote. I have both an Apple TV and a Shield but am using a Logitech remote. I have disabled video thumbnails though, because they would take up way too much space for what they do (which shouldn't matter in the future). I don't know how much benefit there is to gain with this.
     - Downloader/Filebot folder location: this is where I get a little confused. When you refer to ../local/REMOTE/xxxx, is that what the LocalFilesShare="/mnt/user/local" option in the upload script refers to? So that will be my local user share folder, which eventually gets uploaded to the cloud? I have the following now: /mnt/user/local/googleFI_crypt. So I should let Sab use temp: /mnt/user/local/googleFI_crypt/Downloads/temp and finished: /mnt/user/local/googleFI_crypt/Downloads/Finished, with the Downloads folder excluded in the scripts. Then I should let Filebot look in /mnt/user/local/googleFI_crypt/Downloads/Finished and move/rename to the /mnt/user/local/googleFi_crypt/Movies folder. Have I understood that correctly? I saw a user mention an option to keep new files local for 7 days and then upload, but that shouldn't apply here? The local files will be uploaded to the cloud when the upload script runs?
     - Encrypted vs. unencrypted: thanks for the explanation. I think I will go with encrypted though. Even though they haven't looked yet, they could in the future, and people's stuff could get banned/deleted.
     @testdasi: The metadata stuff is only my own created nfo files etc., not the Plex metadata; that I will keep as normal in the appdata folder on my cache SSD. That makes sense, right? Thanks for the recommendation for Photos, I will look into that after this is up and running :)
     @DZMM: Regarding only one mergerfs mount, you probably already covered this; it is very new to me, so sorry for the stupid questions. But wouldn't it then make sense to have two mount scripts, two upload scripts and two crypts, one for movies and one for shows, to divide it in case I someday reach the limit? And last but not least, with a lot of small .nfo metadata files, would best practice be to keep them in the /mnt/user/mount_mergerfs/movies folders where all the metadata is, and then use Command2="--exclude *.nfo" etc. in the upload script (see the sketch below)? Thanks again all for the BIG help you are providing :)
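     A hedged sketch of that exclude idea, using the Command slot from the scripts and, for a manual test, plain rclone (paths assumed from this post; adjust to the real upload source):
        Command2="--exclude *.nfo"    # keep the small metadata files out of the upload
        # manual dry run to see what would be skipped/moved:
        rclone move /mnt/user/local/googleFI_crypt googleFI_crypt: --exclude "*.nfo" --dry-run -v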
  22. @watchmeexplode5 Thank you very much for this elaborate answer/guide, and also to @DZMM for making it all possible. My videos are already organized as you describe above, in a Plex-friendly format (Filebot is excellent), and I also only have the year in the title to make things easier. I will try to explain my thinking on the first point, regarding two remote crypts (videos & shows). The reason is that I read that a Team Drive "only" supports 400000 files in total. With metadata files that fills up quickly, and when I reach 400000 files I will have a serious problem, I think? What are your thoughts on that? I will skip the separate upload remote then, if you don't think it's necessary. So, to understand completely: the right approach would be to have the Sab download client download to a standard user share and have Filebot move the result to the /mnt/user/mergerfs/shows folder? It would not make sense to give Sab temp/finished directories inside the /mnt/user/mergerfs/ folders, correct? A note on the last section: what are your thoughts on creating a non-secure remote? I thought about creating a separate Team Drive for my home pictures, but Google Photos would be the thing to use for that, and it has its own app entirely, so a Team Drive wouldn't really be used for that, right? Lastly, are you using sync or move when you upload? I'm a little afraid to use sync, because I read that it can cause problems if you are not careful how you use it. So my thought is to copy my current /mnt/user/movies piece by piece (since I don't have enough space for duplication) to /mnt/user/mergerfs/movies, use move for the upload, and once I see it's working as it should, delete most of my local copies except the ones I really don't want to risk losing (see the sketch below). Does that sound sane? Again, thanks for the help, it's VERY appreciated, and I'm really looking forward to using this solution. It's the best tech add-on I've seen in ages and I'm excited to try it :) It will really help and make my HDDs obsolete :) PS: Your web GUI sounds/looks very interesting and would really be helpful. Nice work.
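     For what it's worth, my understanding of move vs. sync (worth double-checking against the rclone docs): move transfers files and then deletes them from the source, while sync makes the destination identical to the source, deleting anything extra at the destination, so a wrong source path can wipe cloud copies. A dry run previews either one, e.g.:
        rclone move /mnt/user/local/googleFI_crypt googleFI_crypt: --dry-run -v    # preview only, nothing is transferred or deleted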
  23. Regarding the remote: I have two types of videos and have created two remotes and two crypts. 1. googleFI_Crypt (movies) points at googleFI, containing one type of video I want to stream. 2. googleSh_Crypt (shows) points at googleSh, containing another type of video I want to stream. I want to upload my videos to these two crypts. Besides that, I have a DL client that I will be using for uploading. From what I can read, I should create another remote in the rclone config; I would call that remote googleUP. That remote is not encrypted itself, but I would like the content from the DL client (Sab/Sonarr) to end up in my two already-created crypts, and the upload remote should exist so that if an API ban comes it won't affect my two streaming remotes. But the two crypts already created are linked to the original remotes I set up in rclone config. So what should I add to my rclone config file? Hope it's clearer now.
  24. Thanks for the answer and this guide. 1. I read earlier in the thread that you mentioned your plugin isn't supposed to copy, only move. Will this be a safe approach, or should I manually copy my existing videos folder to the mergerfs folder? 3. I've read this quoted comment from you about the upload remote many times, but I still don't get it, sorry: "No tutorial needed. If you've setup gdrive_media_vfs to be gdrive:crypt, then just create another remote with another name pointing to the same location i.e. gdrive:crypt with the same passwords. The only difference is create a different client_ID so that one gets the ban, if any. To be honest I've only had an API ban once I think and that was when I didn't know what I was doing yonks ago." I have my two crypts defined: googleFI_Crypt, which links to googleFI, and googleSh_Crypt, which links to googleSh. So my folders containing Fi videos go to googleFI_Crypt and folders containing Sh videos go to googleSh_Crypt. When I have a download client and create a remote for it, like your gdrive_media_vfs naming, how do I link it to the two existing crypts, which are already linked to the other folders? Where/how do I point the new remote to the same crypt folder (a sketch of what I mean is below)? Thanks again for the help, much appreciated. My service accounts work, so I'm almost there and want to get the last part right.
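     Based on the quoted advice (same target and passwords, different client_ID), a hedged sketch of what the extra .rclone.conf entries could look like. The name googleUP is taken from the earlier post, the placeholder values obviously need real IDs/obscured passwords, and the crypt's remote path must match whatever googleFI_Crypt already points at (shown here as a "crypt" folder purely as an example):
        [googleUP]
        type = drive
        client_id = DIFFERENT_CLIENT_ID.apps.googleusercontent.com
        client_secret = DIFFERENT_CLIENT_SECRET
        scope = drive
        team_drive = SAME_TEAM_DRIVE_ID_AS_googleFI

        [googleUP_crypt]
        type = crypt
        remote = googleUP:crypt
        password = SAME_OBSCURED_PASSWORD_AS_googleFI_Crypt
        password2 = SAME_OBSCURED_PASSWORD2_AS_googleFI_Crypt
     Uploads pointed at googleUP_crypt: would then land in the same encrypted folder that googleFI_Crypt: reads, but any API ban would hit the googleUP client ID instead of the one used for streaming.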