Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


Can I suggest another upload script? I had lots of issues with big files when running the normal upload script: things got all screwy in rclone with fuse files, and it moved things to the bin and re-uploaded over and over if a file took longer than an hour to upload. The checks for seeing if the script is already running are also not working for me with the original upload script.

 

I came across another upload script that seems to work better and also has the benefit of directing which folders get uploaded where. As long as you set them up in rclone, you could have three or four shared drives and direct certain folders to each: for example, a shared drive for movies, one for TV, one for music, etc.

 

#!/bin/bash
# Rclone upload script - moves completed local files to the cloud
# Custom script specific to USER

# Lockfile - stops a second instance running alongside this one
LOCKFILE="/var/lock/$(basename "$0")"


# Rclone arguments (exclude patterns quoted so the shell doesn't expand them)
ARGS=(-P --checkers 3 --log-file /mnt/user/appdata/other/logs/upload_rclone.log -v --tpslimit 3 --transfers 3 --drive-chunk-size 32M --exclude "queue/**" --exclude "nzb/**" --exclude "intermediate/**" --exclude "complete/**" --exclude "downloads/**" --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle**" --exclude "*.backup~*" --exclude "*.partial~*" --exclude "*.log~*" --delete-empty-src-dirs --bwlimit 8M --min-age 30m --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36")

# Create exclusion file
#touch /home/user/.config/rclone/upload_excludes


(
  # Wait up to 5 seconds for an exclusive lock, or exit
  flock -x -w 5 200 || exit 1

  # Move older local files to the cloud
  rclone move /mnt/user/local/google_vfs/movies gdrive_media_vfs:movies "${ARGS[@]}"
  rclone move /mnt/user/local/google_vfs/tv gdrive_media_vfs:tv "${ARGS[@]}"

) 200> "${LOCKFILE}"

This script is just an example and should work for a standard install from here, but if you want to use extra drives - say, for ebooks - make a shared drive called ebooks, configure it in rclone, and add the drive to your mount command. Then you just add another rclone move command to the script I posted here:

rclone move /mnt/user/local/google_vfs/ebooks gdrive_ebooks:ebooks "${ARGS[@]}"

This is just an example, but it seems to work much better. You also don't really need so many excludes, as the files have already been processed; I added them anyway, but I'm sure they can be removed.

 

 

Edited by Porkie
Link to comment
1 hour ago, Spladge said:

Perhaps mention what those variables are if they need to be used / changed?

There is also an rclone docker from hotio available that I am meaning to try. May make scripting easier.

I've set out what the variables do at the top of each script. As for the hotio docker, it has a different purpose to the scripts.

Link to comment

Yes - I meant including the variables on the wiki/instructions. The docker would replace the rclone app - what I meant by that is you could supply a pre-filled rclone.conf file (with variables) to match up to the script you have here. Just another idea in terms of automating - not suggesting there is anything wrong.

Link to comment
6 hours ago, senpaibox said:

So I modified the scripts for myself and a few of my friends, but I wanted to see if anyone can spot anything wrong with them or ways to improve them. I also want to say thanks a lot for these scripts - they've been so useful. I'm a novice when it comes to coding, so keep that in mind :P Thanks

 

Github Link

Thanks for this.  I started this thread not just to share, but also to find ways to improve my own scripts.  I'm going to incorporate how you've created the variables (I'll rename some, as I don't think you've used the best names) and a few other things today.  I'm hoping you'll then be able to submit pull requests for any improvements on your end in the future.

Link to comment

Update:

 

Thanks to inspiration from @senpaibox I've made a major revision this evening to the scripts on github

  • They are now much easier to set up through the use of configurable variables
  • Much better messaging
  • The upload script has better --bwlimit options allowing daily schedules, and faster or slower uploads without worrying about daily quotas (rclone 1.51 upwards needed). E.g. you can now do a 30MB/s upload job overnight for 7 hours to use up your quota, rather than a slow 10MB/s trickle over the day - or schedule a slow trickle over the day and a max-speed upload overnight (see the sketch after this list)
  • Option to bind individual rclone mounts and uploads to different IPs. I use this to put my mount traffic in a high-priority queue on pfSense, and my uploads in a low-priority one
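For anyone unfamiliar with the new --bwlimit scheduling, here's a minimal sketch of a timetable - the syntax is standard rclone, but the times, speeds and paths are purely illustrative:

# Hypothetical schedule: trickle at 1MB/s during the day, open up to
# 30MB/s overnight (times and speeds are examples only)
rclone move /mnt/user/local/google_vfs gdrive_media_vfs: \
    --bwlimit "08:00,1M 23:00,30M 06:00,1M" \
    --min-age 30m --delete-empty-src-dirs -v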

If you haven't switched from unionfs to mergerfs, I really recommend that you do now - the layout of the new scripts should make it easier to do so.

 

 

These are now the scripts I'm using myself (except that my upload script is modified to rotate remotes so I can upload more than 750GB/day), so it'll be easier for me to maintain.


I've also updated the first two posts in this thread to align with the new scripts.

 

Any teething problems, please let me know.

Edited by DZMM
Link to comment

I'm getting an rclone mount error when running the rclone_mount.sh script with the bind option set to "N"

 

Script location: /tmp/user.scripts/tmpScripts/rclone_mount/script
Note that closing this window will abort the execution of this script
04.02.2020 17:20:08 INFO: *** Starting mount of remote cryptsend
04.02.2020 17:20:08 INFO: Checking if this script is already running.
04.02.2020 17:20:08 INFO: Script not running - proceeding.
04.02.2020 17:20:08 INFO: Mount not running. Will now mount cryptsend remote.
04.02.2020 17:20:08 INFO: Recreating mountcheck file for cryptsend remote.
2020/02/04 17:20:08 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "cryptsend:" "-vv" "--no-traverse"]
2020/02/04 17:20:08 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/02/04 17:20:09 DEBUG : mountcheck: Modification times differ by -2m20.521233913s: 2020-02-04 17:20:08.241233913 -0600 CST, 2020-02-04 23:17:47.72 +0000 UTC
2020/02/04 17:20:10 INFO : mountcheck: Copied (replaced existing)
2020/02/04 17:20:10 INFO :
Transferred: 32 / 32 Bytes, 100%, 17 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1.8s

2020/02/04 17:20:10 DEBUG : 7 go routines active
2020/02/04 17:20:10 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "cryptsend:" "-vv" "--no-traverse"]
04.02.2020 17:20:10 INFO: Completed creation of mountcheck file for cryptsend remote.
04.02.2020 17:20:10 INFO: *** Creating mount for remote cryptsend
04.02.2020 17:20:10 INFO: sleeping for 5 seconds
Usage:
rclone mount remote:path /path/to/mountpoint [flags]

Flags:
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem.
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem.
--umask int Override the permission bits set by the filesystem.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
Command mount needs 2 arguments minimum: you provided 1 non flag arguments: ["/mnt/user/mount_rclone/cryptsend"]
04.02.2020 17:20:15 INFO: continuing...
04.02.2020 17:20:15 CRITICAL: cryptsend mount failed - please check for problems.
 

Link to comment
On 1/25/2020 at 9:15 PM, Kaizac said:

2 PSA's:

 

1. If you want to use more local folders in your union/merge folder which are RO, you can use the following merge command and Sonarr will work - no more access-denied errors. Use either mount_unionfs or mount_mergerfs depending on what you use.


mergerfs /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

2. If you have issues with the mount script not working at array start because the docker daemon is still starting, just put your mount script on the custom schedule setting and run it every minute (* * * * *). It will then run after the array starts and will work (see the sketch below).
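For anyone wondering why an every-minute cron is safe here: the mount script's own running/mounted checks make repeat runs a no-op. A minimal sketch of that guard, with paths and remote names assumed from this thread's scripts:

# Exit early if the remote is already mounted - makes the script safe
# to schedule every minute (paths/names are this thread's examples)
if [[ -f "/mnt/user/mount_rclone/gdrive_media_vfs/mountcheck" ]]; then
    echo "INFO: mount already in place - nothing to do"
    exit 0
fi
# If the array/docker isn't ready yet this simply fails,
# and the next minute's run retries
rclone mount --allow-other --dir-cache-time 720h --poll-interval 15s \
    gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs &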

 

@nuhll both these fixes should be interesting for you.

Thanks for the tip. I don't really need multiple local folders - I just still had it in there because I wanted to stay as close as possible to the tutorial.

 

 

Link to comment
19 minutes ago, trajpar said:

I'm getting an rclone mount error when running the rclone_mount.sh script with the bind option set to "N"

[identical error log snipped - see the post above]
 

Just set it to Y and use a local IP similar to your server's and it will work - I tried with just N and got the same error too.

Link to comment
55 minutes ago, trajpar said:

I'm getting an rclone mount error when running the rclone_mount.sh script with the bind option set to "N"

[identical error log snipped - see the post above]
 

Sorry about that - didn't test that option.

 

Change:

$RcloneMountLocation &

to:

$RcloneRemoteName: $RcloneMountLocation &

I've fixed on github.
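For context, here's roughly where that fix lands in the mount command - a sketch using the script's variable names, not the verbatim github version:

# rclone mount needs both a remote and a mountpoint; the bug passed
# only the mountpoint, hence "needs 2 arguments minimum" in the log
rclone mount --allow-other --dir-cache-time 720h --poll-interval 15s \
    "$RcloneRemoteName:" "$RcloneMountLocation" &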

 

@Porkie apologies as well

Edited by DZMM
Link to comment

Update: rather than getting creative with making google accounts to create the additional remotes, you can create a service account using this guide and then use as many of the credentials as you need to create additional remotes - rclone guide here

 

I've just made some more updates to the script that will be of interest to any users who have an upload speed > 70Mbps and want to upload more than the 750GB/day limit set by Google (per remote and per user/team drive), or just want to upload without a --bwlimit and not get locked out for 24 hours.

 

The new script now allows theoretical uploads of nearly 11TB/day with a Gbps connection.  I say theoretical as, with my Gbps connection, I got max upload speeds to Google of around 700-800Mbps, giving a daily potential of around 8TB - but I had other things going on.  I probably could have gone faster, as I did some tdrive->tdrive transfers last month and rclone was reporting 1.7Gbps.

 

I hadn't shared how I did this before as my script was quite clunky - a couple of us got it working, but I've now managed to make it easier for anyone else to set up in the new scripts.

 

I also didn't share because my old script only worked if you had less than 750GB/day in the upload queue - otherwise, the script would get stuck for up to 24 hours.  Now, thanks to the --drive-stop-on-upload-limit flag added in rclone 1.51, the behaviour is much better: if the upload run hits the 750GB/day limit it stops, rather than hammering away at Google for up to 24 hours.  My script takes advantage of this and uses a different account for the next run, i.e. in 5 mins or whatever cron schedule you've set (see the sketch below).
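To make the rotation concrete, here's a minimal sketch of the counter mechanism - not the verbatim github script, just the idea, using the remote names from the examples later in this post:

# Read the previous run's counter file (counter_1, counter_2, ...)
CounterFile=$(find /mnt/user/appdata/other/rclone/gdrive_media_vfs/ -name 'counter_*')
Counter="${CounterFile##*_}"   # take the trailing number off the filename
Counter="${Counter:-1}"        # default to 1 on the first ever run

# Compose this run's upload remote, e.g. gdrive_counter3_vfs
UploadRemote="gdrive_counter${Counter}_vfs"

# Upload; --drive-stop-on-upload-limit aborts cleanly at the 750GB/day quota
rclone move /mnt/user/local/gdrive_media_vfs "${UploadRemote}:" \
    --drive-stop-on-upload-limit --min-age 10m -v

# Rotate so the next run uses the next account (wrapping after 15)
[[ -n "$CounterFile" ]] && rm -f "$CounterFile"
touch "/mnt/user/appdata/other/rclone/gdrive_media_vfs/counter_$(( Counter % 15 + 1 ))"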

 

Setup should now take a maximum of 30-60 mins (stage 3 below) if you need the full 14-15 accounts for a 1Gbps upload.  You could also just dabble with a few and then add more when needed - e.g. one extra account would allow 1.5TB/day, which should be enough for most users.

 

How It Works

 

1. Your remote needs to mount a team drive, NOT a normal gdrive folder.  Create one if you don't have one.

 

If you haven't done this yet, creating a team drive is easy, and moving the files from your gdrive-->tdrive will be mega fast as you can do it server-side using server_side_across_configs = true in your remote settings and this new updated script - just follow these instructions (quoted later in the thread) to do it quickly:

 

 

 

2. Share your new team drive with other UNIQUE google accounts

 

Google's 750GB/day quota is not only per remote, but also per user per team drive, i.e. if you have 2 people sharing a team drive, they can both upload 750GB each = 1.5TB/day; 4 users = 3TB/day, and so on.

 

So, to max out your upload, you just need to decide how many users need access to the team drive based on how fast your connection is, how much you might upload daily, and how long your upload job is scheduled for.  E.g. for a 1Gbps connection:

 

- 24x7 upload: 14-15 users (1000Mbps / 8 = 125MB/s; 125MB/s x 60 x 60 x 24 = 10.8TB/day; 10.8 / 0.75TB = 14.4) = 14-15 extra users and remotes

- Uploading for 8 hours overnight: 5 users (3.6TB) = 5 extra users and remotes

- Script running every 5 mins with no --bwlimit: as many accounts/remotes as needed to cover however much data you download

 

UPDATE: I advise NOT using your existing mounted remote to upload this way, to avoid it getting locked out.  Use your existing remote just to mount.

 

If you want to add 14-15 google accounts with access to the teamdrive, you might have to get a bit creative with finding accounts to invite.  I had another google apps domain that helped, where I gave those users access, plus a few gmail.com accounts I could use as well.

 

3. Create the extra remotes and corresponding encrypted remotes

 

Because each of the accounts in #2 above has access to the new teamdrive, they can all create mounts to access an extra 750GB/day per account.  To do this, create rclone remotes as usual - BUT for the client_id and client_secret of each remote, CREATE AND USE a DIFFERENT google account from #2.  This is because each user can only upload 750GB/day, regardless of which remote did it.

 

For each of your new remotes, use the SAME TEAMDRIVE and the same CRYPT LOCATION.  i.e. if your main config looks like this:

 

[gdrive]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":"Google_Generated"}
server_side_across_configs = true

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

then your first new remote for fast uploading should look like this:

[gdrive_counter1]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":"Google_Generated"}

[gdrive_counter1_vfs]
type = crypt
remote = gdrive_counter1:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

gdrive_counter1:

- Recommended (so you don't lose track!): make sure you give each unencrypted remote the same name before the number (gdrive_counter)

- use a unique CLIENT_ID and SECRET

- make sure each remote is using the same TEAM DRIVE

- when creating the token using rclone config, remember to use the google account that matches the client_id and client_secret

 

gdrive_counter1_vfs:

- IMPORTANT:  Each encrypted remote HAS TO HAVE the same characters before the number (gdrive_counter) OR THE SCRIPT WON'T WORK

- IMPORTANT:  Each encrypted remote HAS TO HAVE the same characters after the number (_vfs) OR THE SCRIPT WON'T WORK

- IMPORTANT: remote needs to be :crypt to ensure files go in the same place

- IMPORTANT: PASSWORD1 and PASSWORD2 (i.e. what's entered in rclone config not the scrambled versions) need to be the same as used for gdrive_media_vfs

 

That's it. 

 

Once finished, your rclone config should look something like this:

 

[gdrive]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":"Google_Generated"}
server_side_across_configs = true

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

[gdrive_counter1]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":"Google_Generated"}

[gdrive_counter1_vfs]
type = crypt
remote = gdrive_counter1:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

[gdrive_counter2]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":"Google_Generated"}

[gdrive_counter2_vfs]
type = crypt
remote = gdrive_counter2:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

...

[gdrive_counter15]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":"Google_Generated"}

[gdrive_counter15_vfs]
type = crypt
remote = gdrive_counter15:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

4. Enter Values Into Script

 

Once complete, just fill in this section in the new upload script:

# Use multiple upload remotes for multiple quotas
UseMultipleUploadRemotes="Y" # Y/N. Choose whether you want to rotate multiple upload remotes for increased quota (750GB x number of remotes)
RemoteNumber="15" # Integer number of remotes to use.
RcloneUploadRemoteStart="gdrive_counter" # Enter characters before counter in your remote names ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs, gdrive_counter16_vfs enter 'gdrive_counter'
RcloneUploadRemoteEnd="_vfs" # Enter characters after counter ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs, gdrive_counter16_vfs enter '_vfs'

 

Edited by DZMM
Link to comment
58 minutes ago, DZMM said:

1. Your remote needs to mount a team drive NOT a normal gdrive folder.  Create One if you don't have one

 

If you haven't done this yet, creating a team drive is easy and moving the files from your gdrive-->tdrive will be mega fast as you can do it server side using server_side_across_configs = true in your remote settings and this new updated script

To move files between your gdrive and new teamdrive, the easiest way is to:

  1. Stop your current rclone mount + plex, radarr etc - any dockers that need to access the mount
  2. Log into gdrive with your 'master' account i.e. one that can access both the gdrive folder and the teamdrive
  3. click on 'crypt' in the gdrive folder and use the move command to move the folder to the teamdrive
  4. The files will then get moved fairly quickly
  5. Adjust your rclone mount and upload scripts to use the new tdrive based remotes
  6. It's best to wait until the move is finished before remounting, because rclone might not see any server side changes made after mounting for a while
  7. Once the old gdrive folder is empty, start the mount with the new tdrive remote
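If you'd rather do the move with rclone than in the web UI, a server-side sketch might look like this - the remote names gdrive: and tdrive: are assumptions, and the flag is the command-line twin of the server_side_across_configs config setting:

# Move the encrypted 'crypt' folder server-side from the old gdrive
# remote to the new teamdrive remote (no local download/re-upload)
rclone move gdrive:crypt tdrive:crypt \
    --drive-server-side-across-configs -v --dry-run
# re-run without --dry-run once the listing looks right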
Edited by DZMM
Link to comment
25 minutes ago, DZMM said:


2. Share your new team drive with other UNIQUE google accounts

 

Google's 750GB/day quota is not only per remote, but also per user/team drive i.e. if you have 2 people sharing a team drive, they both can upload 750GB each = 1.5TB/day, 4 users = 3TB/day and so on.

I forgot to add: because you can create multiple teamdrives, this is a good way to give a mate an unlimited 'gdrive' account, i.e. create another teamdrive and share it with them ;-)

Link to comment

I just want to say thanks @DZMM for the new versions of the scripts - combined with the multiple upload accounts, they're making my life so much easier :)

 

Just a little tidbit: if you use the multiple upload feature and have more than one "_" in your remote name, like I did, change the cut command from field 3 to field 4

from: find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' | cut -d"_" -f3

to: find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' | cut -d"_" -f4

If you don't do the above, it will not rotate the accounts :)
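To see why the field index matters: cut splits the whole path on "_", not just the counter filename, so every extra underscore in the path shifts the fields. A quick illustration, with the path assumed from the script:

path="/mnt/user/appdata/other/rclone/name_media_vfs/counter_3"
echo "$path" | cut -d"_" -f3        # -> "vfs/counter"  (wrong field)
echo "$path" | cut -d"_" -f4        # -> "3"            (correct for this name)
# A path-proof alternative: take the number off the filename itself
basename "$path" | cut -d"_" -f2    # -> "3"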

 

Link to comment
4 hours ago, Thel1988 said:

I just want to say thanks @DZMM for the new versions of the scripts combined with the multiple upload accounts...

Just a little tidbit: if you use the multiple upload feature and have more than one "_" in your remote name, change the cut command from field 3 to field 4 [commands snipped - see the post above]

 

The new scripts are a lot easier to use - it's handy for me as I have 3-4 mounts going, so being able to just edit the config section is great.

 

You've got me confused though about editing the CUT - that just extracts the number from the counter_# tracking file that the script creates and should have nothing to do with the remote name.

 

What's the name or format of the multiple upload remotes you're using?  It shouldn't matter what they are called as long as you enter the right values for what comes before and after the counter number in the remote name:

RcloneUploadRemoteStart="gdrive_counter" # Enter characters before counter in your remote names ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs,gdrive_counter16_vfs enter 'gdrive_counter'
RcloneUploadRemoteEnd="_vfs" # Enter characters after counter ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs,gdrive_counter16_vfs enter '_vfs'

so if your remotes are name_media1_vfs, name_media2_vfs etc:

 

RcloneUploadRemoteStart="name_media"
RcloneUploadRemoteEnd="_vfs"

which creates RcloneUploadRemoteStart+Counter+RcloneUploadRemoteEnd as the remote to use on each run, as in the snippet below
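In bash terms the composition is plain string concatenation - a tiny illustration with assumed values:

RcloneUploadRemoteStart="name_media"
RcloneUploadRemoteEnd="_vfs"
Counter=2
RcloneUploadRemoteName="${RcloneUploadRemoteStart}${Counter}${RcloneUploadRemoteEnd}"
echo "$RcloneUploadRemoteName"   # -> name_media2_vfs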

Edited by DZMM
Link to comment

@DZMM

Awesome work on the new changes (and thanks @senpaibox). It makes the setup and updating to your most recent git scripts so much easier. This script just keeps getting better!

 

I ended up using service accounts instead of unique gdrive accounts, like @senpaibox hinted at earlier.

I couldn't get as creative as you in finding a bunch of unique accounts, and needing a phone number for account creation seemed like a hassle.  I've used Cloudbox and PlexGuide before, so I kinda knew the idea behind service accounts and figured it would translate over to your script well.

 

By using service accounts, I didn't need to bother creating a bunch of unique accounts and going through the steps of authorizing every single one. With service accounts you can add them to a google group and add that single group to your shared drive. For rclone authentication, you just reference the account's .json file in the config. Example:

[gdrive_counter1]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/Rclone/accounts/SERVICEACCOUNT01.json
team_drive = SAME TEAM DRIVE

[gdrive_counter1_vfs]
type = crypt
remote = gdrive_counter1:/encrypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

 

-----------------------------------------------------------------------------------------------------------------------------

Initially I had issues with the script not properly referencing the counter. (PSA: my knowledge is limited on this subject - I know enough to read the script and get the gist, but not enough to fix issues.) Rclone was getting fed the wrong updated $RcloneUploadRemoteName when using the counter.

 

After seeing @Thel1988's post, I figured I'd see if changing the field from 3 --> 4 would make any difference. It seems like that fixed all the issues and now everything functions as intended.

-----------------------------------------------------------------------------------------------------------------------------

 

Included some logs so hopefully you can track down what was going on:

For reference (RcloneRemoteName="gdrive_media_vfs", RcloneUploadRemoteName="gdrive_media_vfs", RcloneUploadRemoteStart="gdrive_counter", RcloneUploadRemoteEnd="_vfs") 

 

With:

CounterNumber=$(find /mnt/user/appdata/other/rclone/$RcloneRemoteName/ -name 'counter_*' | cut -d"_" -f3)

08.02.2020 22:35:34 INFO: rclone installed successfully - proceeding with upload.
vfs/counter
/tmp/user.scripts/tmpScripts/New_Rclone_Upload/script: line 85: [[: vfs/counter: division by 0 (error token is "counter")
08.02.2020 22:35:34 INFO: No counter file found for gdrive_media_vfs. Creating counter_1.
08.02.2020 22:35:34 INFO: Adjusted upload remote name to gdrive_countervfs/counter_vfs based on counter vfs/counter.
08.02.2020 22:35:34 INFO: *** Uploading to remote gdrive_media_vfs
2020/02/08 22:35:34 DEBUG : --min-age 10m0s to 2020-02-08 22:25:34.878642852 -0500 EST m=-599.992140160
2020/02/08 22:35:34 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_media_vfs" "gdrive_countervfs/counter_vfs:" "--user-agent=gdrive_countervfs/counter_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "10m" "--exclude" ".Recycle.Bin/**" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,off 16:00,off"]
2020/02/08 22:35:34 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/02/08 22:35:34 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/02/08 22:35:34 DEBUG : downloads: Excluded
2020/02/08 22:35:34 INFO : Local file system at /usr/local/emhttp/gdrive_countervfs/counter_vfs:: Waiting for checks to finish
2020/02/08 22:35:34 INFO : Local file system at /usr/local/emhttp/gdrive_countervfs/counter_vfs:: Waiting for transfers to finish
2020/02/08 22:35:34 DEBUG : moviedocumentary/09 - The Nurse Who Loved Me.m4a: Can't move: rename /mnt/user/local/gdrive_media_vfs/moviedocumentary/09 - The Nurse Who Loved Me.m4a /usr/local/emhttp/gdrive_countervfs/counter_vfs:/moviedocumentary/09 - The Nurse Who Loved Me.m4a: invalid cross-device link: trying copy
2020/02/08 22:35:34 DEBUG : moviedocumentary/09 - The Nurse Who Loved Me.m4a: Can't move, switching to copy
2020/02/08 22:35:34 DEBUG : moviedocumentary/09 - The Nurse Who Loved Me.m4a: MD5 = e1434506ef58266cfe03aedd9a6ae72d OK
2020/02/08 22:35:34 INFO : moviedocumentary/09 - The Nurse Who Loved Me.m4a: Copied (new)
2020/02/08 22:35:34 INFO : moviedocumentary/09 - The Nurse Who Loved Me.m4a: Deleted
2020/02/08 22:35:34 DEBUG : moviedocumentary: Removing directory
2020/02/08 22:35:34 DEBUG : Local file system at /mnt/user/local/gdrive_media_vfs: deleted 1 directories
2020/02/08 22:35:34 INFO :
Transferred: 20.048M / 20.048 MBytes, 100%, 458.734 MBytes/s, ETA 0s
Checks: 2 / 2, 100%
Deleted: 1
Transferred: 1 / 1, 100%
Elapsed time: 0.0s

2020/02/08 22:35:34 DEBUG : 5 go routines active
2020/02/08 22:35:34 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_media_vfs" "gdrive_countervfs/counter_vfs:" "--user-agent=gdrive_countervfs/counter_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "10m" "--exclude" ".Recycle.Bin/**" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,off 16:00,off"]
rm: cannot remove '/mnt/user/appdata/other/rclone/gdrive_media_vfs/counter_vfs/counter': No such file or directory
/tmp/user.scripts/tmpScripts/New_Rclone_Upload/script: line 170: vfs/counter: division by 0 (error token is "counter")
08.02.2020 22:35:34 INFO: Script complete

 

 

With: 

CounterNumber=$(find /mnt/user/appdata/other/rclone/$RcloneRemoteName/ -name 'counter_*' | cut -d"_" -f4)

 

09.02.2020 11:35:42 INFO: Counter file found for gdrive_media_vfs.
09.02.2020 11:35:42 INFO: Adjusted upload remote name to gdrive_counter1_vfs based on counter 1.
09.02.2020 11:35:42 INFO: *** Uploading to remote gdrive_media_vfs
2020/02/09 11:35:42 DEBUG : --min-age 10m0s to 2020-02-09 11:25:42.791100876 -0500 EST m=-599.991213270
2020/02/09 11:35:42 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_media_vfs" "gdrive_counter1_vfs:" "--user-agent=gdrive_counter1_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "10m" "--exclude" ".Recycle.Bin/**" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,off 16:00,off"]
2020/02/09 11:35:42 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/02/09 11:35:42 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/02/09 11:35:43 DEBUG : downloads: Excluded

2020/02/09 11:35:44 INFO : Encrypted drive 'gdrive_counter1_vfs:': Waiting for checks to finish
2020/02/09 11:35:44 INFO : Encrypted drive 'gdrive_counter1_vfs:': Waiting for transfers to finish
2020/02/09 11:35:46 INFO : testdir/Alcatraz (45)/cover.jpg: Copied (new)
2020/02/09 11:35:46 INFO : testdir/Alcatraz (45)/cover.jpg: Deleted
2020/02/09 11:35:47 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.mobi: Copied (new)
2020/02/09 11:35:47 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.mobi: Deleted
2020/02/09 11:35:47 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.pdf: Copied (new)
2020/02/09 11:35:47 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.pdf: Deleted
2020/02/09 11:35:47 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.epub: Copied (new)
2020/02/09 11:35:47 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.epub: Deleted
2020/02/09 11:35:48 INFO : testdir/Alcatraz (45)/metadata.opf: Copied (new)
2020/02/09 11:35:48 INFO : testdir/Alcatraz (45)/metadata.opf: Deleted
2020/02/09 11:35:48 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.opf: Copied (new)
2020/02/09 11:35:48 INFO : testdir/Alcatraz (45)/Alcatraz - Brandon Sanderson.opf: Deleted
2020/02/09 11:35:48 DEBUG : testdir/Alcatraz (45): Removing directory
2020/02/09 11:35:48 DEBUG : testdir: Removing directory
2020/02/09 11:35:48 DEBUG : Local file system at /mnt/user/local/gdrive_media_vfs: deleted 2 directories
2020/02/09 11:35:48 INFO :
Transferred: 1.695M / 1.695 MBytes, 100%, 368.139 kBytes/s, ETA 0s
Checks: 12 / 12, 100%
Deleted: 6
Transferred: 6 / 6, 100%
Elapsed time: 4.7s

2020/02/09 11:35:48 DEBUG : 14 go routines active
2020/02/09 11:35:48 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_media_vfs" "gdrive_counter1_vfs:" "--user-agent=gdrive_counter1_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "10m" "--exclude" ".Recycle.Bin/**" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,off 16:00,off"]
09.02.2020 11:35:48 INFO: Created counter_2 for next upload run.
09.02.2020 11:35:48 INFO: Script complete

 

Again, I don't know exactly what is going on with changing the field, but it seems to have fixed my issues. In both cases the counter file would be created; however, when f3 was used, the command would reference $RcloneUploadRemoteName as "gdrive_countervfs/counter_vfs:" regardless of the counter value in appdata/other/rclone/$RcloneRemoteName/counter_X

 

When f3 was used, the files/folders would be moved to /usr/local/emhttp/gdrive_countervfs/    🤔

The script would then proceed to delete the files from /mnt/local. Nothing would actually be moved to the drive, and nothing showed up in the mergerfs mount (because it had all been moved to /usr/local.....)

 

Sorry for the long post...... I wanted to include enough info so hopefully it's helpful

 

-----------------------------------------------------------------------------------------------------------------------------

TL;DR

Changing f3 --> f4 fixed my issues with the counter value being interpreted wrongly.

Service accounts work fine with this script. Simply modify each [gdrive_counterX] entry in the rclone conf to reference the location of its account .json

-----------------------------------------------------------------------------------------------------------------------------

 

Edited by watchmeexplode5
Link to comment

PlexGuide is doing something similar, and they have automated the creation of the service accounts. You specify how many, and it creates them for you; it spits out a list of accounts and you add them to your gsuite account. There must be a way to create the accounts without creating a full Gmail account. I will try to have a look to understand how it works - not that I am all that smart

Link to comment

@bedpan

Check out: https://github.com/xyou365/AutoRclone

 

That will auto-create all the service accounts and download all the .json files. AutoRclone also has a script to add them to your group for the teamdrive.

 

From there you're kinda left to edit your rclone conf by yourself. I'm sure somebody could script it (see the sketch below), but I just used some keyboard macros for quick adding. Kinda a lazy way, but I'm not the best with scripting and it only took about 5-10 minutes to add 100, so I can't really complain.
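For anyone who'd rather script that edit, here's a rough sketch of generating the config entries from a folder of service-account .json files - the paths, remote names, crypt folder and passwords are all assumptions to adapt, and rclone's own 'rclone obscure' helper produces the scrambled password strings the config expects:

#!/bin/bash
# Append one drive + crypt remote pair per service-account .json file
CONF="/boot/config/plugins/rclone/.rclone.conf"
TEAMDRIVE="YOUR_TEAMDRIVE_ID"
# the config stores obscured (not plaintext) passwords
PASS1=$(rclone obscure "PASSWORD1")
PASS2=$(rclone obscure "PASSWORD2")

i=1
for json in /mnt/user/appdata/Rclone/accounts/*.json; do
    cat >> "$CONF" <<EOF

[gdrive_counter${i}]
type = drive
scope = drive
service_account_file = ${json}
team_drive = ${TEAMDRIVE}

[gdrive_counter${i}_vfs]
type = crypt
remote = gdrive_counter${i}:crypt
filename_encryption = standard
directory_name_encryption = true
password = ${PASS1}
password2 = ${PASS2}
EOF
    i=$((i + 1))
done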

Link to comment
4 hours ago, watchmeexplode5 said:

Again, I don't know what exactly is going on with changing the field but it seems to have fixed my issues. In both cases the counter would be created, however when f3 was used the command would reference $RcloneUploadRemoteName as "gdrive_countervfs/counter_vfs:" regardless of counter value in appdata/other/rclone/$RcloneRemoteName/counter_X

I really don't know why as it works for me (just tested again).  Glad you figured it out though.

 

3 hours ago, watchmeexplode5 said:

@bedpan

Check out: https://github.com/xyou365/AutoRclone

 

[rest of quote snipped - see the post above]

@watchmeexplode5 I was curious, and even though I've already got 17 remotes I created my way, this way was definitely quicker.  It's nice to know I've got 500 accounts sitting there if I need them!

 

How did you create 100 remotes so quickly, though?  I'm just checking you didn't cut and paste the same gobbledygook PASSWORD1 and PASSWORD2 into each of the 100 remotes?  Don't you still have to create each one individually, so that each remote has a unique gobbledygook string for the password? You still have to enter the actual passwords into rclone config for rclone to obscure, right?

Edited by DZMM
Link to comment

@DZMM

Maybe I'm being an idiot and don't know what's going on, but......... aren't the rclone config PASSWORD1 and PASSWORD2 simply for decryption on rclone's side? So that a file name like DJFKLJDSASDSLF decrypts to ---> My Backup Folder.  Those passwords will all be the same to decrypt the files stored on the teamdrive - similar to copying your rclone config from one computer to the next, where you would simply copy PASSWORD1/2 so that you can decrypt. Please correct me if I'm wrong and I've made a stupid mistake and need to change something.

 

On the note about service accounts needing authentication/passwords:

(PSA: I don't know much about this so some information may be off but this is what I think is correct)

A “service account” is an account you create in your gsuite that can interact with your data and google APIs, but with limitations (one being that they cannot interact with "My Drive"; however, they can interact with teamdrives). The service accounts are auto-generated through your gsuite account via a created project, and each project can have 100 service accounts associated with it. Each account must be given permission to access the teamdrive via add members (long way of doing it) or add group (short way of doing it). Essentially they are "non-human users" functioning under your gsuite domain, and since each account counts as a "user", they each receive a 750GB/day limit (not that anybody should actually need a 75TB/day upload 🤣)
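As a rough illustration of the creation step (AutoRclone scripts all of this - the project and account names below are placeholders), the gcloud CLI equivalent looks something like:

# Create a service account under an existing project and download its key
# ("my-rclone-project" and "sa1" are hypothetical names)
gcloud iam service-accounts create sa1 --project my-rclone-project
gcloud iam service-accounts keys create SERVICEACCOUNT01.json \
    --iam-account sa1@my-rclone-project.iam.gserviceaccount.com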

 

Every single service account has a unique access token, client id, key, etc., which is stored within the .json file (contents shown below for example). This is referenced by rclone to provide authentication to the teamdrive. After authentication, it's simply like any other user: move, sync, etc. work the same as they would on a "unique account" with access to the teamdrive.

 

So authentication to the drive is done via the xxx.json which each service account has associated with it, AND decryption of file names is achieved with the SAME (i.e. not unique) password1/password2 via rclone.

 

XXXXXXXXXXXXXXXXXXXXX.json (each service account gets its own .json file, so 100 service accounts == 100 xx.json files)

{
  "type": "service_account",
  "project_id": "saf-XXXXXXXXXXXXXXXXXX-XX",
  "private_key_id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  "private_key": "-----BEGIN PRIVATE KEY-----\XXXXXXXXXXXXX\n-----END PRIVATE KEY-----\n",
  "client_email": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.iam.gserviceaccount.com",
  "client_id": "XXXXXXXXXXXXXXXXXX",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/XXXXXXXXXXXXXXXXXX-qp.iam.gserviceaccount.com"
}

 

This is what I have gathered, but my info could be off. Others such as AutoRclone, Cloudbox, and PlexGuide have utilized service accounts in this manner for a while now to increase the upload limit without getting API bans. I believe PlexGuide uses a script to populate the config file with the accounts (similar to your/my setup), and from there it rotates accounts in some manner, just like yours. AutoRclone and Cloudbox utilize different scripts which run before rclone and rotate through the accounts, then feed the commands to rclone (this info could be off though - I haven't used these programs for quite some time and I'm going off memory).

 

An in-depth implementation of service accounts (for cloudbox but is a good read) can be seen here: https://drive.google.com/open?id=1LdyXb5AyqV8_A_CFeOp9DE0SkhXPEn3VPQhPJVOdUv0

 

For editing the config, I made a stupidly simple macro that just flipped between two notepads with the info needed. I had a list of all the service account names (from an ls command issued in the folder that contains my sa .json accounts) in one notepad, and the other notepad had my rclone conf with 100 blank counter entries needing to be edited to reflect the correct path.  The macro simply copied the first line of the XX.json, alt-tabbed to the conf file, pasted in the correct location, then alt-tabbed back, keyed down one line and repeated the procedure. An extremely barbaric way of mass editing, but it worked fast (5 minutes to get the macro down, and about 10 seconds to run it and edit the file).

 

There is probably an easy way of scripting the config edit, but my neanderthal macro seemed to work well without the need to learn scripting 🤤

 

Edited by watchmeexplode5
Link to comment
2 minutes ago, watchmeexplode5 said:

Maybe I'm being an idiot and don't know whats going on but......... isn't the rclone config PASSWORD1 and PASSWORD2 simply for decryption on rclones side, correct? So that a file name like DJFKLJDSASDSLF decrypts to ---> My Backup Folder.  So those passwords will all be the same to de-crypt the files stored on the teamdrive. Similar if you were copying your rclone config from one computer to the next. You would simply copy the PASSWORD1/2 so that you can decrypt. Please correct me if I'm wrong and made a stupid mistake and need to change something.

I'm 100% certain that you need to run rclone config to create each of the additional remotes, and during config enter the same real 'readable' PASSWORD1 and PASSWORD2; rclone then obscures them in the config - otherwise your password would be visible in the config file.

 

If you want to test: upload a new file, e.g. test.txt, to the root of your crypt with the new gdrive_counter1_vfs, and then see if it appears in your main mount.  If the passwords don't match, it will appear (encrypted) in the team drive folder but it won't appear decrypted in your mount.
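A quick way to run that test from the command line, using the remote names from this guide:

# Upload a marker file through the new counter remote...
echo "test" > test.txt
rclone copy test.txt gdrive_counter1_vfs:
# ...then check whether it decrypts through the main remote; if the
# crypt passwords match, it will be listed here
rclone ls gdrive_media_vfs: --include "test.txt"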

Link to comment
