Guide: How To Use Rclone To Mount Cloud Drives And Play Files

@bryansj you can remove the /downloads exclusion if you want.  But, I've read (not tried myself) that seeding from gdrive is a bad idea, as it can lead to an API ban because of all the calls.  Similarly, you can add an exclusion for your 4K content folder to keep it local, and then use gdrive for non-4K, non-seeding content on top of your local 84TB, i.e. gain access to xxxTB of extra storage.  This is what I did initially, until I went all in, uploaded everything and sold my HDDs.
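For example, keeping 4K content local while uploading everything else is just another --exclude on the upload command. A minimal sketch, assuming the 4K content lives in a folder named 4K (paths and remote name are placeholders for your own setup):

# keep seeding and 4K folders local by excluding them from the upload
rclone move /mnt/user/local/google_vfs gdrive_media_vfs: \
    --exclude "downloads/**" --exclude "4K/**" --min-age 30m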

 

I would definitely use it for backups.


I remember from my attempt a couple years ago that gdrive and downloads didn't get along, but I couldn't remember where the problem was between them.  The API ban would cause plenty of headaches.

 

I also remember there was a catch-22 back when Plex would work straight from a gdrive, before they canned that service.  You could point Plex to gdrive and your users would be able to stream from there without using your bandwidth.  However, you couldn't encrypt your media, and you risked Google deleting your content.  If you encrypt your media, it has to pass through your pipe to be decrypted, so you are back to using Plex "locally".

46 minutes ago, bryansj said:

I remember from my attempt a couple years ago that gdrive and downloads didn't get along, but I couldn't remember where the problem was between them.  The API ban would cause plenty of headaches.

 

I also remember there was a catch-22 back when Plex would work straight from a gdrive, before they canned that service.  You could point Plex to gdrive and your users would be able to stream from there without using your bandwidth.  However, you couldn't encrypt your media, and you risked Google deleting your content.  If you encrypt your media, it has to pass through your pipe to be decrypted, so you are back to using Plex "locally".

Why are you on torrents? Move to usenet and get rid of that seeding bullshit. Also, you can just direct play 4K from your gdrive - I do with files up to 80GB and it's fine. You might consider a seedbox though: you can use torrents and move to gdrive at gigabit speed.


I started as far back as MP3s in the 1990s with Usenet and moved to private trackers a few years ago.  You might have a different opinion, but I'm not going back.  I'm not talking about crappy public trackers here.  I've done seed boxes, but they don't really meet my use case anymore.

1 hour ago, bryansj said:

I started as far back as MP3s in the 1990s with Usenet and moved to private trackers a few years ago.  You might have a different opinion, but I'm not going back.  I'm not talking about crappy public trackers here.  I've done seed boxes, but they don't really meet my use case anymore.

Well, to each his own. For mainstream media, usenet is vastly superior if set up right. If you have access to private trackers and also need non-mainstream media, then torrents can bring more to the table.

Either way, I think with your setup/wishes you can use rclone for your backups and replace Crashplan with it. But you don't need all this elaborate configuration for it. Just create a Gdrive/Team Drive and DO NOT mount it. Just upload to it, and let removed/older data be written to a separate folder within Gdrive. If you get infected, the malware can't directly access the files because nothing is mounted. And in case encrypted/infected files get uploaded, you will have your old media to roll back to.

 

Just remember that when you want to access your backups, you have to create the rclone mount of the gdrive first to see the files. Or, if you don't use encryption, you can just view them through the browser.
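A minimal sketch of that approach, assuming a remote named gdrive_backup (remote and folder names are placeholders): rclone's --backup-dir moves replaced or deleted files into a dated archive folder instead of destroying them, which gives you the rollback described above:

# sync backups to an unmounted remote; displaced files land in a dated archive
rclone sync /mnt/user/backups gdrive_backup:current \
    --backup-dir gdrive_backup:archive/$(date +%Y-%m-%d)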

Edited by Kaizac


Don't seed from google drive - you'll most likely receive an API ban. Also, I've seen most people recommend keeping 4K content local if you do have a large amount of local storage.
 

It sounds like in your position, with 84TB of local storage and seemingly no real need to increase that exponentially, this just isn't for you.

On 1/25/2020 at 5:16 AM, Kaizac said:

Asking again because I'm very curious. Can you share your merger command?

Sorry Kaizac,

 

Been offline for a couple of weeks.  mergerfs command below.

mergerfs \
    /mnt/user/Media:/mnt/user/mount_rclone/google_vfs/Media:/mnt/user/mount_rclone/nachodrive/Media \
    /mnt/user/mount_unionfs/google_vfs/Media \
    -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

 


Can I suggest another upload script? I had lots of issues with big files when running the normal upload script: things got screwy in rclone with fuse files, and it would move things to the bin and re-upload over and over if a file took longer than an hour to upload. The checks for whether the script is already running also weren't working for me with the original upload script.

 

I came across another upload script that seems to work better and also has the benefit of letting you direct which folders get uploaded where. So you could have 3 or 4 shared drives and, as long as you set them up in rclone, send certain folders to each - for example, one shared drive for movies, one for tv, one for music, etc.

 

#!/bin/bash
# RClone Config file
# Custom script specific to USER

# Lockfile
LOCKFILE="/var/lock/`basename $0`"


# Rclone arguments
ARGS=(-P --checkers 3 --transfers 3 --tpslimit 3
    --drive-chunk-size 32M --bwlimit 8M --min-age 30m
    --log-file /mnt/user/appdata/other/logs/upload_rclone.log -v
    --exclude queue/** --exclude nzb/** --exclude intermediate/**
    --exclude complete/** --exclude downloads/**
    --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle**
    --exclude *.backup~* --exclude *.partial~* --exclude *.log~*
    --delete-empty-src-dirs
    --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36")

# Create exclusion file
#touch /home/user/.config/rclone/upload_excludes


(
  # Wait for lock for 5 seconds
  flock -x -w 5 200 || exit 1

# Move older local files to the cloud
rclone move /mnt/user/local/google_vfs/movies gdrive_media_vfs:movies "${ARGS[@]}"
rclone move /mnt/user/local/google_vfs/tv gdrive_media_vfs:tv "${ARGS[@]}"

) 200> ${LOCKFILE}

This script is just an example and should work for a standard install from here. But if you wanted to use extra drives, say for ebooks: make a shared drive called ebooks, configure it in rclone, and add the drive to your mount command. Then, in the script posted here, you just add another rclone move command:

rclone move /mnt/user/local/google_vfs/ebooks gdrive_ebooks:ebooks "${ARGS[@]}"

This is just an example, but it seems to work much better. Also, you don't really need so many excludes, as the files have already been processed; I added them anyway, but I'm sure they can be removed.

 

 

Edited by Porkie


So I modified the scripts for myself and a few of my friends, but I wanted to see if anyone can spot anything wrong with them or ways to improve them. I also want to say thanks a lot for these scripts - they've been so useful. Also, I am a novice when it comes to coding, so keep that in mind :P Thanks

 

Github Link



Perhaps mention what those variables are if they need to be used / changed?

There is also an rclone docker from hotio available that I've been meaning to try. It may make scripting easier.

1 hour ago, Spladge said:

Perhaps mention what those variables are if they need to be used / changed?

There is also an rclone docker from hotio available that I am meaning to try. May make scripting easier.

I explain what the variables do at the top of each script. As for the hotio docker, it has a different purpose than the scripts.


Yes - I meant including the variables on the wiki / instructions. The docker would replace the rclone app - what I meant by that is you could supply a pre-filled rclone.conf file (with variables) to match up to the script you have here. Just another idea in terms of automating. Not suggesting there is anything wrong.

6 hours ago, senpaibox said:

So I modified the scripts for myself and a few of my friends, but I wanted to see if anyone can spot anything wrong with them or ways to improve them. I also want to say thanks a lot for these scripts - they've been so useful. Also, I am a novice when it comes to coding, so keep that in mind :P Thanks

 

Github Link

Thanks for this.  I started this thread not just to share, but also to find ways to improve my own scripts.  I'm going to incorporate how you've created the variables (I'll rename some, as I don't think you've used the best names) and a few other things today.  I'm hoping you'll then be able to submit pull requests for any improvements on your end in the future.


Update:

 

Thanks to inspiration from @senpaibox, I've made a major revision this evening to the scripts on github:

  • They are now much easier to set up through the use of configurable variables
  • Much better messaging
  • The upload script has better --bwlimit options, allowing daily schedules and faster or slower uploads without worrying about daily quotas (rclone 1.51 upwards needed; see the sketch after this list). E.g. you can now do a 30MB/s upload job overnight for 7 hours to use up your quota, rather than a slow 10MB/s trickle over the day. Or, schedule a slow trickle over the day and a max-speed upload overnight
  • Option to bind individual rclone mounts and uploads to different IPs. I use this to put my mount traffic in a high-priority queue on pfsense, and my uploads in a low-priority one
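For reference, rclone's --bwlimit accepts a timetable, so a single flag can encode both a daytime trickle and a fast overnight window. A minimal sketch (the times and speeds are placeholders, not the script's defaults):

# 1MB/s trickle from 07:00, opening up to 30MB/s from 23:00
rclone move /mnt/user/local/google_vfs gdrive_media_vfs: \
    --bwlimit "07:00,1M 23:00,30M"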

If you haven't switched from unionfs to mergerfs, I really recommend that you do now; the layout of the new scripts should make it easier to do so.

 

 

These are now the scripts I'm using myself (except for my upload script, which is modified to rotate remotes so I can upload more than 750GB/day), so it'll be easier for me to maintain.


I've also updated the first two posts in this thread to align with the new scripts.

 

Any teething problems, please let me know.

Edited by DZMM


I'm getting an rclone mount error when running the rclone_mount.sh script with the bind option set to "N"

 

Script location: /tmp/user.scripts/tmpScripts/rclone_mount/script
Note that closing this window will abort the execution of this script
04.02.2020 17:20:08 INFO: *** Starting mount of remote cryptsend
04.02.2020 17:20:08 INFO: Checking if this script is already running.
04.02.2020 17:20:08 INFO: Script not running - proceeding.
04.02.2020 17:20:08 INFO: Mount not running. Will now mount cryptsend remote.
04.02.2020 17:20:08 INFO: Recreating mountcheck file for cryptsend remote.
2020/02/04 17:20:08 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "cryptsend:" "-vv" "--no-traverse"]
2020/02/04 17:20:08 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/02/04 17:20:09 DEBUG : mountcheck: Modification times differ by -2m20.521233913s: 2020-02-04 17:20:08.241233913 -0600 CST, 2020-02-04 23:17:47.72 +0000 UTC
2020/02/04 17:20:10 INFO : mountcheck: Copied (replaced existing)
2020/02/04 17:20:10 INFO :
Transferred: 32 / 32 Bytes, 100%, 17 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1.8s

2020/02/04 17:20:10 DEBUG : 7 go routines active
2020/02/04 17:20:10 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "cryptsend:" "-vv" "--no-traverse"]
04.02.2020 17:20:10 INFO: Completed creation of mountcheck file for cryptsend remote.
04.02.2020 17:20:10 INFO: *** Creating mount for remote cryptsend
04.02.2020 17:20:10 INFO: sleeping for 5 seconds
Usage:
rclone mount remote:path /path/to/mountpoint [flags]

Flags:
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--daemon-timeout duration Time limit for rclone to respond to kernel (not supported by all OSes).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem.
-h, --help help for mount
--max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem.
--umask int Override the permission bits set by the filesystem.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache. (default off)
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-case-insensitive If a file name not found, find a case insensitive match.
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.

Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
Command mount needs 2 arguments minimum: you provided 1 non flag arguments: ["/mnt/user/mount_rclone/cryptsend"]
04.02.2020 17:20:15 INFO: continuing...
04.02.2020 17:20:15 CRITICAL: cryptsend mount failed - please check for problems.
 

On 1/25/2020 at 9:15 PM, Kaizac said:

2 PSAs:

 

1. If you want to use more local folders in your union/merge folder that are RO, you can use the following merge command and Sonarr will work - no more access denied errors. Use either mount_unionfs or mount_mergerfs depending on what you use.


mergerfs /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

2. If you have issues with the mount script not working at array start because the docker daemon is still starting, just put your mount script on custom settings and run it every minute (* * * * *). It will then run after array start and will work.

 

@nuhll both these fixes should be interesting for you.

Thanks for the tip. I don't really need multiple local folders; I just still had it in there because I wanted to stay as close as possible to the tutorial.

 

 

19 minutes ago, trajpar said:

I'm getting an rclone mount error when running the rclone_mount.sh script with the bind option set to "N"

Just set it to Y and use a local IP similar to your server's and it will work. I tried with N and got the same error.

55 minutes ago, trajpar said:

I'm getting an rclone mount error when running the rclone_mount.sh script with the bind option set to "N"

Sorry about that - I didn't test that option.

 

Change:

$RcloneMountLocation &

to:

$RcloneRemoteName: $RcloneMountLocation &

I've fixed it on github.
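In context, the corrected mount call passes both the remote name and the mount point. A minimal sketch, assuming the variable names from the script (the real script passes many more flags than shown here):

rclone mount --allow-other $RcloneRemoteName: $RcloneMountLocation &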

 

@Porkie apologies as well

Edited by DZMM

1 hour ago, DZMM said:

Sorry about that - didn't test that option.

 

Change:


$RcloneMountLocation &

to:


$RcloneRemoteName: $RcloneMountLocation &

I've fixed on github.

 

@Porkie apologies as well

Thanks! It is working!


Update: rather than getting creative with making google accounts to create the additional remotes, you can create a service account using this guide and then use as many of the credentials as you need to create additional remotes - rclone guide here

 

I've just made some more updates to the script that will be of interest to any users who have an upload speed > 70Mbps and want to upload more than the 750GB/day limit set by Google (per remote and per user per team drive), or who just want to upload without a --bwlimit and not get locked out for 24 hours.

 

The new script now allows theoretical uploads of nearly 11TB/day with a Gbps connection.  I say theoretical as, with my Gbps connection, I got max upload speeds to Google of around 700-800Mbps, giving a daily potential of around 8TB, but I had other things going on.  I probably could have gone faster, as I did some tdrive->tdrive transfers last month and rclone was reporting 1.7Gbps.

 

I hadn't shared how I did this before as my script was quite clunky, although a couple of us got it working. I've now managed to make it easier for anyone else to set up in the new scripts.

 

I also didn't share because my old script only worked if you had less than 750GB/day in the upload queue; otherwise, the script would get stuck for up to 24 hours.  Now, thanks to the --drive-stop-on-upload-limit flag added in rclone 1.51, the behaviour is much better: if the upload run hits the 750GB/day quota, it now stops rather than hammering away at google for up to 24 hours.  My script takes advantage of this and uses a different account for the next run, i.e. in 5 mins or whatever cron schedule you've set.

 

Setup should now take a maximum of 30-60 mins (stage 3 below) if you need the full 14-15 accounts for a 1Gbps upload.  You could just dabble with a few and then add more when needed, e.g. 1 extra account allows 1.5TB/day, which should be enough for most users.

 

How It Works

 

1. Your remote needs to mount a team drive, NOT a normal gdrive folder.  Create one if you don't have one

 

If you haven't done this yet, creating a team drive is easy, and moving the files from your gdrive-->tdrive will be mega fast as you can do it server side using server_side_across_configs = true in your remote settings and this new updated script - just follow these instructions to do it quickly:

 

 

 

2. Share your new team drive with other UNIQUE google accounts

 

Google's 750GB/day quota is not only per remote, but also per user per team drive, i.e. if you have 2 people sharing a team drive, they can each upload 750GB = 1.5TB/day; 4 users = 3TB/day, and so on.

 

So, to max out your upload you just need to decide how many users you need accessing the team drive based on how fast your connection is, how much you might upload daily and how long your upload job is scheduled for.  E.g. for a 1Gbps connection:

 

- 24x7 upload: 14-15 users (1Gbps = 125MB/s; 125MB/s x 86,400s/day = 10.8TB; 10.8TB / 0.75TB = 14.4) = 14-15 extra users and remotes

- Uploading for 8 hours overnight:  5 users (3.6TB) = 5 extra users and remotes

- Script running every 5 mins with no --bwlimit: as many accounts/remotes as needed to cover however much data you download

 

UPDATE: I advise NOT using your existing mounted remote to upload this way, to avoid it getting locked out.  Use your existing remote just to mount.

 

If you want to add 14-15 google accounts with access to the teamdrive, you might have to get a bit creative with finding accounts to invite.  I had another google apps domain that helped, where I gave those users access, plus I had a few gmail.com accounts I could use as well.

 

3. Create the extra remotes and corresponding encrypted remotes

 

Because each of the accounts in #2 above has access to the new teamdrive, they can each upload an extra 750GB/day.  To tap into that, create rclone remotes as usual - BUT for each remote's client_id and client_secret, CREATE AND USE a DIFFERENT google account from #2.  This is because each user can only upload 750GB/day, regardless of which remote did the uploading.

 

For each of your new remotes, use the SAME TEAMDRIVE and the same CRYPT LOCATION, i.e. if your main config looks like this:

 

[gdrive]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":Google_Generated"}
server_side_across_configs = true

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

then your first new remote for fast uploading should look like this:

[gdrive_counter1]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":Google_Generated"}

[gdrive_counter1_vfs]
type = crypt
remote = gdrive_counter1:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

gdrive_counter1:

- Recommended (so you don't lose track!): make sure you give each unencrypted remote the same name before the number (gdrive_counter)

- use a unique CLIENT_ID and SECRET

- make sure each remote is using the same TEAM DRIVE

- when creating the token using rclone config, remember to use the google account that matches the client_id and client_secret

 

gdrive_counter1_vfs:

- IMPORTANT:  Each encrypted remote HAS TO HAVE the same characters before the number (gdrive_counter) OR THE SCRIPT WON'T WORK

- IMPORTANT:  Each encrypted remote HAS TO HAVE the same characters after the number (_vfs) OR THE SCRIPT WON'T WORK

- IMPORTANT: remote needs to be :crypt to ensure files go in the same place

- IMPORTANT: PASSWORD1 and PASSWORD2 (i.e. what's entered in rclone config not the scrambled versions) need to be the same as used for gdrive_media_vfs

 

That's it. 

 

Once finished, your rclone config should look something like this:

 

[gdrive]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":Google_Generated"}
server_side_across_configs = true

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

[gdrive_counter1]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":Google_Generated"}

[gdrive_counter1_vfs]
type = crypt
remote = gdrive_counter1:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

[gdrive_counter2]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":Google_Generated"}

[gdrive_counter2_vfs]
type = crypt
remote = gdrive_counter2:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2
.
.
.
.
.
.
.
.

[gdrive_counter15]
type = drive
client_id = UNIQUE CLIENT_ID
client_secret = MATCHING_UNIQUE_SECRET
scope = drive
team_drive = SAME TEAM DRIVE
token = {"access_token":Google_Generated"}

[gdrive_counter15_vfs]
type = crypt
remote = gdrive_counter15:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

4. Enter Values Into Script

 

Once complete, then just fill in this section in the new upload script:

# Use Multiple upload remotes for multiple quotas
UseMultipleUploadRemotes="Y" # Y/N. Choose whether you want to rotate multiple upload remotes for increased quota (750GB x number of remotes)
RemoteNumber="15" # Integer number of remotes to use.  
RcloneUploadRemoteStart="gdrive_counter" # Enter characters before counter in your remote names ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs,gdrive_counter16_vfs enter 'gdrive_counter'
RcloneUploadRemoteEnd="_vfs" # Enter characters after counter ## i.e. for gdrive_counter1_vfs, gdrive_counter2_vfs, ...gdrive_counter15_vfs,gdrive_counter16_vfs enter '_vfs'
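For anyone curious what the rotation looks like under the hood, here's a minimal illustrative sketch (the counter file path and the flag set are assumptions for illustration, not the actual github script):

#!/bin/bash
# Illustrative remote rotation - pick the next remote each run, wrapping round
RemoteNumber=15
RcloneUploadRemoteStart="gdrive_counter"
RcloneUploadRemoteEnd="_vfs"
CounterFile="/mnt/user/appdata/other/rclone/upload_counter"  # hypothetical path

Count=$(cat "$CounterFile" 2>/dev/null || echo 0)   # last-used remote, 0 on first run
Count=$(( Count % RemoteNumber + 1 ))               # advance, wrap back to 1 after 15
Remote="${RcloneUploadRemoteStart}${Count}${RcloneUploadRemoteEnd}"

# --drive-stop-on-upload-limit (rclone 1.51+) stops cleanly at the 750GB quota
rclone move /mnt/user/local/google_vfs "${Remote}:" \
    --min-age 30m --drive-stop-on-upload-limit -v

echo "$Count" > "$CounterFile"                      # remember for the next run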

 

Edited by DZMM

58 minutes ago, DZMM said:

1. Your remote needs to mount a team drive NOT a normal gdrive folder.  Create One if you don't have one

 

If you haven't done this yet, creating a team drive is easy and moving the files from your gdrive-->tdrive will be mega fast as you can do it server side using server_side_across_configs = true in your remote settings and this new updated script

To move files between your gdrive and new teamdrive, the easiest way is to:

  1. Stop your current rclone mount + plex, radarr etc - any dockers that need to access the mount
  2. Log into gdrive with your 'master' account i.e. one that can access both the gdrive folder and the teamdrive
  3. Click on 'crypt' in the gdrive folder and use the move command to move the folder to the teamdrive (a CLI alternative is sketched after this list)
  4. The files will then get moved fairly quickly
  5. Adjust your rclone mount and upload scripts to use the new tdrive based remotes
  6. It's best to wait until the move is finished before remounting, because rclone might not see any server side changes made after mounting for a while
  7. Once the old gdrive folder is empty, start the mount with the new tdrive remote
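If you'd rather do step 3 from the command line, a move like this should also run server side, assuming both remotes share the same crypt passwords, 'tdrive' is the name of your new teamdrive remote, and server_side_across_configs = true is set (or the flag below is passed):

# server-side move of the encrypted 'crypt' folder from gdrive to the teamdrive
rclone move gdrive:crypt tdrive:crypt --drive-server-side-across-configs -P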
Edited by DZMM

25 minutes ago, DZMM said:

2. Share your new team drive with other UNIQUE google accounts

 

Google's 750GB/day quota is not only per remote, but also per user per team drive, i.e. if you have 2 people sharing a team drive, they can each upload 750GB = 1.5TB/day; 4 users = 3TB/day, and so on.

I forgot to add: because you can create multiple teamdrives, this is a good way to give a mate an unlimited 'gdrive' account, i.e. create another teamdrive and share it with them ;-)


I just wanna say thanks @DZMM for the new versions of the scripts; combined with the multiple upload accounts, they're making my life so much easier :)

 

Just a little tidbit: if you use the multiple upload feature and have more than one "_" in your remote name, like I did, change the cut command from field 3 to field 4

from: find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' | cut -d"_" -f3

to: find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' | cut -d"_" -f4

If you don't do the above, it will not rotate the accounts :)
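To see why: cut splits the whole path on "_", so every underscore in the remote name shifts the counter value one field to the right. An illustrative check (the path is made up):

echo "/mnt/user/appdata/other/rclone/name_media_vfs/counter_7" | cut -d"_" -f4
# prints 7; with a remote named name_vfs the same value would be field 3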

 

