Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

Not media related, but just sharing how I back up to gdrive my local /mnt/user/backup share, where I keep my VM backups, CA appdata backups, CloudBerry backups of important files from other shares, etc.

rclone sync /mnt/user/backup gdrive_media_vfs:backup --backup-dir gdrive_media_vfs:backup_deleted -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 10000k --tpslimit 3 --min-age 30m

rclone delete --min-age 90d gdrive_media_vfs:backup_deleted --bwlimit 10000k

rclone sync keeps a copy of my files in the cloud. Any files deleted or moved locally that have already been synced are moved to the backup_deleted directory on gdrive:

--backup-dir gdrive_media_vfs:backup_deleted

where they are deleted after 90 days:

rclone delete --min-age 90d gdrive_media_vfs:backup_deleted --bwlimit 10000k
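Wrapped up, the two commands can live in one scheduled script. A minimal sketch, assuming the remote names and paths from the post (with the exclude list trimmed for brevity); the `RCLONE` variable is only there so the functions can be dry-run with `RCLONE=echo`:

```shell
#!/bin/bash
# Sketch of a daily backup job built from the two commands above.
# Remote names/paths are the ones from the post; adjust to taste.
# Set RCLONE=echo before calling the functions for a dry run.
RCLONE="${RCLONE:-rclone}"
SRC="/mnt/user/backup"
DST="gdrive_media_vfs:backup"
TRASH="gdrive_media_vfs:backup_deleted"

backup_sync() {
    # Mirror the share; files deleted or changed locally move to TRASH
    "$RCLONE" sync "$SRC" "$DST" \
        --backup-dir "$TRASH" \
        --min-age 30m --bwlimit 10000k --tpslimit 3 \
        --checkers 3 --transfers 3 --fast-list \
        --exclude '.unionfs/**' --exclude '*fuse_hidden*'
}

backup_prune() {
    # Empty the remote "recycle bin" of anything older than 90 days
    "$RCLONE" delete --min-age 90d "$TRASH" --bwlimit 10000k
}

# backup_sync && backup_prune   # e.g. nightly from User Scripts
```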

 

Edited by DZMM
Link to comment

Hi all i need help please

 

I set up everything on my Unraid server, but Plex is very slow to start playback, and it also takes a long time to add new media (scanning).

 

I just back up my videos manually and scan them with Plex afterwards, but Plex scans them very slowly.

 

I'm using the gdrive + crypt option.

 

my mount script:

 

mkdir -p "$mntpoint"
rclone mount --max-read-ahead 1024k --allow-other "$remoteshare" "$mntpoint" &

 

 

thx

Link to comment

I also wonder why files on gdrive take around half a minute to start. The mysterious thing is that when I click the movie, it doesn't start downloading right away (I can see that); it waits about 30 seconds, then starts downloading, and then the movie starts.

 

I'm using all the standard settings apart from a lower chunk size and buffer size.


It's not a big problem, but I wonder why it takes so long to start a movie when I only have 3 files on gdrive. ;)

Edited by nuhll
Link to comment
4 hours ago, francrouge said:

Hi all i need help please

 

I set up everything on my Unraid server, but Plex is very slow to start playback, and it also takes a long time to add new media (scanning).

 

I just back up my videos manually and scan them with Plex afterwards, but Plex scans them very slowly.

 

I'm using the gdrive + crypt option.

 

my mount script:

 

mkdir -p "$mntpoint"
rclone mount --max-read-ahead 1024k --allow-other "$remoteshare" "$mntpoint" &

 

 

thx

Have a read of the first couple of posts and check my scripts on GitHub, which are fairly up to date (they just need the teamdrive bits tweaked). If you're still stuck, there are a few people in this thread who will help. You need to do a vfs mount for streaming.
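For anyone skimming, the shape of a streaming-friendly vfs mount looks something like this. A sketch only: the remote name and mount point are assumptions, and the flags mirror the vfs ones used later in this thread. The command is built as an array so it can be inspected before running:

```shell
#!/bin/bash
# Sketch of a vfs mount for streaming. Remote name and mount point
# are assumptions; flags mirror the ones used later in the thread.
mntpoint="/mnt/user/mount_rclone/google_vfs"

mount_cmd=(rclone mount
    --allow-other
    --buffer-size 256M
    --dir-cache-time 72h
    --vfs-read-chunk-size 128M
    --vfs-read-chunk-size-limit off
    gdrive_media_vfs: "$mntpoint")

# To actually mount (in the background):
# mkdir -p "$mntpoint" && "${mount_cmd[@]}" &
```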

Link to comment
6 hours ago, DZMM said:

Have a read of the first couple of posts and check my scripts on GitHub, which are fairly up to date (they just need the teamdrive bits tweaked). If you're still stuck, there are a few people in this thread who will help. You need to do a vfs mount for streaming.

Hi, I followed your script, but on startup I got an error:

 

18.12.2018 20:21:29 INFO: mounting rclone vfs.
2018/12/18 20:21:29 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use
18.12.2018 20:21:34 CRITICAL: rclone gdrive vfs mount failed - please check for problems.
Script Finished Tue, 18 Dec 2018 20:21:34 -0500

 

 

I erased my config and tried with a new one, but it's still the same thing.

 

Any ideas?

 

thx

Link to comment
7 hours ago, francrouge said:

Hi, I followed your script, but on startup I got an error:

 

18.12.2018 20:21:29 INFO: mounting rclone vfs.
2018/12/18 20:21:29 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use
18.12.2018 20:21:34 CRITICAL: rclone gdrive vfs mount failed - please check for problems.
Script Finished Tue, 18 Dec 2018 20:21:34 -0500

 

 

I erased my config and tried with a new one, but it's still the same thing.

 

Any ideas?

 

thx

Did you create the file "mountcheck" on your Gdrive mount?

Link to comment
On 12/16/2018 at 11:34 PM, DZMM said:

Just create another encrypted remote for Bazarr with a different client_ID pointing to the same gdrive/tdrive, e.g.:

 


[gdrive_bazarr]
type = drive
client_id = Diff ID
client_secret = Diff secret
scope = drive
root_folder_id = 
service_account_file = 
token = {should be able to use the same token, or create a new one if pointed to a teamdrive}

[gdrive_bazarr_vfs]
type = crypt
remote = gdrive_bazarr:crypt
filename_encryption = standard
directory_name_encryption = true
password = same password
password2 = same password

 

One problem I'm encountering is that the multiple upload scripts are using a fair bit of memory, so I'm investigating how to reduce the memory usage by removing things like --fast-list from the upload script. Not a biggie, as I can fix it.

Currently running 6 APIs/scripts for uploading (1 API/script per disk), and 1 API dedicated to streaming (only Emby and Plex use this API). Other dockers are set to the separate Docker API. I had an initial error in my unionfs command, so my streaming API was being used instead of my Docker API. Since I fixed that, I see the API hits being distributed nicely. So this should prevent any further API bans and still allow me to run Bazarr 24/7.

 

With the upload scripts I had high memory usage as well and was getting rate-banned at the end of the day. So I think it might have been uploading a bit too much on an 8500k bwlimit; it could also have been too many checkers going through all the files. I changed to the following command last night, and I'm currently sitting very low in memory usage with 6 uploads of 3 transfers each.

 

Quote

-vv --buffer-size 128M --drive-chunk-size 32M --checkers 3 --fast-list --transfers 3 --delete-empty-src-dirs --bwlimit 8000k --tpslimit 3 --min-age 30m

 

Link to comment
30 minutes ago, Kaizac said:

Did you create the file "mountcheck" on your Gdrive mount?

No, I missed that part, I think.

 

But can you explain a bit what I need to do on the gdrive? I don't really understand this part. (I just read it.)

 

 

Edit:

 

I created a file on my drive with the commands inside:

 

touch mountcheck


rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse
rclone copy mountcheck tdrive_media_vfs: -vv --no-traverse

 

 


 

Thx

Edited by francrouge
edit something
Link to comment

@francrouge have you got it working now?

 

@DZMM I've been fully migrating to the Team Drive over the last few days. Not sure if you've done so already? The mistake I made was populating the Team Drive first. When you then want to move files through the Google Drive WebUI, it creates duplicate folders; it doesn't merge them somehow. When I tried to move mount_rclone/Gdrive to mount_rclone/Tdrive with Krusader, it started a normal transfer. So I think they're seen as separate entities, and the move will count towards your quota. Maybe your experience is different.

 

So what I did was create a folder "Transfer" on the Tdrive (through Krusader/Windows) and move the files from the Gdrive to Tdrive/Transfer through the WebUI. This starts the move in the background.

Then I moved the folders from the Transfer folder to the Tdrive itself with Krusader. This counts as a server-side move, doesn't count towards your quota, and is fast. However, I noticed that after 2 days it was still transferring files to my Transfer folder, even though the WebUI made it look finished. So that's something to look out for: the background process (whose progress you can't see) takes a while to run.

Link to comment
1 hour ago, Kaizac said:

With the upload scripts I had high memory usage as well and was getting rate-banned at the end of the day. So I think it might have been uploading a bit too much on an 8500k bwlimit; it could also have been too many checkers going through all the files. I changed to the following command last night, and I'm currently sitting very low in memory usage with 6 uploads of 3 transfers each.

 

 

You encountered the same problem as me, which I've just fixed. The info I read on Team Drive limits was wrong: it's 750GB/day per team drive, not just per user, as well as 750GB/day per user.

 

https://support.google.com/a/answer/7338880#

 

What I've done to fix this is create 3 team drives and then spread my media folders across them e.g.:

  • gdrive: tv shows
  • td1: movies
  • td2: movies_uhd
  • etc etc

 

Edit: still with a unique token/client_id per team drive.

 

The reason I've separated media types across team drives is to make it easier to move files within Google, so I don't have to download and re-upload or end up with duplicate folders. I noticed yesterday that if I moved tv_show1/season1/episode_6 from td1 to gdrive, where there's already a tv_show1/season1 folder, it would create a new tv_show1/season1 folder containing tv_show1/season1/episode_6, rather than adding episode_6 to the existing folder. This was causing havoc with the mount, so by splitting my media folders I reduce the number of times I have to move files between team drives.
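To go with that layout, the upload side can route each media folder to its own remote. A hedged sketch: the remote names here are hypothetical (each would have its own client_id/token), and `RCLONE=echo` gives a dry run:

```shell
#!/bin/bash
# Sketch: one upload route per team drive, so no single drive hits
# the 750GB/day cap. Remote names are hypothetical; each would have
# its own client_id/token. Set RCLONE=echo for a dry run.
RCLONE="${RCLONE:-rclone}"
declare -A route=(
    [tv]="gdrive_media_vfs:tv"
    [movies]="td1_media_vfs:movies"
    [movies_uhd]="td2_media_vfs:movies_uhd"
)

upload_all() {
    local dir
    for dir in "${!route[@]}"; do
        "$RCLONE" move "/mnt/user/rclone_upload/google_vfs/$dir" \
            "${route[$dir]}" --min-age 30m --bwlimit 8000k --tpslimit 3
    done
}

# upload_all   # e.g. hourly from User Scripts
```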

Edited by DZMM
Link to comment
23 minutes ago, Kaizac said:

 

@DZMM I've been fully migrating to the Team Drive over the last few days. Not sure if you've done so already? The mistake I made was populating the Team Drive first. When you then want to move files through the Google Drive WebUI, it creates duplicate folders; it doesn't merge them somehow. When I tried to move mount_rclone/Gdrive to mount_rclone/Tdrive with Krusader, it started a normal transfer. So I think they're seen as separate entities, and the move will count towards your quota. Maybe your experience is different.

lol, see the post I just did - you need to use multiple tdrives; the quota is 750GB/day per team drive. Not a big issue, as each takes 5 mins to set up.

 

23 minutes ago, Kaizac said:

So what I did was create a folder "Transfer" on the Tdrive (through Krusader/Windows) and move the files from the Gdrive to Tdrive/Transfer through the WebUI. This starts the move in the background.

Then I moved the folders from the Transfer folder to the Tdrive itself with Krusader. This counts as a server-side move, doesn't count towards your quota, and is fast. However, I noticed that after 2 days it was still transferring files to my Transfer folder, even though the WebUI made it look finished. So that's something to look out for: the background process (whose progress you can't see) takes a while to run.

I'm doing my td-->td transfers as much as possible within gdrive. What I've noticed so far in mc is that if you are overwriting files it takes as long as downloading, but if you move, it's pretty much instantaneous - so just make sure the destination directory is empty.

Edited by DZMM
Link to comment
7 minutes ago, Kaizac said:

@francrouge have you got it working now?

 

@DZMM I've been fully migrating to the Team Drive over the last few days. Not sure if you've done so already? The mistake I made was populating the Team Drive first. When you then want to move files through the Google Drive WebUI, it creates duplicate folders; it doesn't merge them somehow. When I tried to move mount_rclone/Gdrive to mount_rclone/Tdrive with Krusader, it started a normal transfer. So I think they're seen as separate entities, and the move will count towards your quota. Maybe your experience is different.

 

So what I did was create a folder "Transfer" on the Tdrive (through Krusader/Windows) and move the files from the Gdrive to Tdrive/Transfer through the WebUI. This starts the move in the background.

Then I moved the folders from the Transfer folder to the Tdrive itself with Krusader. This counts as a server-side move, doesn't count towards your quota, and is fast. However, I noticed that after 2 days it was still transferring files to my Transfer folder, even though the WebUI made it look finished. So that's something to look out for: the background process (whose progress you can't see) takes a while to run.

Hi, nope.

I always get this:

 

 

19.12.2018 05:32:57 INFO: mounting rclone vfs.
2018/12/19 05:32:57 NOTICE: Serving remote control on http://127.0.0.1:5572/
19.12.2018 05:33:07 CRITICAL: rclone gdrive vfs mount failed - please check for problems.
Script Finished Wed, 19 Dec 2018 05:33:07 -0500

 

 

I'm not using a teamdrive, so I just copied the mount code I thought I needed.

 

Here is my config:

 

[gdrive]
type = drive
scope = drive
token = {"access_token":

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

My mount script:

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End Check if script already running  ##########

#######  Start rclone mounts  ##########

# create directories for rclone mount and unionfs mount

mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_rclone/tdrive_rclone1_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs
mkdir -p /mnt/user/rclone_upload/google_vfs

#######  Start rclone gdrive mount  ##########

# check if gdrive mount already created

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

    rclone mount --rc --allow-other --buffer-size 512M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

    # check if mount successful - slight pause to give mount time to finalise
    sleep 5

    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End rclone gdrive mount  ##########

thx

Link to comment
12 minutes ago, DZMM said:

lol see the post I just did - you need to use multiple tdrives the quota is 750GB/day per team drive.  Not a big issue as each takes 5 mins to setup.

 

Hmmm I'm not sure if it's enforced like that. I was able to upload 4TB yesterday to my Tdrive. Will see what happens today.

Link to comment
2 minutes ago, Kaizac said:

Hmmm I'm not sure if it's enforced like that. I was able to upload 4TB yesterday to my Tdrive. Will see what happens today.

It worked for me as well one day and then it stopped - I think maybe the checks on teamdrives aren't as stringent.  I'm going to stick with my new distributed layout as it makes it easier for my cleanup script to work and for me to move files around.  E.g. here's my new clean-up script that looks at each media type in each teamdrive:

 

echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."

# gdrive - movies

find /mnt/user/mount_unionfs/google_vfs/.unionfs/movies -name '*_HIDDEN~' | while read -r line; do
    oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs/movies}
    newPath=/mnt/user/mount_rclone/google_vfs/movies${oldPath%_HIDDEN~}
    rm "$newPath"
    rm "$line"
done

# tdrive_rclone1 - tv

find /mnt/user/mount_unionfs/google_vfs/.unionfs/tv -name '*_HIDDEN~' | while read -r line; do
    oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs/tv}
    newPath=/mnt/user/mount_rclone/tdrive_rclone1_vfs/tv${oldPath%_HIDDEN~}
    rm "$newPath"
    rm "$line"
done

find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete
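The two parameter expansions doing the heavy lifting can be sanity-checked in isolation (the file name below is hypothetical):

```shell
# Sanity check of the path rewriting above: strip the unionfs prefix,
# then strip the _HIDDEN~ suffix, to get the matching cloud-side path.
line="/mnt/user/mount_unionfs/google_vfs/.unionfs/movies/Film (2018)/film.mkv_HIDDEN~"
oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs/movies}
newPath=/mnt/user/mount_rclone/google_vfs/movies${oldPath%_HIDDEN~}
echo "$newPath"   # /mnt/user/mount_rclone/google_vfs/movies/Film (2018)/film.mkv
```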

 

Link to comment
Did you create the mountcheck file?
 
touch mountcheck
rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse

 

Yes, but I just realised that I created the file on my Windows PC and transferred it directly to my gdrive... so I don't think that's OK, lol.

I'm going to redo that

Sent from my Pixel 2 XL using Tapatalk

Link to comment

@francrouge also, rc keeps running after your first mount attempt, so if you try a second mount attempt without rebooting, it will always fail unless you remove --rc, i.e. run:

 

rclone mount --allow-other --buffer-size 512M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

I'm trying to find a way to stop rc in the unmount script, to stop this problem from tripping people up.
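One possible shape for that (a sketch, untested against a live mount: `rclone rc core/quit` asks the rclone process serving rc to exit, and the paths are the ones from the mount script):

```shell
#!/bin/bash
# Sketch of an unmount helper that also shuts down the rc server, so
# a later remount with --rc won't hit "address already in use".
unmount_vfs() {
    local mnt="${1:-/mnt/user/mount_rclone/google_vfs}"
    # Ask the running rclone (serving rc on 127.0.0.1:5572 by default)
    # to exit cleanly; ignore the error if nothing is listening.
    rclone rc core/quit 2>/dev/null || true
    # Make sure the fuse mount itself is gone (lazy unmount)
    fusermount -uz "$mnt" 2>/dev/null || true
    # Clear the "already running" marker used by the mount script
    rm -f /mnt/user/appdata/other/rclone/rclone_mount_running
}

# unmount_vfs   # call from the User Scripts unmount script
```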

Link to comment
48 minutes ago, DZMM said:

Did you create the mountcheck file?

 


touch mountcheck

rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse

 

Hi, so I managed to run the commands.

 

In the logs everything looks good.

 

But my concern is this:

 

I just want to be able to drag movies etc. into my gdrive and then play them with Plex.

 

So how does your script work?

 

Because I've been reading it and I don't quite understand the unionfs part.

 

I can create my folder there, but can I dump my files there?

 

Because nothing is showing up in gdrive right now.

 

thx  a lot

Link to comment
