Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

6 minutes ago, nuhll said:

There is no error, it's working... what is the exact problem?

The problem is that when I turn on my array, after the mountcheck it doesn't work. The unionfs mount is nowhere to be seen.

 

I have to do the procedure I wrote just before and then everything works.

 

When you turn on your server, does everything work automatically after mountcheck? Or do you have to do the same procedure as me?

 

 

Edited by neow
Link to comment

####### Start unionfs mount ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."

16.06.2019 01:16:51 INFO: Check successful, unionfs mounted.
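For context, the check above is only the first branch of the mount script. A minimal sketch of the full pattern it belongs to (the else branch and the unionfs paths are assumptions here, not taken from the snippet):

#!/bin/bash
# Sketch only - not the full script from the guide; paths are assumptions.
if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
else
    # Assumed continuation: build the union of the local upload folder (RW)
    # and the rclone mount (RO), then re-check for the mountcheck file.
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read \
        /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
        /mnt/user/mount_unionfs/google_vfs
    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: unionfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs failed to mount."
    fi
fi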

 

It's working in that script...

 

Where are you looking? Are you SSHed into it?

 

Edited by nuhll
Link to comment
6 minutes ago, nuhll said:

 

 

 

 

It's working in that script...

 

Where are you looking? Are you SSHed into it?

 

At 01:14 the array mounted. The script stops here:

 

16.06.2019 01:14:37 INFO: Exiting script already running. Script Finished Sun, 16 Jun 2019 01:14:37 +0200  Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt 

 

 

After my manipulation, at 01:16, everything works fine:

 

16.06.2019 01:16:51 INFO: Check successful, unionfs mounted.

 

Edited by neow
Link to comment
5 minutes ago, nuhll said:

That's normal, the script gets run every 10 min, and when it's already mounted, there's no need to mount again...

 

Again... how are you seeing "it's not there"?

Krusader.

 

I can confirm I could wait 1 hour; without this manipulation unionfs will never mount.

 

I don't have a cron task for the mount script like many of you. It's just run when the array mounts.
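For reference, the more common setup in this thread is to also run the mount script on a schedule with the User Scripts plugin, so it re-checks and re-mounts if needed. Assuming the 10-minute interval mentioned above, the custom cron entry would simply be:

# User Scripts custom cron schedule (assumed 10-minute interval)
*/10 * * * *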

 

 

Edited by neow
Link to comment

Everything works, except for some reason I am limited to 750GB/day total.

  • Once 1 teamdrive hits the limit, everything else (main gdrive + the other tdrive) errors out too.
  • Same situation: once 1 client ID hits the limit, the rest (6 IDs) are blocked.

@DZMM: Looks like my daily limit is enforced even more rigorously than what you reported back on December 19 last year. But then it seemed to have been lifted for you all of a sudden, based on the December 23 post and our recent PMs. Do you remember making any particular changes? My setup is pretty similar to yours except for having fewer client_IDs, and I have basically been using tdrive exclusively (gdrive only for testing purposes).

Link to comment
3 minutes ago, nuhll said:

I never used Krusader.

 

Just make your share visible via SMB and visit \mount_unionfs\google_vfs

OK, I will try.

 

Just one question: 

 

when you start the array, do the mountcheck and the mount script work fine?

 

 

Link to comment
20 minutes ago, testdasi said:

Everything works, except for some reason I am limited to 750GB/day total.

  • Once 1 teamdrive hits the limit, everything else (main gdrive + the other tdrive) errors out too.
  • Same situation: once 1 client ID hits the limit, the rest (6 IDs) are blocked.

@DZMM: Looks like my daily limit is enforced even more rigorously than what you reported back on December 19 last year. But then it seemed to have been lifted for you all of a sudden, based on the December 23 post and our recent PMs. Do you remember making any particular changes? My setup is pretty similar to yours except for having fewer client_IDs, and I have basically been using tdrive exclusively (gdrive only for testing purposes).

Glad you got it almost right the first time.

 

Re your teamdrive setup, I'm assuming:

 

1. all files are being loaded to the same teamdrive

2. then the teamdrive is shared with x different users with unique email addresses

3. for each user you've created a unique rclone remote, with unique client IDs each time (all loading to same teamdrive) - there's a scripted sketch of this after the example commands below

4. you've then created an encrypted version of each unique remote

5. you've then created unique rclone_move commands for each user, i.e.:

 

rclone move /mnt/user/rclone_upload/google_vfs USER1_remote: ................
rclone move /mnt/user/rclone_upload/google_vfs USER2_remote: ................
rclone move /mnt/user/rclone_upload/google_vfs USER3_remote: ................
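For step 3, if you'd rather script the remote creation than walk through rclone config interactively for every account, something along these lines should work (a sketch only: the IDs, secret and team drive ID are placeholders, the crypt passwords must match your existing crypt remote, and rclone will still take you through the OAuth authorisation for the token while signed in as that user):

# Sketch: create the drive remote and its crypt wrapper for one extra user (placeholders throughout)
rclone config create user2 drive client_id USER2_ID.apps.googleusercontent.com client_secret USER2_SECRET scope drive team_drive SAME_TDRIVE_ID
rclone config create user2_vfs crypt remote user2:crypt filename_encryption standard directory_name_encryption true password "$(rclone obscure SAME_PASSWORD_AS_USER1)" password2 "$(rclone obscure SAME_PASSWORD2_AS_USER1)"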

 

Edited by DZMM
added in email address
Link to comment
18 minutes ago, DZMM said:

Glad you got it almost right the first time.

 

Re your teamdrive setup, I'm assuming:

 

1. all files are being loaded to the same teamdrive

2. then the teamdrive is shared with x different users with unique email addresses

3. for each user you've created a unique rclone remote, with unique client IDs each time (all loading to same teamdrive)

4. you've then created an encrypted version of each unique remote

5. you've then created unique rclone_move commands for each user, i.e.:

 


rclone move /mnt/user/rclone_upload/google_vfs USER1_remote: ................
rclone move /mnt/user/rclone_upload/google_vfs USER2_remote: ................
rclone move /mnt/user/rclone_upload/google_vfs USER3_remote: ................

 

here's what my config looks like:

 

[user1]
type = drive
client_id = id1
client_secret = secret1
scope = drive
team_drive = SAME_TDRIVE
token = token1

[user1_vfs]
type = crypt
remote = user1:crypt
filename_encryption = standard
directory_name_encryption = true
password = pass1
password2 = pass2

[user2]
type = drive
client_id = id2
client_secret = secret2
scope = drive
team_drive = SAME_TDRIVE
token = token2

[user2_vfs]
type = crypt
remote = user2:crypt
filename_encryption = standard
directory_name_encryption = true
password = pass1
password2 = pass2

and my move commands:

 

rclone move /mnt/disks/ud_mx500/rclone_upload/google_vfs user1_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --fast-list

rclone move /mnt/disks/ud_mx500/rclone_upload/google_vfs user2_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --fast-list

 

Link to comment
1 hour ago, DZMM said:

1. all files are being loaded to the same teamdrive

2. then the teamdrive is shared with x different users with unique email addresses

3. for each user you've created a unique rclone remote, with unique client IDs each time (all loading to same teamdrive)

4. you've then created an encrypted version of each unique remote

5. you've then created unique rclone_move commands for each user, i.e.:

1. Yes.

2. You split this into point 2 and point 3, so perhaps I have missed something here. By "shared" do you mean also adding those emails to the Member Access on the Gdrive website, e.g. making those emails Content Manager? Or maybe something else?

3. Yes. What I did was: take each unique email + create a project for each (unique project names too) + add the Google Drive API to the project + create a unique OAUTH client_id and secret for each project API, and use those for a unique remote.

4. Yes (see below for a section of the conf).

5. Yes. I noticed you have --user-agent="unRAID", which I didn't have, so I will try that this morning when my limit is reset.

A question: when your unique client_ids move things to gdrive, does the gdrive website show your activity (click the (i) icon in the upper right corner under the G Suite logo, then click Activity) as "[name of account] created an item" or does it show as "You created an item"? (as in it literally says "You", not the email address or your name). For me it shows as "You" regardless of the upload account, so maybe that's an indication of something being wrong?

 

.rclone.conf

[gdrive]
type = drive
client_id = 829[random stuff].apps.googleusercontent.com
client_secret = [random stuff]
scope = drive
token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = abcdzyz
password2 = 1234987

[tdrive]
type = drive
client_id = 401[random stuff].apps.googleusercontent.com
client_secret = [random stuff]
scope = drive
token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
team_drive = [team_drive ID]

[tdrive_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = abcdzyz
password2 = 1234987

[tdrive_01]
type = drive
client_id = 345[random stuff].apps.googleusercontent.com
client_secret = [random stuff]
scope = drive
token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
team_drive = [team_drive ID]

[tdrive_01_vfs]
type = crypt
remote = tdrive_01:crypt
filename_encryption = standard
directory_name_encryption = true
password = abcdzyz
password2 = 1234987

etc...

 

rclone move command

I have a command for each of 01, 02, 03 and 04, and a folder for each. I make sure each 0x folder holds less than 750GB (about 700GB).

rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 110000k --tpslimit 3 --min-age 30m

 

 

 

Edited by testdasi
Link to comment
8 hours ago, testdasi said:

1. Yes.

2. You split this into point 2 and point 3, so perhaps I have missed something here. By "shared" do you mean also adding those emails to the Member Access on the Gdrive website, e.g. making those emails Content Manager? Or maybe something else?

3. Yes. What I did was: take each unique email + create a project for each (unique project names too) + add the Google Drive API to the project + create a unique OAUTH client_id and secret for each project API, and use those for a unique remote.

4. Yes (see below for a section of the conf).

5. Yes. I noticed you have --user-agent="unRAID", which I didn't have, so I will try that this morning when my limit is reset.

A question: when your unique client_ids move things to gdrive, does the gdrive website show your activity (click the (i) icon in the upper right corner under the G Suite logo, then click Activity) as "[name of account] created an item" or does it show as "You created an item"? (as in it literally says "You", not the email address or your name). For me it shows as "You" regardless of the upload account, so maybe that's an indication of something being wrong?

 

.rclone.conf


[gdrive]
type = drive
client_id = 829[random stuff].apps.googleusercontent.com
client_secret = [random stuff]
scope = drive
token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = abcdzyz
password2 = 1234987

[tdrive]
type = drive
client_id = 401[random stuff].apps.googleusercontent.com
client_secret = [random stuff]
scope = drive
token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
team_drive = [team_drive ID]

[tdrive_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = abcdzyz
password2 = 1234987

[tdrive_01]
type = drive
client_id = 345[random stuff].apps.googleusercontent.com
client_secret = [random stuff]
scope = drive
token = {"access_token":"[random stuff]","token_type":"Bearer","refresh_token":"[random stuff]","expiry":"2019-06-16T01:51:02"}
team_drive = [team_drive ID]

[tdrive_01_vfs]
type = crypt
remote = tdrive_01:crypt
filename_encryption = standard
directory_name_encryption = true
password = abcdzyz
password2 = 1234987

etc...

 

rclone move command

I have a command for each of 01, 02, 03, 04 and a folder for each. I ensure that each 0x folder has less than 750GB (about 700GB).


rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 110000k --tpslimit 3 --min-age 30m

 

 

 

Hmm, it all looks OK. And for each upload command you've got:

 

rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: .........

rclone move /mnt/user/rclone_upload/tdrive_02_vfs/ tdrive_02_vfs: .........

 

Etc etc

Edited by DZMM
Link to comment
58 minutes ago, DZMM said:

Hmm, it all looks OK. And for each upload command you've got:

 

rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: .........

rclone move /mnt/user/rclone_upload/tdrive_02_vfs/ tdrive_02_vfs: .........

 

Etc etc

Hey! I think I figured out what's wrong! When I do rclone config with the authorise link and the sign-in screen comes up in the browser, I clicked the main account, because I couldn't see the team drive if I clicked on the account corresponding to the client_id.

 

So your step 2 looks to be the step I missed. After adding the other accounts as Content Manager, I can now see the team drive when I click on the account corresponding to the client_id. I just did a test transfer for all the accounts and the activity page shows the actions under the corresponding accounts.

 

I just kicked off the upload script and we'll probably know if it works by dinner time!

 

10 hours ago, neow said:

Krusader.

 

Don't use Krusader to check. It somehow doesn't show unionfs correctly for me either.

MC works, Sonarr works, Plex works, Krusader doesn't.

 

Using mc from the console is the most reliable way to check the mount itself. It cuts out any docker-specific problems.
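If you want to rule Krusader (or any docker) out entirely, a couple of quick checks from an SSH session work well (paths as used earlier in the thread):

# List the union mount directly from the console
ls -lah /mnt/user/mount_unionfs/google_vfs
# Confirm the rclone and unionfs FUSE mounts are actually present
mount | grep -E "rclone|unionfs"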

 

Link to comment
1 hour ago, testdasi said:

So your step 2 looks to be the step I missed. After adding the other accounts as Content Manager, I can now see the team drive when I click on the account corresponding to the client_id. I just did a test transfer for all the accounts and the activity page shows the actions under the corresponding accounts.

Yes, that'd be the problem. Within gdrive you need to share the teamdrive with each user's email, and then when you create the remote, use that account to create the token.

 

A few tips:

 

1. Don't use the remote/user account you mount with for uploading, so your mount always works for playback.

2. If you're bulk uploading and you are confident there is no more than 750GB in each of your sub upload folders, I would run your move commands sequentially ONCE A DAY rather than all at the same time, with the bwlimit set at, say, 80% of your max. Running multiple rclone move commands at the same time uses up more memory. You'll still get the same max transfer per day, with less RAM usage (a sketch of this is below).
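A minimal sketch of that sequential approach (folder and remote names follow testdasi's setup above; the bwlimit figure is just an example for "80% of your max"):

#!/bin/bash
# Sketch only: run the per-account moves one after another instead of in parallel.
for n in 01 02 03 04; do
    rclone move /mnt/user/rclone_upload/tdrive_${n}_vfs/ tdrive_${n}_vfs: \
        -vv --drive-chunk-size 512M --checkers 3 --transfers 2 \
        --exclude ".unionfs/**" --exclude "*fuse_hidden*" --exclude "*_HIDDEN" \
        --delete-empty-src-dirs --fast-list --bwlimit 80M --tpslimit 3 --min-age 30m
done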

Link to comment
8 hours ago, DZMM said:

Yes, that'd be the problem. Within gdrive you need to share the teamdrive with each user's email, and then when you create the remote, use that account to create the token.

 

A few tips:

 

1. Don't use the remote/user account you mount with for uploading, so your mount always works for playback.

2. If you're bulk uploading and you are confident there is no more than 750GB in each of your sub upload folders, I would run your move commands sequentially ONCE A DAY rather than all at the same time, with the bwlimit set at, say, 80% of your max. Running multiple rclone move commands at the same time uses up more memory. You'll still get the same max transfer per day, with less RAM usage.

Thank you. It has gone through 800+ GB already without any error, so it looks like step 2 was indeed the step I missed.

 

I'm running a script to sequentially go through 4 subfolders of 700GB each. My current connection means I can only get through about 3 folders/day, so I should not be reaching the limit on any of the accounts. Your control-file logic gate was quite elegant, so I repurposed it to make my script run perpetually unless I delete the control file. :D I just need to "refill" daily and forget about it.
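For anyone curious what that control-file "logic gate" looks like in practice, here is a minimal sketch (the control file location is hypothetical, and the single rclone line stands in for the full move commands shown earlier):

#!/bin/bash
# Sketch: keep uploading for as long as the control file exists; delete it to stop the loop.
CONTROL=/mnt/user/appdata/other/rclone/upload_control
touch "$CONTROL"
while [[ -f "$CONTROL" ]]; do
    rclone move /mnt/user/rclone_upload/tdrive_01_vfs/ tdrive_01_vfs: --transfers 2 --min-age 30m
    sleep 600   # pause between passes so an empty upload folder doesn't make the loop spin
done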

Link to comment
52 minutes ago, testdasi said:

I'm running a script to sequentially go through 4 subfolders of 700GB each. My current connection means I can only get through about 3 folders/day, so I should not be reaching the limit on any of the accounts. Your control-file logic gate was quite elegant, so I repurposed it to make my script run perpetually unless I delete the control file. :D I just need to "refill" daily and forget about it.

It took me a few hours of trial and error to sort the counters. Once you've finished moving your existing content, you should be able to upload from just one folder like me, if your upload and download speeds are the same (i.e. content is shifted just as fast as it's added). You just need enough accounts to ensure no individual account will upload more than 750GB/day and mess up the script for up to 24 hours until the ban lifts.

 

I cap my upload scripts at 70MB/s, so if I uploaded 24/7 I'd do about 6TB/day, so I'd need at a minimum 6000/750 = 8 users... but I use about double that just in case something goes wrong.
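For anyone checking those numbers, a quick sketch using the same round figures:

# 70 MB/s sustained for 24 hours, in GB:
echo $(( 70 * 60 * 60 * 24 / 1000 ))   # 6048, i.e. roughly the 6TB/day quoted above
# accounts needed at 750GB per account per day, using the 6000GB approximation:
echo $(( 6000 / 750 ))                 # 8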

Link to comment

If I change the rclone conf (e.g. point an account to a different Team Drive), how do I make rclone remount?

 

I tried the unmount script, which unmounts the tdrive, but I can still see the rclone process running (i.e. it only unmounts, it doesn't actually kill the rclone processes with the old config). I have been restarting just to be safe, but thought I'd ask in case there's something I have done wrong.

Link to comment
4 hours ago, testdasi said:

If I change the rclone conf (e.g. point an account to a different Team Drive), how do I make rclone remount?

 

I tried the unmount script, which unmounts the tdrive, but I can still see the rclone process running (i.e. it only unmounts, it doesn't actually kill the rclone processes with the old config). I have been restarting just to be safe, but thought I'd ask in case there's something I have done wrong.

Not sure, I've never done that. I assume if you change the rclone config then it will kill any necessary processes.
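For anyone else hitting this, one way to fully stop the old mount and start a fresh one with the new config is sketched below (a rough sketch only: the mount path, remote name and flags are typical of the setups in this thread, and the stock unmount script may already do part of this):

#!/bin/bash
# Rough sketch: kill the old rclone mount completely, then remount with the edited config.
fusermount -uz /mnt/user/mount_rclone/google_vfs   # lazy-unmount the rclone mount
pkill -f "rclone mount"                            # stop any leftover rclone mount processes
sleep 5
rclone mount --allow-other --dir-cache-time 72h --buffer-size 256M \
    --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
    gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &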

Link to comment
3 hours ago, francrouge said:

Hi guys

Is there something I can do? I scanned my whole library again and it seems Google banned or blocked my API, I don't know, so I can't play movies right now.

What should I do?

Sent from my Pixel 2 XL using Tapatalk
 

Post your logs. I've done full library scans/metadata updates and not had problems.

 

The bans only last 24 hours.

Link to comment
3 hours ago, francrouge said:

Hi guys

Is there something I can do? I scanned my whole library again and it seems Google banned or blocked my API, I don't know, so I can't play movies right now.

What should I do?

Sent from my Pixel 2 XL using Tapatalk
 

 

Something must be making a lot of API calls to cause yours to be blocked. You need to check what your various dockers are doing. From previous posts, it looks like Bazarr and Emby/Plex subtitle searches may be the main contributors.


I have run Plex and Sonarr refreshes a few times today already + calculated how much space I'm using (a lot of API calls to count things), and I'm nowhere close to the limit. Even against the per-100-seconds limit of 1,000 calls, I only get to 20% on my worst day. So your dockers must be doing something very drastic to cause an API ban. You might want to give that docker its own client_id (example below).
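To illustrate the "own client_id" idea (a hypothetical example only, modelled on the conf sections earlier in the thread): add a second drive remote with its own client_id and secret, wrap it in a crypt remote that points at the same encrypted folder with the same passwords, and point the noisy docker at that remote instead.

[gdrive_bazarr]
type = drive
client_id = SEPARATE_CLIENT_ID.apps.googleusercontent.com
client_secret = SEPARATE_SECRET
scope = drive
token = [token for the same account, authorised separately]

[gdrive_bazarr_vfs]
type = crypt
remote = gdrive_bazarr:crypt
filename_encryption = standard
directory_name_encryption = true
password = [same obscured password as gdrive_media_vfs]
password2 = [same obscured password2 as gdrive_media_vfs]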

Once banned, there's nothing you can do but wait until your quota is reset. The usual reset time is midnight US Pacific (where Google HQ is).

(You can see when it resets and how many API calls you have made from your API dashboard - https://console.developers.google.com/apis/dashboard - then click on Quota.)

 

That is assuming you have set up your own API + OAUTH client_id + shared the team drive with the appropriate account.

Link to comment
