Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

So I first set this up without service accounts and a team drive. I have now gone through the entire process to create the service accounts and the drives, and added them correctly. What is the easiest way to migrate from a normal gdrive to a team/shared drive with service accounts?

Edited by Michel Amberg
Link to comment

A couple of questions.

1. Have there been any significant changes to the scripts within the last year where you'd recommend updating? I'm using mergerfs currently. Can anything be gained in regards to Plex load times?

 

2. What do people use for music, or for libraries with a lot of files?

Mount multiple team drives? Or I've read about using a regular GDrive, but then you have disadvantages like limited storage, right?

Link to comment

I am also wondering what this is?

service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json

 

I followed both the GitHub guide and the AutoRclone guide, but I don't see it mentioned anywhere other than here. I have all my service accounts in there, but do I need some kind of special file in there? What should it contain?

 

Link to comment

What is the VFS cache doing here? It looks like it should remove these items, but the cache stays and fills up my drive.



2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E01 - Untaught Emotions Bluray-1080p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E02 - In the Fire Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E03 - Tears Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E04 - Ill Fortune Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E05 - Alphas Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E06 - Hostage Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E07 - The Ring of Exposure Bluray-720p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E08 - A Normal Family Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E09 - Whereabouts of the Pendant Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E10 - Party Bluray-720p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E11 - Puppet Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E12 - The Red Stake Bluray-480p.mkv was removed, freed 0 bytes
2021/10/11 03:14:48 INFO  : vfs cache RemoveNotInUse (maxAge=0, emptyOnly=false): item anime/Zetman/Season 1/Zetman - S01E13 - Funeral Procession Bluray-480p.mkv was removed, freed 0 bytes

Link to comment
On 10/12/2021 at 1:13 AM, ryanm91 said:

Anyone have an issue with Plex where it doesn't like to direct play, or if it does play, it's just a black screen with no sound? I originally thought it was maybe my Nvidia Shield, but on another server with local storage I get zero issues. Paths are mounted correctly, and if I force transcoding it will play no issue.

All of my clients bar 1 are Nvidia Shields and I have no problems - except with the 2019 sticks, which were iffy with 4K content, buffering and sometimes crashing.  I added in --vfs-read-ahead 2G which seemed to do the trick.

 

Also, try upping --vfs-read-chunk-size higher than the default (I think it's 32M).  I use --vfs-read-chunk-size 256M for my 4K files.

 

	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-read-chunk-size 256M \
	--vfs-cache-max-size 100G \
	--vfs-cache-max-age 96h \
	--vfs-read-ahead 2G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

 

Edited by DZMM
  • Like 1
Link to comment
On 10/12/2021 at 9:10 PM, Playerz said:

Hi

I'm having trouble with high-bitrate videos on Plex, and only high-bitrate 4K movies.

I've got 1/1 gig fiber, so that shouldn't be a problem.

Everything else works like a dream.

Sent from my SM-G996B using Tapatalk
 

try adding:

 

--vfs-read-ahead 2G 

 

I was having the same problem.  I think what's happening is the first chunk isn't enough of the file to keep Plex happy, so for 4K/high-bitrate you need more of the file ready before playback starts.
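As a rough back-of-envelope check (my numbers, not from the post): at an assumed 4K remux bitrate of ~80 Mbit/s, a 2G read-ahead holds a few minutes of video, which is why the bigger buffer keeps Plex fed before playback starts:

```shell
# Illustrative sizing only: how many seconds of video fit in the read-ahead.
# 80 Mbit/s is an assumed 4K bitrate; 2G read-ahead treated as 2048 MByte.
bitrate_mbps=80
readahead_mb=2048
bytes_per_sec_mb=$(( bitrate_mbps / 8 ))     # 10 MByte/s
echo $(( readahead_mb / bytes_per_sec_mb ))  # seconds of video buffered: 204
```

Lower-bitrate files simply buffer proportionally more seconds, which is why the same setting is harmless for non-4K content.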

 

Also, try upping --vfs-read-chunk-size higher than the default (I think it's 32M).  I use --vfs-read-chunk-size 256M for my 4K files.

 

	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-read-chunk-size 256M \
	--vfs-cache-max-size 100G \
	--vfs-cache-max-age 96h \
	--vfs-read-ahead 2G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

 

Edited by DZMM
Link to comment
On 10/12/2021 at 10:03 PM, Michel Amberg said:

So I first set this up without service accounts and a team drive. I have now gone through the entire process to create the service accounts and the drives, and added them correctly. What is the easiest way to migrate from a normal gdrive to a team/shared drive with service accounts?

1. change your rclone config to look something like this:

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
team_drive = xxxxxxxxxxxxxxxxxxxxxxxxx
server_side_across_configs = true

 

2. In the folder where your service account files are (e.g. in my case /mnt/user/appdata/other/rclone/service_accounts), make sure they are numbered sa_tdrive1.json, sa_tdrive2.json, sa_tdrive3.json and so on
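If your AutoRclone-generated keys have arbitrary names, a small loop can do the numbering. This is a sketch only - the source/destination paths and the sa_tdrive prefix are assumptions to adapt to your own setup:

```shell
# number_sa_files SRC DEST PREFIX - copy every .json in SRC to DEST,
# renamed PREFIX1.json, PREFIX2.json, ... in the order the shell lists them.
number_sa_files() {
    src="$1"; dest="$2"; prefix="$3"
    mkdir -p "$dest"
    i=1
    for f in "$src"/*.json; do
        cp "$f" "$dest/${prefix}${i}.json"
        i=$((i + 1))
    done
}

# e.g. (hypothetical paths):
# number_sa_files accounts /mnt/user/appdata/other/rclone/service_accounts sa_tdrive
```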

 

3. Then fill in the settings in the upload script

 

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="Y" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_tdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="16" # Integer number of service accounts to use.

 

4. If you need to move files from the old gdrive mount to the tdrive, it's best to do this within Google Drive if there's more than 750GB, to avoid quota issues.  Stop all your dockers etc. until you've finished the move, create the new tdrive mount, and once all the files are available in the right place, restart your dockers.

Link to comment
On 10/13/2021 at 4:40 PM, Michel Amberg said:

I am also wondering what this is?

service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json

 

I followed both the GitHub guide and the AutoRclone guide, but I don't see it mentioned anywhere other than here. I have all my service accounts in there, but do I need some kind of special file in there? What should it contain?

 

you should have around 100 json files if you've done the steps correctly.  You need to rename up to 15-16 of them (16 needed to max out a 1Gbps line) to sa_gdrive1.json, sa_gdrive2.json and so on, and put them in a directory of your choosing

Link to comment

@DZMM:

 

1. As far as I understand, you are using a regular Drive for music, but first upload through a team drive and afterwards run a move script from the TD to the Drive, right?

What does the script for this look like?

 

I have 4 TDs and 4 crypts (4 mount scripts and 4 upload scripts).

 

I wanted to create a regular Drive, but the client ID etc. was too much hassle when already having TDs, so a script to move from one to the other is, I think, a good idea.

 

2. Also, the first part of my script looks like this. Can I optimize it for loading times? I haven't used cache or anything like that.

 

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="google_crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="sonarr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"media/temp,downloads/complete"\} # comma separated list of folders to create within the mount
 

Edited by Bjur
Link to comment
7 minutes ago, Bjur said:

1. As far as I understand, you are using a regular Drive for music, but first upload through a team drive and afterwards run a move script from the TD to the Drive, right?

What does the script for this look like?

Correct

 

9 minutes ago, Bjur said:

I wanted to create a regular Drive, but the client ID etc. was too much hassle when already having TDs, so a script to move from one to the other is, I think, a good idea.

creating a client_id is a doddle: https://rclone.org/drive/#making-your-own-client-id.  If you are using service accounts you don't have to do this.

 

Because you're moving server-side you don't need to do anything fancy

 

rclone move tdrive:crypt/encrypted_name_of_music_folder gdrive:crypt/encrypted_name_of_music_folder \
--user-agent="transfer" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit \
--delete-empty-src-dirs

 

12 minutes ago, Bjur said:

2. Also, the first part of my script looks like this. Can I optimize it for loading times? I haven't used cache or anything like that.

Do you mean playback times?  Every setup is different.  There are a few settings you can play with, e.g. --vfs-read-chunk-size and --vfs-read-ahead.  Here's what I currently have for my 4K tdrive mount.

 

# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-read-chunk-size 256M \
	--vfs-cache-max-size 100G \
	--vfs-cache-max-age 96h \
	--vfs-read-ahead 2G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

 

Link to comment

Thanks for the quick reply.

To understand correctly: when you say I don't need a client ID - if I have a regular Drive and want to access it like the TD, I need to create the Drive remote in rclone, and that needs the client ID. I tried without, just copying a TD remote and editing the settings, but there's no mountpoint and it won't create.

 

2. Yes, I mean playback time, but on my TDs I have both 4K and non-4K together; perhaps I can try the settings in the mount script.

Link to comment
1 hour ago, DZMM said:

you should have around 100 json files if you've done the steps correctly.  You need to rename up to 15-16 of them (16 needed to max out a 1Gbps line) to sa_gdrive1.json, sa_gdrive2.json and so on, and put them in a directory of your choosing

 

Ah, so that is where the confusion comes from. The rclone config posted in the guide just refers to one of these files? Do I need to add multiple rows in the config for every SA account? Also, your script renames my files to sa_gdrive_upload1.json, sa_gdrive_upload2.json etc., not what is described in the guide, so that confused me.

Edited by Michel Amberg
Link to comment
2 hours ago, Bjur said:

Thanks for the quick reply.

To understand correctly: when you say I don't need a client ID - if I have a regular Drive and want to access it like the TD, I need to create the Drive remote in rclone, and that needs the client ID. I tried without, just copying a TD remote and editing the settings, but there's no mountpoint and it won't create.

 

2. Yes I mean playback time, but on my TDs I have both 4k and non-4k together, but perhaps I can try the settings in the mountscript.

sorry if I confused you - if you are using SAs you don't need client IDs

 

For playback, it's best to experiment.  The defaults work well except for 4K content.  Having the same settings for non-4K will be OK - it'll just mean start times might be 1-2s longer (wow)

Link to comment
1 hour ago, Michel Amberg said:

 

Ah, so that is where the confusion comes from. The rclone config posted in the guide just refers to one of these files? Do I need to add multiple rows in the config for every SA account? Also, your script renames my files to sa_gdrive_upload1.json, sa_gdrive_upload2.json etc., not what is described in the guide, so that confused me.

the script changes the SA file used by rclone and overwrites the entry in the rclone config - i.e. I think the script takes care of it even if you don't add an initial value.

 

The script doesn't rename the files - you have to create 1-16, put them in the right directory, and then the script will rotate which one rclone uses for each run
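The rotation can be sketched like this (a simplified illustration, not the actual upload script - the function name, counter-file location and prefix are mine): keep a counter on disk, bump it each run with wrap-around, and hand the matching sa_gdrive<n>.json to rclone.

```shell
# next_sa_file COUNTER_FILE MAX PREFIX - print the path of the next service
# account key to use, cycling 1..MAX across runs via a counter stored on disk.
next_sa_file() {
    counter_file="$1"   # file holding the last-used counter
    max="$2"            # how many numbered service accounts exist, e.g. 16
    prefix="$3"         # e.g. /path/to/sa_gdrive
    last=$(cat "$counter_file" 2>/dev/null || echo 0)
    next=$(( last % max + 1 ))     # wraps back to 1 after MAX
    echo "$next" > "$counter_file"
    echo "${prefix}${next}.json"
}

# The chosen key can then be passed to rclone per run, e.g. (illustrative):
# rclone move ... --drive-service-account-file="$(next_sa_file /tmp/sa_counter 16 /path/sa_gdrive)"
```

Passed this way, the service_account_file line in rclone.conf is only a starting default; the script may equally rewrite that config entry on each run instead.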

Link to comment
20 minutes ago, DZMM said:

sorry if I confused you - if you are using SAs you don't need client IDs

 

For playback, it's best to experiment.  The defaults work well except for 4K content.  Having the same settings for non-4K will be OK - it'll just mean start times might be 1-2s longer (wow)

But my confusion is that I will need a regular Drive in rclone, and from what I can read I can't use SAs for a regular Drive, right? Then I need a client ID. I've created a regular Drive in rclone, but it will only create a mountpoint if I configure SAs besides the client ID. The funny thing is I can run the upload script and it uploads, but I can't see the test file in Drive.

Link to comment

This is how my Drive looks. I have 4 TDs configured besides this.

 

[googleAuDr]
type = drive
drive = 
server_side_across_configs = true
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_dau1.json
client_id = 
client_secret = 
scope = drive

[googleAuDr_crypt]
type = crypt
remote = googleAuDr:GCryptAuDr
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

Link to comment
1 hour ago, Bjur said:

But my confusion is that I will need a regular Drive in rclone, and from what I can read I can't use SAs for a regular Drive, right? Then I need a client ID. I've created a regular Drive in rclone, but it will only create a mountpoint if I configure SAs besides the client ID. The funny thing is I can run the upload script and it uploads, but I can't see the test file in Drive.

you can use a normal drive (with a client ID) or a teamdrive (with SAs)

Link to comment
17 hours ago, DZMM said:

the script changes the SA file used by rclone and overwrites the entry in the rclone config - i.e. I think the script takes care of it even if you don't add an initial value.

 

The script doesn't rename the files - you have to create 1-16, put them in the right directory, and then the script will rotate which one rclone uses for each run

How are you supposed to understand from your guide that you need to rename the files again, when there is already a script to rename them? I am not getting this: if I rename them 1-16, adding that number to the file, will the config automatically find it when it's not even named the correct thing? The config points to sa_gdrive.json, not sa_gdrive1.json. I actually think this is still not clear to me

Edited by Michel Amberg
Link to comment
1 hour ago, Michel Amberg said:

How are you supposed to understand from your guide that you need to rename the files again, when there is already a script to rename them? I am not getting this: if I rename them 1-16, adding that number to the file, will the config automatically find it when it's not even named the correct thing? The config points to sa_gdrive.json, not sa_gdrive1.json. I actually think this is still not clear to me

It is explained what to do in the config section of the upload script

Link to comment
19 minutes ago, DZMM said:

It is explained what to do in the config section of the upload script

ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".

 

So basically remove the counters - I get that - but what about the rclone config? If there is no file named sa_gdrive_upload it will not work, right?

Edited by Michel Amberg
Link to comment
9 minutes ago, Michel Amberg said:
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".

 

So basically remove the counters - I get that - but what about the rclone config? If there is no file named sa_gdrive_upload it will not work, right?

OK, I will try to make it a bit clearer; it feels like we are misunderstanding each other. I did the renaming part after completing the AutoRclone guide, with a shared drive and a group with my SAs. This is what my service_accounts folder looks like:

[screenshot of the service_accounts folder: sa_gdrive_upload1.json, sa_gdrive_upload2.json, ...]

 

Ranging all the way up to 100. So in the upload script I should ignore the numbers and just write sa_gdrive_upload - so far so good. My problem is in the rclone config, where your guide states this:
 

[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json
team_drive = TEAM DRIVE ID
server_side_across_configs = true

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD1
password2 = PASSWORD2

 

Specifically this part:
 

service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json

 

If there is no file called sa_gdrive.json, it won't work - so what is this file? Just one of the service accounts, renamed? A collection of all of them in one file somehow? A service account file downloaded from Google? I don't know, and I don't understand this part; there is no way for me to understand it from your guide. I have read it several times and watched YouTube videos in foreign languages, and I still can't wrap my head around what to do here. Everything else is pretty clear - I needed to jump through some hoops to get it to work, but that is fine.

Link to comment
On 10/16/2021 at 3:05 PM, DZMM said:

 

OK, since I renamed the files as I posted in the picture above, I will just try to add it as-is, and it should just work then? I think this should be included in the guide; it's a bit confusing as it is now - I thought I had misunderstood something. I am manually uploading everything to the cloud now, so I will live with the upload limit; it's just 2 more days until I am done with 10TB of data.

Link to comment
