Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


3 minutes ago, DZMM said:

I use the same passwords for both mounts and different IDs (I tried the same ID, which didn't fix the problem), but different tokens because I'm using two different Google accounts.  I'm not sure what rclone uses to encrypt, but I'm guessing it's the tokens or the actual remote name.

You say the same passwords, but did you also define the salt yourself during setup? If you did, then it is very strange: encryption should only be based on the password + salt. I will do some testing myself tomorrow.

3 minutes ago, Kaizac said:

You say the same passwords, but did you also define the salt yourself during setup? If you did, then it is very strange: encryption should only be based on the password + salt. I will do some testing myself tomorrow.

Yep, that's what I thought as well.  The options I chose are:

 

- 2 / Encrypt the filenames

- 1 / Encrypt directory names

- Password or pass phrase for encryption: y) Yes, type in my own password

- Password or pass phrase for salt: y) Yes, type in my own password

Just now, DZMM said:

Yep, that's what I thought as well.  The options I chose are:

 

- 2 / Encrypt the filenames

- 1 / Encrypt directory names

- Password or pass phrase for encryption: y) Yes, type in my own password

- Password or pass phrase for salt: y) Yes, type in my own password

And you created the exact same file structure before you started populating?

6 minutes ago, Kaizac said:

And you created the exact same file structure before you started populating?

That could be it.  My gdrive path is Gdrive/crypt/then_encrypted_media_folders and my team drive is Gdrive/highlander (the name of the tdrive)/crypt/then_encrypted_media_folders. Maybe if I named the tdrive 'crypt' and put the media folders in the root, the names would match up.  You're not really supposed to encrypt the root of a remote (I'll check why, as I'm not sure).

 

I'm going to test now by creating a teamdrive called crypt and adding folders to its root.

1 minute ago, DZMM said:

That could be it.  My gdrive path is Gdrive/crypt/then_encrypted_media_folders and my team drive is Gdrive/highlander (the name of the tdrive)/crypt/then_encrypted_media_folders. Maybe if I named the tdrive 'crypt' and put the media folders in the root, the names would match up.  You're not really supposed to encrypt the root of a remote (I'll check why, as I'm not sure).

 

I'm going to test now by creating a teamdrive called crypt and adding folders to its root.

I don't think the name of the tdrive itself matters, since you start your crypt a level below it (at least that's how I set it up). The file structure below that does matter, though: based on the password + salt, the directories get their own unique names, and that continues down to the levels below. I'm curious what your findings will be!
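For anyone wanting to check this without moving any data around, rclone's cryptdecode command (if your build has it) can be used to compare the obscured names two crypt remotes produce. A rough sketch; the remote names and the path below are only examples, swap in your own:

# Encrypt the same relative path with both crypt remotes.
# If password + password2 (salt) match, the output should be identical.
rclone cryptdecode --reverse gdrive_media_vfs: tv/some_show
rclone cryptdecode --reverse tdrive_media_vfs: tv/some_show

# Decode an obscured name seen in the Google Drive web UI back to plain text
rclone cryptdecode gdrive_media_vfs: <obscured_folder_name>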

55 minutes ago, Kaizac said:

I don't think the name of the tdrive itself matters, since you start your crypt a level below it (at least that's how I set it up). The file structure below that does matter, though: based on the password + salt, the directories get their own unique names, and that continues down to the levels below. I'm curious what your findings will be!

Hmm, not sure what was going on.  I created a new teamdrive 'crypt' and the obscured file and folder names match up, regardless of what level I place the crypt at.  I'm going to ditch the first teamdrive I created and use this one, and just double-check the names manually the first time I decide to transfer files, if I do.

 

I've added two users to this teamdrive, so I'm up to 3x750GB per day. I'll only use the 3rd upload manually, as I won't need it very often.

47 minutes ago, DZMM said:

Hmm, not sure what was going on.  I created a new teamdrive 'crypt' and the obscured file and folder names match up, regardless of what level I place the crypt at.  I'm going to ditch the first teamdrive I created and use this one, and just double-check the names manually the first time I decide to transfer files, if I do.

 

I've added two users to this teamdrive, so I'm up to 3x750GB per day. I'll only use the 3rd upload manually, as I won't need it very often.

Maybe the passwords didn't register correctly during rclone config. I've had that before with my client ID and secret: pasting through PuTTY doesn't always come through correctly, so the mount works but not properly. Glad you got this fixed. I'm currently filling up my download backlog again, so this will be a nice way to upload it quickly.


@DZMM Nice setup with the 2nd and 3rd team drive. Wish I could join in all the fun! For the dedi I'm using a Xeon E3-1230v2 with 16GB of RAM, a 2TB HDD, a 120GB SSD for the OS and applications and a 240GB SSD for incoming data, running Ubuntu 16. Unfortunately this was a steep learning curve for me since it was all CLI, but I figured it out and saved all of the pages I used for reference. Backups of my setup have been made, both locally and in the cloud, so if anything ever happened or I wanted to switch to another dedicated server I could pick up quickly.

Edited by slimshizn
3 hours ago, DZMM said:

I've added two users to this teamdrive, so I'm up to 3x750GB per day. I'll only use the 3rd upload manually, as I won't need it very often.

OK, I checked that the 3rd user is working properly by creating a new remote using this user's token and then mounting it. It decrypted the existing folders and files correctly, so the password/encryption sync worked.

 

🙂 🙂 


I've got it working.

 

The only problem is that there are so many exits, so it may happen that the script doesn't run to the end (if no host is alive).

 

Limiting NZBGet and stopping rclone does work 100%, though.

 



#!/bin/bash
#rm /mnt/user/downloads/pingtest
#touch /mnt/user/downloads/pingtest

### Here you can enter your hosts' IP addresses. You can add as many as you want,
### but then you also need to add them to the ping check further down.
host=192.168.86.1
host2=192.168.86.48
host3=192.168.86.154
### Hosts

### <-- Debug, can be removed
### current_date_time="$(date "+%Y-%m-%d %H:%M:%S")";
### echo $current_date_time >> /mnt/user/downloads/pingtest;

### echo "host 1" >> /mnt/user/downloads/pingtest;
### ping -c 1 -W1 -q $host > /dev/null
### echo "$?" >> /mnt/user/downloads/pingtest;

### echo "host 2" >> /mnt/user/downloads/pingtest;
### ping -c 1 -W1 -q $host2 > /dev/null
### echo "$?" >> /mnt/user/downloads/pingtest;

### echo "host 3" >> /mnt/user/downloads/pingtest;
### ping -c 1 -W1 -q $host3 > /dev/null
### echo "$?" >> /mnt/user/downloads/pingtest;
### --> Debug, can be removed

### Ping the 3 hosts; if at least one answers, throttle NZBGet and stop the upload
if ping -c 1 -W1 -q $host > /dev/null || ping -c 1 -W1 -q $host2 > /dev/null || ping -c 1 -W1 -q $host3 > /dev/null; then

    ### Check if the throttle has already been applied
    if [[ -f "/mnt/user/appdata/other/speed/limited_speed" ]]; then
        logger "$(date "+%d.%m.%Y %T") NZBGet already throttled / rclone already killed."
        exit
    else
        touch /mnt/user/appdata/other/speed/limited_speed
    fi
    ###

    logger "At least 1 ping successful, throttling upload/download"

    ### Stop uploading to gdrive
    killall -9 rclone
    ###

    ### Throttle nzbget to 10 Mbit/s download
    docker exec nzbget /app/nzbget/nzbget -c /config/nzbget.conf -R 10000
    ###

    rm -f /mnt/user/appdata/other/speed/unlimited_speed

else

    rm -f /mnt/user/appdata/other/speed/limited_speed

    ### upload script should always run
    ######  Check if script already running  ##########
    if [[ -f "/mnt/user/appdata/other/rclone/rclone_upload" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
        exit
    else
        touch /mnt/user/appdata/other/rclone/rclone_upload
    fi
    ######  End check if script already running  ##########

    ######  Check if rclone installed  ##########
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
        rm /mnt/user/appdata/other/rclone/rclone_upload
        exit
    fi
    ######  End check if rclone installed  ##########

    # move files
    rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 128M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --tpslimit 3 --min-age 30m

    # remove dummy file
    rm /mnt/user/appdata/other/rclone/rclone_upload
    ### upload script end

    ### Check if the throttle has already been removed
    if [[ -f "/mnt/user/appdata/other/speed/unlimited_speed" ]]; then
        logger "$(date "+%d.%m.%Y %T") NZBGet already unthrottled."
        exit
    else
        touch /mnt/user/appdata/other/speed/unlimited_speed
    fi
    ###

    logger "All pings failed, starting upload/download"

    ### Unthrottle nzbget
    docker exec nzbget /app/nzbget/nzbget -c /config/nzbget.conf -R 0
    ###

fi

17 hours ago, DZMM said:

OK, I checked that the 3rd user is working properly by creating a new remote using this user's token and then mounting it. It decrypted the existing folders and files correctly, so the password/encryption sync worked.

 

🙂 🙂 

Do you manually populate your rclone_upload_tdrive folders to have those files moved to the cloud, or did you automate this somehow?

And how do you use the multiple accounts for the tdrive? Do you create multiple rclone mounts for each user?

51 minutes ago, Kaizac said:

Do you manually populate your rclone_upload_tdrive folders to have those files moved to the cloud, or did you automate this somehow?

And how do you use the multiple accounts for the tdrive? Do you create multiple rclone mounts for each user?

I have just one rclone_upload folder, and I created 3 upload scripts: one uploads /mnt/cache/rclone_upload, another /mnt/user0/rclone_upload, and the third is a booster and is currently uploading from /mnt/disk4/rclone_upload.

 

Yes, one tdrive with multiple accounts - one for each upload instance.  Then there's only one tdrive to mount.  Check my GitHub scripts for how I added in tdrive support.

 

The mount I did for the 3rd user was just temporary to double-check it all worked as expected. 

 

To add each user, just create new tdrive remotes with the same tdrive ID, the same rclone passwords and the same remote location, but a different user token for each (and different client IDs to spread the API hits).

Edited by DZMM
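For reference, a minimal sketch of what an extra per-user remote pair could look like in rclone.conf. All the names and values below are placeholders, not anyone's actual config; the team_drive value stays the same Team Drive ID in every copy, and only the token and client_id/client_secret change per user:

[tdrive_user2]
type = drive
client_id = <user2's client ID>
client_secret = <user2's client secret>
scope = drive
team_drive = <same Team Drive ID as the main remote>
token = <token generated while authorising as user2>

[tdrive_user2_vfs]
type = crypt
remote = tdrive_user2:crypt
filename_encryption = standard
directory_name_encryption = true
password = <same password as the existing crypt remote>
password2 = <same salt as the existing crypt remote>

As noted above, only the main crypt remote needs to be mounted; the extra ones are just targets for the additional upload scripts.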

This is working very well.  I just moved files within Google Drive between My Drive and the Team Drive, and once the dir cache updated they appeared in the tdrive mount and played perfectly 🙂

 

I'm going to do a few more trial moves and, if they go well, I'm going to move all my files to the teamdrive and only upload to it going forwards.

 

I wonder if Google realises one user can create multiple teamdrives to share with friends to give them unlimited storage?

2 hours ago, DZMM said:

This is working very well.  I just moved files within Google Drive between My Drive and the Team Drive, and once the dir cache updated they appeared in the tdrive mount and played perfectly 🙂

 

I'm going to do a few more trial moves and, if they go well, I'm going to move all my files to the teamdrive and only upload to it going forwards.

 

I wonder if Google realises one user can create multiple teamdrives to share with friends to give them unlimited storage?

I'm currently setting this up with 5 APIs in total. While doing this I wonder if we can use this method to separate streaming and Docker activities with separate APIs. Currently programs like Bazarr cause a lot of API hits, often resulting in bans, which causes playback to fail. Maybe if we use one mount just for streaming and a separate API for Docker activities like Bazarr it won't be a problem anymore? I'm just not sure how to set this up yet, but with a Team Drive this should work with separate accounts, I think.

1 hour ago, Kaizac said:

I'm currently setting this up with 5 APIs in total. While doing this I wonder if we can use this method to separate streaming and Docker activities with separate APIs. Currently programs like Bazarr cause a lot of API hits, often resulting in bans, which causes playback to fail. Maybe if we use one mount just for streaming and a separate API for Docker activities like Bazarr it won't be a problem anymore? I'm just not sure how to set this up yet, but with a Team Drive this should work with separate accounts, I think.

Just create another encrypted remote for Bazarr with a different client_ID pointing to the same gdrive/tdrive, e.g.:

 

[gdrive_bazarr]
type = drive
client_id = Diff ID
client_secret = Diff secret
scope = drive
root_folder_id = 
service_account_file = 
token = {should be able to use the same token, or create a new one if pointed at the teamdrive}

[gdrive_bazarr_vfs]
type = crypt
remote = gdrive_bazarr:crypt
filename_encryption = standard
directory_name_encryption = true
password = same password
password2 = same password

 

One problem I'm encountering is that the multiple upload scripts are using a fair bit of memory, so I'm investigating how to reduce the memory usage by removing things like --fast-list from the upload script.  Not a biggie, as I can fix it.
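For anyone trying the same, a trimmed version of the move command might look something like this. It is just a sketch based on the script earlier in the thread: --fast-list builds the whole listing in memory, and each transfer buffers roughly --drive-chunk-size in RAM, so both are knobs worth lowering:

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv \
    --drive-chunk-size 64M --checkers 2 --transfers 2 \
    --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN \
    --exclude .recycle** --exclude *.backup~* --exclude *.partial~* \
    --delete-empty-src-dirs --tpslimit 3 --min-age 30m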


What do you say about this piece of the log?


If I understand it correctly, one chunk failed, but I don't need to start from scratch?!

 


 


2018/12/17 19:15:57 INFO : 
Transferred: 956.289M / 3.967 GBytes, 24%, 1.136 MBytes/s, ETA 45m33s
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 2, 0%
Elapsed time: 14m1.7s
Transferring:
* Filme/(T)Raumschiff …- Periode 1 2004.mp4: 21% /2.283G, 593.924k/s, 52m53s
* Filme/3 Days in Quib…in Quiberon 2018.avi: 26% /1.684G, 599.778k/s, 35m59s

2018/12/17 19:16:24 DEBUG : is7npm4pf8tr0c9otsuca7omdo/bp2gcotk4qc4n0ln7hr164e9f9ltadf317t7b41qqu8ibi2fuk6ng50fnqggamkk7tc5bb017c83q/aqvbl0sg71tbkndcpuls2lbreoj0hlhuigd6r3sq8eogcr39f7kbc4ptqfdgpnkoh85g562pqik5a: Sending chunk 536870912 length 134217728
2018/12/17 19:16:57 INFO : 
Transferred: 1.000G / 3.967 GBytes, 25%, 1.136 MBytes/s, ETA 44m33s
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 2, 0%
Elapsed time: 15m1.7s
Transferring:
* Filme/(T)Raumschiff …- Periode 1 2004.mp4: 22% /2.283G, 622.078k/s, 49m26s
* Filme/3 Days in Quib…in Quiberon 2018.avi: 28% /1.684G, 507.198k/s, 41m33s

2018/12/17 19:17:35 DEBUG : pacer: Rate limited, sleeping for 1.444749045s (1 consecutive low level retries)
2018/12/17 19:17:35 DEBUG : pacer: low level retry 1/10 (error Post https://www.googleapis.com/...net/http: HTTP/1.x transport connection broken: write tcp 192.168.86.2:54378->172.217.22.106:443: write: connection reset by peer)
2018/12/17 19:17:35 DEBUG : is7npm4pf8tr0c9otsuca7omdo/upbjjo2tplrmt2dkru0l1q1tmlnnm6frg4e8ni7a2ao69u3gc190/np0p319u0bpfmf9d98kfeb33u8jgf0ldaqbgq6hndickvf9l4grg: Sending chunk 402653184 length 134217728
2018/12/17 19:17:57 INFO : 
Transferred: 1.057G / 3.967 GBytes, 27%, 1.125 MBytes/s, ETA 44m7s
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 2, 0%
Elapsed time: 16m1.7s
Transferring:
* Filme/(T)Raumschiff …- Periode 1 2004.mp4: 24% /2.283G, 719.491k/s, 41m49s
* Filme/3 Days in Quib…in Quiberon 2018.avi: 29% /1.684G, 120.617k/s, 2h52m6s
 

 

Edited by nuhll

Besides this I've got everything working, finally. Big thanks to all.

 

The only thing left is a script which runs over my old archive and moves old (1 year or older) files to the rclone upload folder (incl. (sub)directories).

But it should move at most X files at a time, or only top up when there are fewer than X files in the rclone upload folder (so that my whole archive doesn't sit inside the upload directory for the next year... xD).

 

Anyone have any idea?

Edited by nuhll
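In case it helps, here's a rough, untested sketch of that idea. The archive and upload paths and MAX_FILES are placeholders to adjust, and it relies on GNU find's -printf (which Unraid has):

#!/bin/bash
# Top up the rclone upload folder with files older than 1 year from the archive,
# but only while the upload folder holds fewer than MAX_FILES files.

ARCHIVE=/mnt/user/archive
UPLOAD=/mnt/user/rclone_upload/google_vfs
MAX_FILES=50

current=$(find "$UPLOAD" -type f | wc -l)
room=$((MAX_FILES - current))
[ "$room" -le 0 ] && exit 0

# -mtime +365 = modified more than a year ago; %P keeps the path relative to
# $ARCHIVE so the (sub)directory structure is recreated under $UPLOAD.
find "$ARCHIVE" -type f -mtime +365 -printf '%P\n' | head -n "$room" | \
while IFS= read -r rel; do
    mkdir -p "$UPLOAD/$(dirname "$rel")"
    mv "$ARCHIVE/$rel" "$UPLOAD/$rel"
done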
1 hour ago, nuhll said:

What do you say about this piece of the log?


If I understand it correctly, one chunk failed, but I don't need to start from scratch?!

 

rclone move automatically retries failed transfers, so you'll be OK. It's why it's best to upload via move rather than writing directly to the mount: if a write fails there, it's permanent.
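If you want more control over that retry behaviour, the counts can be set explicitly on the move (these are standard rclone flags; the values shown are just the defaults written out):

# --retries           = retries of the whole run if anything failed
# --low-level-retries = retries of an individual operation (e.g. one chunk)
rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: \
    --retries 3 --low-level-retries 10 --min-age 30m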


So, I figured everything out and nearly everything is working.

 

Only one thing.


If I killall -9 rclone, the upload doesn't stop. I need to killall -9 rcloneorig, but then /mnt/user/mount_rclone/google_vfs no longer works.

 

Is there any way I can limit or stop the upload without having to remount every time? And no, I don't want to limit it permanently.

 

Okay, I found something:

 

root@Unraid-Server:~# rclone rc core/bwlimit rate=1M
{
    "rate": "1M"
}

 

 

root@Unraid-Server:~# kill -SIGUSR2 rclone
-bash: kill: rclone: arguments must be process or job IDs

 

SIGUSR2 will remove the limit (exactly what I'm searching for), but how do I get the process ID(s)?

 

 

 

root@Unraid-Server:~# ps -A -o pid,cmd | grep rclone | grep -v grep | head -n 1 | awk '{print $1}'
5037

root@Unraid-Server:~# kill -SIGUSR2 5037

root@Unraid-Server:~# ps -A -o pid,cmd | grep rclone | grep -v grep | head -n 1 | kill -SIGUSR2 '{print $1}'
-bash: kill: {print $1}: arguments must be process or job IDs
head: write error: Broken pipe

root@Unraid-Server:~# ps -A -o pid,cmd | grep rclone | grep -v grep | head -n 1 | kill -SIGUSR2 $1
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
head: write error: Broken pipe

root@Unraid-Server:~# ps -A -o pid,cmd | grep rclone | grep -v grep | head -n 1 | kill -SIGUSR2 '$1'
-bash: kill: $1: arguments must be process or job IDs
head: write error: Broken pipe
pgrep: no matching criteria specified
Try `pgrep --help' for more information.

root@Unraid-Server:~# pgrep rclone
5040
13199
13201
14916
14924
15576
15578

 

Edited by nuhll
On 12/16/2018 at 11:42 AM, DZMM said:

Yes, one tdrive with multiple accounts - one for each upload instance.  Then there's only one tdrive to mount.  Check my GitHub scripts for how I added in tdrive support.

 

Please pardon what is likely an extremely silly and basic question from someone who has not used Google's My Drive or Team Drives at all (I apologize to no one!): Wouldn't one have to pay an additional monthly fee for each additional user for Team Drive (to improve your daily upload limits)? 

2 minutes ago, BRiT said:

 

Please pardon what is likely an extremely silly and basic question from someone who has not used Google's My Drive or Team Drives at all (I apologize to no one!): Wouldn't one have to pay an additional monthly fee for each additional user for Team Drive (to improve your daily upload limits)? 

I guess you can create teamdrives without a team...

1 minute ago, BRiT said:

 

Please pardon what is likely an extremely silly and basic question from someone who has not used Google's My Drive or Team Drives at all (I apologize to no one!): Wouldn't one have to pay an additional monthly fee for each additional user for Team Drive (to improve your daily upload limits)? 

No.  A team drive created by a Google Apps user (who has unlimited storage) can be shared with any Google account user(s).  So you could create team drives for friends to give them unlimited storage.  Each user has a 750GB/day upload quota, so as long as each upload to the shared teamdrive comes from a different user (a different rclone token for the remote, and a different client_ID to try and avoid API bans), you can utilise the extra quotas.  I've added 3 accounts to my Plex teamdrive and it's all working fine so far for 4 uploads (3 shared users and my Google Apps account).

 

I imagine Google has a fair use policy to clamp down on real abuse, e.g. creating 100 teamdrives.


Why is 

rclone rc core/bwlimit rate=1M

 

not working?

 

https://rclone.org/docs/

 

I can't use my internet now, it's crazy. :(

 

 

root@Unraid-Server:~# ps auxf |grep 'rclone'|`awk '{ print "kill -SIGUSR2 " $2 }'`
-bash: kill: (13791) - No such process
-bash: kill: kill: arguments must be process or job IDs
-bash: kill: -SIGUSR2: arguments must be process or job IDs
-bash: kill: kill: arguments must be process or job IDs
-bash: kill: -SIGUSR2: arguments must be process or job IDs
-bash: kill: kill: arguments must be process or job IDs
-bash: kill: -SIGUSR2: arguments must be process or job IDs
-bash: kill: kill: arguments must be process or job IDs
-bash: kill: -SIGUSR2: arguments must be process or job IDs
-bash: kill: kill: arguments must be process or job IDs
-bash: kill: -SIGUSR2: arguments must be process or job IDs
-bash: kill: kill: arguments must be process or job IDs

 

 

kill -SIGUSR2 `pgrep -f rclone`
 

 

That does work, but how do I know which state rclone is in, because it toggles between the bandwidth limit and none... :/

Edited by nuhll
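On the earlier rclone rc question: rclone rc only talks to a single running rclone instance, the one started with the --rc flag (it listens on localhost:5572 by default), so a bwlimit set that way won't affect other rclone processes such as a separate upload run. As for knowing which state the SIGUSR2 toggle is in, one option is to record it yourself in a marker file every time the signal is sent. A rough sketch; the marker path is just an example, and it assumes rclone was started with a --bwlimit for SIGUSR2 to toggle:

#!/bin/bash
# Toggle rclone's bandwidth limiter via SIGUSR2 and remember the current state.
STATE=/mnt/user/appdata/other/rclone/bwlimit_off

# Matches every rclone process, same as the command above
kill -SIGUSR2 $(pgrep -f rclone)

if [ -f "$STATE" ]; then
    rm -f "$STATE"
    logger "rclone bwlimit back ON"
else
    touch "$STATE"
    logger "rclone bwlimit OFF (unlimited)"
fi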