Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

@DZMM

OK, so maybe I've missed something, but adding the obfuscated passwords (all users using the same obfuscated password) still seems to work for encryption/decryption of files.

 

Testing this out, I just created a new remote called testpassword1. I input my original plaintext 'readable' password and gave it the original salt I used on creation. Looking at the config, this did indeed output a new, unique password and password2. It was not the same string of characters as the original remote's crypt password, so you are correct that it gives a UNIQUE password/password2 on each new creation.

 

Tested the remotes: I was able to decrypt the whole mount and I pushed a test file (via gdrive_counter1_vfs).

 

Checked the teamdrive via the web browser and a new file has been placed there (the name is encrypted).

 

Checked that the test file is viewable in all remotes. It is, even in remotes with different obfuscated passwords. 🤔 

 

gdrive_media_vfs and gdrive_counter1_vfs have the EXACT SAME password and password2. testpassword1 has its OWN UNIQUE password/password2.

 

root@Tower:~# rclone lsf gdrive_media_vfs:/testdir
test.txt
root@Tower:~# rclone lsf gdrive_counter1_vfs:/testdir
test.txt
root@Tower:~# rclone lsf testpassword1:/testdir
test.txt

 

I don't know how the obfuscation process works, but it appears that using the same obfuscated key does work. I've moved mounts between systems before and followed a similar procedure: simply copying and pasting the conf works (that's what the admins recommended on the rclone forum).
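For anyone following along, here's roughly what that looks like in rclone.conf. This is just a sketch with placeholder names and values, not my actual config; the point is simply that both crypt remotes carry the identical obfuscated password/password2 pair copied verbatim:

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = SAME_OBFUSCATED_PASSWORD
password2 = SAME_OBFUSCATED_PASSWORD2

[gdrive_counter1_vfs]
type = crypt
remote = tdrive_counter1:crypt
filename_encryption = standard
directory_name_encryption = true
password = SAME_OBFUSCATED_PASSWORD
password2 = SAME_OBFUSCATED_PASSWORD2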

 

Edit: To reflect DZMM's post on why this functions this way:

https://forum.rclone.org/t/crypt-remote-generating-different-hash-each-time-for-the-same-password/13154/11

Edited by watchmeexplode5
  • Like 1
Link to comment

@watchmeexplode5 wow, thanks for checking.  It seems a weird way to do it; otherwise why, as you confirmed, would rclone config spit out different text when the passwords are the same?  Weird!

 

Edit: found the answer here.  Wish I'd known this before; it would have saved me a lot of time 😞

 

https://forum.rclone.org/t/crypt-remote-generating-different-hash-each-time-for-the-same-password/13154/11

Edited by DZMM
Link to comment

@DZMM

 

No problem. It did throw me for a loop when the new remote had a different obfuscated password.   

Let me know if you need anything else tested.

 

btw, all the service accounts seem to be running smoothly. No issues uploading. Accounts are rotating and all new files are visible in the mount. 

Edited by watchmeexplode5
Link to comment
14 hours ago, watchmeexplode5 said:

From there you are kinda left to edit your rclone conf by yourself. I'm sure somebody could script it

Actually, I do have a script if you want to copy and paste the config; I use it for making a lot of remotes though. It's a subscript of my sharedrive mounter. You could populate an existing rclone.conf file pretty easily, but I usually have them all there already. If you have a client id / secret / token for an existing remote you could use that too.

https://github.com/maximuskowalski/smount/wiki/Rclone

 

I should add that I do not use encryption.

Edited by Spladge
  • Thanks 1
Link to comment
3 hours ago, Spladge said:

Actually, I do have a script if you want to copy and paste the config; I use it for making a lot of remotes though. It's a subscript of my sharedrive mounter. You could populate an existing rclone.conf file pretty easily, but I usually have them all there already. If you have a client id / secret / token for an existing remote you could use that too.

https://github.com/maximuskowalski/smount/wiki/Rclone

 

I should add that I do not use encryption.

@Spladge Excellent!

 

@watchmeexplode5 I have a question about the service account script.  It created hundreds of json files named  gobbledygook.json.  Did you just rename the ones you used to SERVICEACCOUNT01.json, SERVICEACCOUNT02.json ...SERVICEACCOUNTXX.json?

Link to comment

@Thel1988 and @watchmeexplode5 I found an error with the counter which messed it up on the first run.   Can you change:

 

	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/$RcloneRemoteName/counter_1
	fi

to:

	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/$RcloneRemoteName/counter_1
		CounterNumber="1"
	fi

Or just use the latest script on github. @watchmeexplode5 I've tested this with the service accounts (copying the passwords worked!) using this config, and everything worked perfectly for sa_tdrive1_vfs, sa_tdrive2_vfs, etc.:

 

UseMultipleUploadRemotes="Y"
RemoteNumber="16" 
RcloneUploadRemoteStart="sa_tdrive"
RcloneUploadRemoteEnd="_vfs"
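For clarity, this is how those settings expand into the upload remote names the script cycles through (illustrative sketch only, not lifted from the actual script):

# illustrative: print the upload remote names the settings above expand to
UseMultipleUploadRemotes="Y"
RemoteNumber="16"
RcloneUploadRemoteStart="sa_tdrive"
RcloneUploadRemoteEnd="_vfs"
if [[ $UseMultipleUploadRemotes == 'Y' ]]; then
	for i in $(seq 1 $RemoteNumber); do
		echo "${RcloneUploadRemoteStart}${i}${RcloneUploadRemoteEnd}"  # sa_tdrive1_vfs ... sa_tdrive16_vfs
	done
fi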

 

Link to comment

@DZMM Nice catch on the counter edits. That should fix the earlier issue I was having. I had left the SAs named gobbledygook.json and referenced those names in my config; now I'm realizing it would have been much cleaner to rename them, so I did 😛

 

There is probably a better way to achieve rotation with scripts like the one @Spladge referenced, but here is how I did it:

 

I'll use AutoRclone in the example, but it should work with similar scripts (e.g. 88lex's sa-gen). To my knowledge the end result is the same...

 

(1) ---- Follow steps 1-4 here https://github.com/xyou365/AutoRclone. You might have to add some dependencies via NerdTools. This should generate the accounts, add them to your group, and finally add the group to the teamdrive so they all have authorization.

 

(2) ---- Copy all your .json accounts to /mnt/user/appdata/other/sa_accounts/

     **** I'd copy them over to the new location just in case anything gets messed up along the way; you will still have the originals to fall back on. ****

 

(3) ---- Run the commands to rename all the *.json files:

cd /mnt/user/appdata/other/sa_accounts/

          Test command (dry run) prior to renaming:

n=1; for f in *.json; do echo mv "$f" "$((n++)).json"; done

          If the echo output looks right, remove the echo to actually rename the files:

n=1; for f in *.json; do mv "$f" "$((n++)).json"; done

          All .json files should now be renamed 1.json through 100.json

 

 

    ********************************************EDIT*******************************************

                                           STEPS BELOW ARE NO LONGER NEEDED

   *********SEE LATER POSTS ABOUT MODIFYING SINGLE RCLONE MOUNT FOR ROTATION********

 

(4) ---- Use the template I attached here and replace the placeholders with your values (client_id, secret, token, password, password2, and possibly the remote name)

         To do this, use an editor (I used Notepad++) and "find and replace all" with your values, e.g. YOUR_PASSWORD --> PASSWORD from your conf

         Your remote name needs to match your config. I use ":/encrypt" but others like DZMM use ":crypt", so find and replace those as well.

 

         To find out your values and where your config file is located on your machine run the following command:

rclone config file

 

(5) ---- Once you are happy with the edits, save it or copy and paste it into your rclone config.

 

Done. That should work and rotate service accounts with DZMM's scripts.

 

 

Rclone_conf_with_sa_template.txt

Edited by watchmeexplode5
  • Thanks 1
Link to comment

@Spladge I just finished looking at the rotation script you posted.

 

Updating a single remote to reflect the rotating service account looks like an infinitely better solution than having all the service accounts as unique remotes in the rclone config. It seems like it would be easy to implement in DZMM's upload script so that it changes the count and updates the rclone remote every time the upload script runs.

 

I'm going to play around with it and see what I can get working. Thank you for linking to that script!

 

  • Thanks 1
Link to comment

Both, but it depends what you are doing; this was made to avoid API bans when doing something like tdarr. So if you have a movies drive and a TV drive and so on, splitting them means a block on one remote doesn't affect the others.

 

Did you look at my simple rgen script for config file?
https://github.com/maximuskowalski/smount/blob/master/rgen.sh
I sometimes use a few rclone.conf files and just swap them out, with different batches of service accounts from different projects.

 

service_account_file = /opt/sa/76.json

becomes

service_account_file = /opt/sa2/76.json
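If you'd rather rewrite a single conf in place than keep multiple copies, something along these lines would do it (just a sketch; the path shown is rclone's default config location, and you should back the file up first):

# point every service_account_file at the second batch of accounts
sed -i 's|/opt/sa/|/opt/sa2/|g' ~/.config/rclone/rclone.conf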

 

Link to comment
2 hours ago, watchmeexplode5 said:

@Spladge I just finished looking at the rotation script you posted.

 

Updating a single remote to reflect the rotating service account looks like an infinitely better solution than having all the service accounts as unique remotes in the rclone config. It seems like it would be easy to implement in DZMM's upload script so that it changes the count and updates the rclone remote every time the upload script runs.

 

I'm going to play around with it and see what I can get working. Thank you for linking to that script!

 

I just had a quick play and it's easy to do:

JSONDIR="/your/dir/here"
SA="sa_tdrive" # enter the bit before the 1.  I'm assuming the first account will be 1 not 01
CounterNumber="1" # my way of doing the count is a bit numpty, but I can follow what's happening and keep track
SA+=$CounterNumber".json"
rclone config update $RcloneRemoteName service_account_file $JSONDIR/$SA # didn't know you could update the config like this

Much cleaner.  I will do this tomorrow when I'm awake so I don't make any mistakes
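A quick way to confirm the update landed (just a sanity check I'd run afterwards, not part of the snippet above):

rclone config show $RcloneRemoteName | grep service_account_file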

Edited by DZMM
  • Thanks 1
Link to comment

@DZMM, didn't see your last post in time...

 

I played around with it and implemented a Service Account rotation option like @Spladge suggested. Got a script up and running while maintaining the ability to utilize multiple remotes (so people have options to choose from). I tried to stick to your naming scheme so everything should work well. 

 

I dunno if you want to use my edits but I attached it here if you do. My scripting is very novice so you might have a better way of adding in the feature. I simply took what you wrote and emulated it to fit the case. Just figured I'd add it here in case you want to use some pieces of it.

**Naming relies on the .json files being named the same as the count, i.e. 1.json, 2.json and so on**

 

 

If you want to use it, the changes/additions are found on lines 47-54, 63-70, 100-131 and finally line 195.

 

With the changes I can simply have a single service account remote in the rclone config. So far everything seems to be working as expected. 

 

Again, @DZMM thank you for all the hard work and @Spladge for the cleaner way of rotation. 

 

Rclone_Upload_With_SA.txt

Edited by watchmeexplode5
Link to comment

You may wish to consider the 400k object limit on teamdrives - this includes files / directories and stuff in the trash.

 

An rclone move into a merged MYDRIVE after upload can get you around that, but it's still another remote, and MYDRIVE doesn't really support service accounts properly.
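A rough sketch of the kind of move I mean (remote names and path are placeholders; --drive-server-side-across-configs keeps the transfer server-side between the two Google Drive remotes instead of re-downloading and re-uploading):

rclone move tdrive_vfs:archive mydrive_vfs:archive --drive-server-side-across-configs --delete-empty-src-dirs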

Link to comment

@DZMM

 

The new script looks good but doesn't work if you use encryption, because you need to edit the service account remote but then upload to the crypt that sits on top of that service account.

 

To edit the service account remote but upload via the crypt, I made these changes (added a ServiceAccountRemote variable):

# REQUIRED SETTINGS
RcloneUploadRemoteName="service_account_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
ServiceAccountRemote="service_account" # Name of Remote which authenticates via service_account.json files

 

# Adjusting service_account_file if using Service Accounts
if [[ $UseServiceAccountUpload == 'Y' ]]; then
	ServiceAccountFile+=$CounterNumber.json
	rclone config update $ServiceAccountRemote service_account_file $ServiceAccountDirectory/$ServiceAccountFile
	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${ServiceAccountRemote} to ${ServiceAccountFile} based on counter ${CounterNumber}."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
fi

This way rclone updates the ServiceAccountRemote instead of the RcloneUploadRemoteName. 
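For context, here's roughly what that pair of remotes looks like in the rclone config (placeholder values, not my real conf): the script updates service_account_file on the plain drive remote, while the upload itself goes through the crypt remote that wraps it.

[service_account]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/sa_accounts/1.json
team_drive = YOUR_TEAM_DRIVE_ID

[service_account_vfs]
type = crypt
remote = service_account:crypt
filename_encryption = standard
password = OBFUSCATED_PASSWORD
password2 = OBFUSCATED_PASSWORD2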

  • Thanks 2
Link to comment
1 hour ago, Spladge said:

You may wish to consider the 400k object limit on teamdrives - this includes files / directories and stuff in the trash.

 

An rclone move into a merged MYDRIVE after upload can get you around that, but it's still another remote, and MYDRIVE doesn't really support service accounts properly.

I use MYDRIVE for my music for this very reason: the 400k object limit.

 

I've got 3 team drives: (i) plex, (ii) home photos/videos and (iii) backup.  The backup teamdrive hit 400K recently so I had to remove some older versions of files, but I'm surviving for now.  I hope my plex tdrive never goes over 400K, as I'm dreading having to split into multiple teamdrives and updating Radarr etc. will be painful..... although I've just realised I could use symlinks so files won't have 'moved' for the dockers, so it shouldn't be too hard.
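If it ever comes to that, the symlink idea would look something like this (paths are entirely hypothetical, just showing the shape of it):

# content physically lives on a second teamdrive mount, but the dockers keep seeing the old path
ln -s /mnt/user/mount_mergerfs/tdrive2_vfs/movies2 /mnt/user/mount_mergerfs/tdrive_vfs/movies2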

Link to comment
34 minutes ago, watchmeexplode5 said:

The new script looks good but doesn't work if you use encryption, because you need to edit the service account remote but then upload to the crypt that sits on top of that service account.

Thanks - I spotted this in testing, but with all my cutting and pasting I somehow didn't post that change.  I need to find a way to work on files locally and sync with github as cutting and pasting is a recipe for disaster.

 

Updated

Link to comment

@DZMM Good additions to your scripts :)

 

Got one question though about the mkdir -p command on line 60: I can't find anywhere else in the upload script where the $RcloneUploadRemoteName folder is created. Am I reading it wrong? It seems files in the $RcloneUploadRemoteName subfolder are being touched without the folder being created first.

Edited by Thel1988
Link to comment
10 minutes ago, Thel1988 said:

@DZMM Good additions to your scripts :)

 

Got one question though about the mkdir -p command on line 60: I can't find anywhere else in the upload script where the $RcloneUploadRemoteName folder is created. Am I reading it wrong? It seems files in the $RcloneUploadRemoteName subfolder are being touched without the folder being created first.

Thanks - another cut and paste error.

 

The script had:

mkdir -p /mnt/user/appdata/other/rclone/$RcloneRemoteName #for script files

I've just updated to:

mkdir -p /mnt/user/appdata/other/rclone/$RcloneUploadRemoteName #for script files

I made this change so users can keep track of what's going on with each upload remote separately from the mount remote.

Link to comment

Thanks, this works great. I still need to adjust the cut command to 4 as it is my fourth field, but really cool simplification of the scripts:

 

find /mnt/user/appdata/other/rclone/upload_user1_vfs/ -name 'counter_*' | cut -d"_" -f3

gives this output:

vfs/counter

Changing the cut command to use field number 4 outputs it correctly:

find /mnt/user/appdata/other/rclone/upload_user1_vfs/ -name 'counter_*' | cut -d"_" -f4

2

 

Link to comment
On 2/10/2020 at 5:20 PM, DZMM said:

@Thel1988 and @watchmeexplode5 I found an error with the counter which messed it up on the first run.   Can you change:

 


	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/$RcloneRemoteName/counter_1
	fi

to:


	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/$RcloneRemoteName/counter_1
		CounterNumber="1"
	fi

 

@Thel1988 have you made the change above?

Link to comment
1 hour ago, DZMM said:

@Thel1988 have you made the change above?

Yep, I cloned it and merged it with my settings, and it is like this.

But it kind of makes sense from my side:

This is the output of my find command (before cut)

/mnt/user/appdata/other/rclone/Upload_user1_vfs/counter_2

Correct me if I'm wrong; I have been reading up on how the cut command works:

 cut -d"_" -f4 : the -d"_" tells it which delimiter to use, and -f tells it which field to output, as in my example above.

field1: /mnt/user/appdata/other/rclone/Upload, field2: user1, field3: vfs/counter, and field4: 2

So in my example it will be -f4
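One way to avoid counting fields at all (just a sketch from me, not from the script) is to always take the last underscore-delimited field, so the underscores in the remote name no longer matter:

find /mnt/user/appdata/other/rclone/Upload_user1_vfs/ -name 'counter_*' | rev | cut -d"_" -f1 | rev   # prints 2 for counter_2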

Link to comment
