Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


On 3/19/2020 at 11:45 AM, JohnJay829 said:

Thanks for putting this together - it saves me days of uploads. I have tried to access my encrypted files from a Windows PC using a copy of the rclone config file, but I am unable to find the media folders or files. I originally was using PlexGuide, where I used the same SAs. In my upload script I am using copy instead of move, as I would like to have a copy on both Unraid and my gdrive.

I was able to get this going by making a new config file.

Now I am able to see my encrypted files in Rclone Browser on Windows. Although I can see the files, I am only able to upload without the SAs - when I try using them I get an error:

26.03.2020 12:40:22 INFO: *** Rclone move selected. Files will be moved from /mnt/user/MeJoMediaServer/googledrive_encrypted for googledrive_encrypted ***
26.03.2020 12:40:22 INFO: Checking if rclone installed successfully.
26.03.2020 12:40:22 INFO: rclone installed successfully - proceeding with upload.
26.03.2020 12:40:22 INFO: Counter file found for googledrive_encrypted.
26.03.2020 12:40:22 INFO: Adjusted service_account_file for upload remote googledrive_encrypted to GDSA5.json based on counter 5.
26.03.2020 12:40:22 INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload.
====== RCLONE DEBUG ======
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 2 (retrying may help)
Elapsed time: 3.7s
==========================
26.03.2020 12:40:28 INFO: Created counter_6 for next upload run.
26.03.2020 12:40:28 INFO: Log files scrubbed
rm: cannot remove '/mnt/user/appdata/other/rclone/remotes/googledrive_encrypted/upload_running_daily_upload': No such file or directory
26.03.2020 12:40:28 INFO: Script complete
Script Finished Thu, 26 Mar 2020 12:40:28 -0400

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Upload/log.txt

 

That is where I do see the actual errors.

Link to comment
10 hours ago, veritas2884 said:

@DZMM Just wanted to drop a thank-you note. Finally finished my 26TB upload - 100% cloud-based now. Had 12 simultaneous streams last night with no issues. You're a legend.

You're welcome. What are you going to do with your empty HDDs? I sold mine, even my parity drive, as I don't need it now.

 

12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.

Link to comment
13 hours ago, DZMM said:

You're welcome. What are you going to do with your empty HDDs? I sold mine, even my parity drive, as I don't need it now.

 

12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.

I didn't think about selling them. We are currently on a state-wide lockdown, so I don't think I'll get to a FedEx any time soon. I might also scrap the parity drive to speed up writes.

 

I do have a new issue that popped up. I wanted to start adding 4K TV content, so I created a 4KTV folder in my local media folder. This folder then showed up inside my MergerFS mount, as expected. However, Sonarr and Radarr cannot add the location: even though I can select it, it just doesn't show up as a location after hitting add. It seems like any new folder I create isn't addressable by my Docker containers, even though it is inside the master media folder that is added to their containers with read/write access settings. The odd thing is, I had a folder of some of my daughter's dance recitals that I had shared on Plex for family to see a while ago; I was able to move the files out of there and get Sonarr to add the 4K TV content to that folder instead.

 

So TL;DR: it seems like some kind of permissions issue with new folders created inside the mergerFS structure. Is there a recommended way to create new folder locations so they are addressable by containers?

**UPDATE** I was able to chmod 777 the folder inside the mergerFS mount via the terminal, and now I can add it to Sonarr. Is this expected behavior, or am I creating new folders for new content incorrectly?

Link to comment
5 hours ago, veritas2884 said:

**UPDATE** I was able to chmod 777 the folder inside the mergerFS mount via the terminal, and now I can add it to Sonarr. Is this expected behavior, or am I creating new folders for new content incorrectly?

Something's wrong, as folders created either in /local or /mount_mergerfs should behave like normal folders, i.e. Radarr/Sonarr etc. can add/upgrade/remove files whenever they want.

 

Some apps like Krusader and Plex need to be started after the mount, but that's the only problem I'm aware of. What are your docker mappings and rclone mount options?
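If it turns out to be a permissions problem, it's usually down to the umask on the mount. A hedged example of mount flags that normally let any container write - not necessarily the exact options from this guide's mount script:

# sketch only - remote name and mount point are placeholders
rclone mount \
    --allow-other \
    --umask 000 \
    gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs &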

Link to comment
On 4/1/2020 at 10:59 AM, remedy said:

The upload script doesn't show any progress output for me until the transfer is complete; it just sits at "====== RCLONE DEBUG ======".

 

Any ideas? I'd like to be able to see the live progress.

The reason for this is the Discord notifications. If you don't care about Discord notifications, you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".
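For example, the upload line could look something like this for live progress (a sketch - the real command in the script has many more flags, and the path/remote name here are placeholders):

# -P prints live transfer stats; --stats sets how often they refresh
rclone move /mnt/user/local gdrive_media_vfs: -P --stats 5s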

Link to comment
4 hours ago, senpaibox said:

The reason for this is the Discord notifications. If you don't care about Discord notifications, you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".

Hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P" - same result.

 

Is there no way to get it to output progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead and that didn't work either.

Link to comment

Just thought I would share this little script. It could probably be integrated with DZMM's scripts, but I'm not using all of his scripts.

 

When a mount drops, the mount script should automatically pick it back up, but when this is not possible the dockers will just continue to fill the merger/union folder, making the remount impossible (you get an error that the mount point is not empty). To make sure all dockers using the union are stopped, I made the following script. Just run it every minute as well. When the mount is back, your mount script should start your dockers again.

Just make sure you change the folder paths to match your situation and put in your own dockers.

#!/bin/bash

# Check for the mountcheck file to verify the rclone mount is still up
if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
else
    # Leave a marker so other scripts know the mount dropped
    touch /mnt/user/appdata/other/rclone/mount_disconnected
    echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
    # Stop every docker that writes to the merger/union folder
    docker stop plex nzbget
    # Remove the flag so the mount script restarts the dockers after remounting
    rm /mnt/user/appdata/other/rclone/dockers_started
fi

 

Link to comment
23 hours ago, remedy said:

Hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P" - same result.

 

Is there no way to get it to output progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead and that didn't work either.

Hmm, it works for me, but I am using the scripts on a VPS rather than Unraid. I know User Scripts recently got an update, but I'm not sure if that is stopping scripts from displaying live progress.

Link to comment

Hi, I'm completely new to this, so sorry for my questions - I just want to set it up correctly. I've purchased a G Suite 10€ unlimited account and now want to set up Google Drive and rclone.

What I have done so far:

- Set up G Suite using my own domain.

- Set up my own API key.

- No teams.

- No Google Drive folders yet.

 

What I want is to have two folders:

1. One encrypted folder for Plex.

2. One ordinary folder for backing up pictures (not the main priority here).

 

I have SSHed into Unraid and typed rclone config:

- Typed in the API key

- Blank root folder

- Blank service account

- No advanced config

- No auto config, and authorized through the link

- I'm at teams now. Should I skip that, or what should I do here? Again, I want one encrypted folder for Plex.

 

I'm sorry for the newbie questions, but I really want it to work the right way the first time.

Hope you can help.

 

Link to comment

So you just need to do the next step, which is to create an encrypted remote. I would also recommend setting up Service Accounts if you plan on exceeding 750GB/day. Otherwise, here is an example of what your rclone config should look like if you want an encrypted Plex folder:

[googledrive]
type = drive
client_id = **********
client_secret = **********
scope = drive
token = {"access_token":"**********"}
server_side_across_configs = true

[googledrive_encrypted]
type = crypt
remote = googledrive:encrypted_plex_folder
filename_encryption = standard
directory_name_encryption = true
password = **********
password2 = **********
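If you'd rather skip the interactive wizard, a crypt remote like the one above can also be created from the command line - a sketch using the example names, with hypothetical password values (rclone stores crypt passwords obscured, hence rclone obscure):

# replace 'your-password' and 'your-salt' with your own values
rclone config create googledrive_encrypted crypt \
    remote googledrive:encrypted_plex_folder \
    filename_encryption standard \
    directory_name_encryption true \
    password "$(rclone obscure 'your-password')" \
    password2 "$(rclone obscure 'your-salt')"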

 

Link to comment
On 4/4/2020 at 2:06 PM, senpaibox said:

Hmm, it works for me, but I am using the scripts on a VPS rather than Unraid. I know User Scripts recently got an update, but I'm not sure if that is stopping scripts from displaying live progress.

It worked for me. I removed "--stats 9999m" and used "-P"; "-vvv" did not work for me. Thank you.

Link to comment
1 hour ago, senpaibox said:

So you just need to do the next step, which is to create an encrypted remote. I would also recommend setting up Service Accounts if you plan on exceeding 750GB/day. Otherwise, here is an example of what your rclone config should look like if you want an encrypted Plex folder:


[googledrive]
type = drive
client_id = **********
client_secret = **********
scope = drive
token = {"access_token":"**********"}
server_side_across_configs = true

[googledrive_encrypted]
type = crypt
remote = googledrive:encrypted_plex_folder
filename_encryption = standard
directory_name_encryption = true
password = **********
password2 = **********

 

Thanks for the answer. I have seen some mentions of teams - should I create that or not?

Also, how do I make sure the folder itself is encrypted and not only the rclone transfers?

Thanks for helping.

Link to comment
12 minutes ago, Bjur said:

Thanks for the answer. I have seen some mentions of teams - should I create that or not?

Also, how do I make sure the folder itself is encrypted and not only the rclone transfers?

Thanks for helping.

Yes, you should. As you start to use the gdrive for things beyond media storage (which you will, once you discover how convenient this whole gdrive thing is), you will start to want to separate data for easier organisation, and/or approach the limits of gdrive (yes, even unlimited has limits), and/or control access. That's where team drives come in.

It's better to start using a tdrive now than to wait till you need it and then have to wait for things to be moved about.

 

You can double-check the encryption just by trying to access the files from the gdrive website. The file names should be jumbled up, and if you download a file, you should not be able to make any sense of it.
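You can also sanity-check it from the command line - a quick sketch, borrowing the remote names from the example config earlier in the thread:

# upload a test file through the crypt remote...
echo "encryption test" > /tmp/testfile.txt
rclone copy /tmp/testfile.txt googledrive_encrypted:
# ...the name should look scrambled on the underlying remote...
rclone ls googledrive:encrypted_plex_folder
# ...but readable through the crypt remote
rclone ls googledrive_encrypted: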

Link to comment

So I have now created a team account in the admin module, but it doesn't have any users or folders in its settings.

So should I just go into Google Drive and start creating folders, selecting Team somehow from there?

I can't find any good guides to start with.

Thanks for your assistance.

Link to comment

I am in the early stages of this process and didn't fully understand standard Drive vs Team Drive. I have < 1TB of data currently uploaded, and roughly 4TB pending upload on my local share that I am migrating over. If I were to move my existing stuff into the team drive, would I just need to redo my rclone config with the same names, but this time as a Team Drive? Would my data currently only on my local gdrive share be OK?

Link to comment
7 hours ago, Stinkpickle said:

I am in the early stages of this process and didn't fully understand standard Drive vs Team Drive. I have < 1TB of data currently uploaded, and roughly 4TB pending upload on my local share that I am migrating over. If I were to move my existing stuff into the team drive, would I just need to redo my rclone config with the same names, but this time as a Team Drive? Would my data currently only on my local gdrive share be OK?

I'd move everything to the team drive - as @testdasi said, as soon as you start understanding what you can do with this setup, you'll want a teamdrive. To move:

 

- stop all mounts, dockers etc.

- create a team drive and note the ID - it'll be in the URL

- add your Google account as a user of the teamdrive if needed

- within Google Drive, move all the files from the gdrive folder to the teamdrive folder - ensure all the paths stay the same, i.e. gdrive/crypt/sub-folder --> tdrive/crypt/sub-folder. This should be pretty quick for 1TB

- change your rclone config via the plugin settings page (easiest) to:

 

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
team_drive = Team DRIVE ID
server_side_across_configs = true

if you're setting up service_accounts at the same time. I'd advise doing this if you've got a decent connection and/or will be uploading more than 750GB/day, now or in the future.

 

If not, then change to:

 

[tdrive]
type = drive
client_id = xxxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxxxxxxxxxx
scope = drive
root_folder_id = TEAM DRIVE ID
token = {"access_token":"xxxxxxxxxxxxxxxxxx"}
server_side_across_configs = true

Edit: if you're using gdrive_media_vfs for your decrypted remote, remember to change:

 

remote = gdrive:crypt to remote = tdrive:crypt
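i.e. the crypt remote section would end up looking something like this (a sketch - everything except the remote line stays as it was):

[gdrive_media_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxx
password2 = xxxxxxxxxxxx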

 

Then re-mount, and if all looks OK, start your dockers.

Link to comment

Alright, so I think I have an issue with my service accounts. This is all very new to me, but I did my best to follow the documentation. I followed the instructions to create service accounts from here: https://rclone.org/drive/#1-create-a-service-account-for-example-com. I created 5 of them yesterday; I did it manually because I was unable to get AutoRclone to work on my Windows machine.

 

I put the following command into the terminal, and it seemed to work properly, listing the folders in the root of my Shared Drive:

# rclone -v --drive-impersonate [email protected] lsf gdrive:
Media/
Test/

This is the log from the mount script:

10.04.2020 08:32:26 INFO: Creating local folders.
10.04.2020 08:32:26 INFO: *** Starting mount of remote sgdrive_media
10.04.2020 08:32:26 INFO: Checking if this script is already running.
10.04.2020 08:32:26 INFO: Script not running - proceeding.
10.04.2020 08:32:26 INFO: Mount not running. Will now mount sgdrive_media remote.
10.04.2020 08:32:26 INFO: Recreating mountcheck file for sgdrive_media remote.
2020/04/10 08:32:26 DEBUG : rclone: Version "v1.51.0-151-gfc663d98-beta" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone-beta/.rclone.conf" "copy" "mountcheck" "sgdrive_media:" "-vv" "--no-traverse"]
2020/04/10 08:32:26 DEBUG : Using config file from "/boot/config/plugins/rclone-beta/.rclone.conf"
2020/04/10 08:32:27 DEBUG : mountcheck: Need to transfer - File not found at Destination
2020/04/10 08:32:27 ERROR : mountcheck: Failed to copy: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
2020/04/10 08:32:27 ERROR : Attempt 1/3 failed with 1 errors and: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
2020/04/10 08:32:28 DEBUG : mountcheck: Need to transfer - File not found at Destination
2020/04/10 08:32:29 ERROR : mountcheck: Failed to copy: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
2020/04/10 08:32:29 ERROR : Attempt 2/3 failed with 1 errors and: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
2020/04/10 08:32:29 DEBUG : mountcheck: Need to transfer - File not found at Destination
2020/04/10 08:32:29 ERROR : mountcheck: Failed to copy: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
2020/04/10 08:32:29 ERROR : Attempt 3/3 failed with 1 errors and: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
2020/04/10 08:32:29 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 1 (retrying may help)
Elapsed time: 1.9s

2020/04/10 08:32:29 DEBUG : 10 go routines active
2020/04/10 08:32:29 Failed to copy: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound
10.04.2020 08:32:29 INFO: *** Creating mount for remote sgdrive_media
10.04.2020 08:32:29 INFO: sleeping for 5 seconds
2020/04/10 08:32:30 INFO : Google drive root 'Media': Failed to get StartPageToken: googleapi: Error 403: The attempted action requires shared drive membership., teamDriveMembershipRequired
10.04.2020 08:32:34 INFO: continuing...
10.04.2020 08:32:35 CRITICAL: sgdrive_media mount failed - please check for problems.
Script Finished Fri, 10 Apr 2020 08:32:35 -0500

Edit: Rclone config below. gdrive:Media - Media is the folder residing in my Shared Drive.

[gdrive]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/unraid_sa1.json
team_drive = REMOVED
server_side_across_configs = true

[sgdrive_media]
type = crypt
remote = gdrive:Media
filename_encryption = standard
directory_name_encryption = true
password = REMOVED
password2 = REMOVED

 

Link to comment
26 minutes ago, Stinkpickle said:

2020/04/10 08:32:27 DEBUG : mountcheck: Need to transfer - File not found at Destination
2020/04/10 08:32:27 ERROR : mountcheck: Failed to copy: failed to make directory: googleapi: Error 404: File not found: REMOVED., notFound

The mountcheck file either doesn't exist in the source folder to be copied to gdrive, or the target folder doesn't exist.
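A couple of quick checks you could run (hypothetical commands, using the remote names from your post):

# is the team drive visible to this service account at all?
rclone lsd gdrive:
# does the folder the crypt remote points at exist?
rclone lsd gdrive:Media

The 403 teamDriveMembershipRequired near the end of your log also suggests the service account itself hasn't been added as a member of the Shared Drive.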

Link to comment

I am looking for a clean way to migrate my movie collection without causing issues. I don't have enough free space on my array to duplicate my Movie directory into the Google Drive mergerfs share.

 

I was going to point the Local directory for these scripts at my current Media share on Unraid - would that be an OK way to upload my files without moving them?

 

I noticed the directory structure ends up with the RcloneRemoteName embedded twice.

 

My current mount/upload scripts look like this:

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="sgdrive_media" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/sgdrive_media_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/sgdrive_media" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="PlexMediaServer NZBGet sonarr radarr radarr4k ombi tautulli" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/sgdrive_media_local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"Movies,Movies-UHD,Music,Other,Personal,TV,TV-Kids"\} # comma separated list of folders to create within the mount
#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="sgdrive_media" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="sgdrive_media" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/sgdrive_media_local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/sgdrive_media_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

And I noticed this resulted in the following share directories:

 

 /mnt/user/sgdrive_media_local/sgdrive_media

 /mnt/user/sgdrive_media_rclone/sgdrive_media

 /mnt/user/sgdrive_media/sgdrive_media
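Presumably this is because the scripts append RcloneRemoteName to each of the share paths - a hypothetical reconstruction for illustration, not the actual script code:

# derived locations (hypothetical variable names)
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"       # /mnt/user/sgdrive_media_local/sgdrive_media
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName"     # /mnt/user/sgdrive_media_rclone/sgdrive_media
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # /mnt/user/sgdrive_media/sgdrive_media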

 

My current Media share is:

 

/mnt/user/Media/

My concern is that if I point my local directory at that, it's going to look for /mnt/user/Media/sgdrive_media/, which does not exist / is empty.

 

What is the best way to accomplish this?

 

Thank you.

Link to comment
