Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


26 minutes ago, bobo89 said:

My issue is that on server reboot some of my docker containers seem to boot faster than the rclone mount comes up. For example, Emby keeps giving me "can't find media stream" errors, and from within the docker console no files show up. Restarting the docker containers picks up the mount properly.

Anyone else have this problem? Any way to delay docker startup on array start?


There's an option in the script to enter the dockers you want to start after a successful mount.
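For reference, that's the docker-start variable near the top of the mount script (the container names below are just examples; use your own):

```shell
DockerStart="emby sonarr radarr"   # space-separated container names to start once the mount check passes
```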

On 6/3/2021 at 5:59 PM, T0rqueWr3nch said:

Another few questions myself:

 

1. RcloneCacheShare="/mnt/user0/mount_rclone" - Is there a reason this "Rclone Cache" isn't using the cache and is using spinning rust directly instead? Should this be /mnt/cache/mount_rclone? I saw a similar question asked in the past 99 pages, but never saw a response.

 

2. If we're using VFS caching with rclone mount, why do we need the rclone upload (rclone move) script? I have noticed that sometimes when I make a change, it's transferred immediately (even though the upload script hasn't run yet) and other times, the upload script seems to have to do the work. Any idea why?

 

Thanks.

I was wondering the same. You ever figure it out?

On 6/14/2021 at 5:00 AM, INTEL said:

I was wondering the same. You ever figure it out?

 

So just a follow-up for at least question 1: I DO NOT recommend using /mnt/user0/mount_rclone. I wanted my cache to be a real cache (i.e. to use the Unraid cache drive), but I also wanted to be able to move it to disk if I need to clear up space, so instead I went with /mnt/user/mount_rclone with the mount_rclone share set to use cache.
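In script terms that's just the one line below (the "use cache" part is the mount_rclone share's "Use cache pool" setting in the Unraid GUI, not anything in the script itself):

```shell
RcloneCacheShare="/mnt/user/mount_rclone"   # user share configured to prefer the cache pool
```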

 

As for question 2, I still haven't thoroughly looked into why the upload script is necessary when using rclone mount. I believe the reason is that we're using mergerfs, and when we write new files to the mergerfs directory we're physically writing to the LocalFilesShare branch, not to mount_rclone itself. The upload script is therefore necessary to make sure any new files get uploaded. Pre-existing files, if modified, I'm willing to bet are actually modified within the rclone mount cache and handled directly by rclone mount itself.
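That matches how the mergerfs mount is built: the local branch is listed before the rclone mount, so new writes land on local disk until the upload script moves them. A rough sketch (paths follow this thread's defaults; the option list is abbreviated from memory, so check the actual mount script):

```shell
# Branch order matters: the local share is listed first, and a "first found"
# create policy means new files are created on the first (local) branch.
mergerfs /mnt/user/local/gdrive:/mnt/user/mount_rclone/gdrive_vfs \
  /mnt/user/mount_mergerfs/gdrive_vfs \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
```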


I'm having problems getting the service accounts to automatically rotate. Once the API limit is reached on an account, the counter doesn't seem to update to use the next one. Everything works fine if I change the counter manually.

I also took a look at the code and there doesn't seem to be any feedback mechanism to rotate the account on API errors. However, it seems like others are able to run large jobs without any problems?

 

Can anyone provide some guidance on how to properly get the service accounts to rotate?
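If the script works the way the counter file suggests (an assumption on my part; the names and paths below are made up for illustration), rotation would look roughly like this. Note the counter only advances between runs, which would explain why a single large job never switches accounts mid-transfer:

```shell
#!/bin/bash
# Hypothetical sketch of per-run service-account rotation.
# SA_DIR stands in for the real folder of sa_gdrive1.json ... sa_gdriveN.json.
SA_DIR="/tmp/sa_demo"
COUNTER_FILE="$SA_DIR/counter"
SA_COUNT=3                       # total service-account files available

mkdir -p "$SA_DIR"
[ -f "$COUNTER_FILE" ] || echo 1 > "$COUNTER_FILE"

COUNTER=$(cat "$COUNTER_FILE")
echo "This run would use: sa_gdrive${COUNTER}.json"

# Advance and wrap the counter so the NEXT run uses the next account.
# Nothing here reacts to API errors during the run itself.
echo $(( COUNTER % SA_COUNT + 1 )) > "$COUNTER_FILE"
```

So with this design, hitting the daily quota mid-job wouldn't trigger a switch; only the next scheduled run picks up a fresh account.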

On 6/23/2021 at 1:38 AM, lzrdking71 said:

@T0rqueWr3nch is it possible for you to share the unmount script mentioned below in this forum?

 

https://github.com/BinsonBuzz/unraid_rclone_mount/issues/28#issuecomment-854122090

 

 

If you used the same folders as the OP who wrote the script (edit the paths if you didn't), you can use the following in a new script and set it to run at stop of the array:

 

#!/bin/bash

#######################
### Unmount Script ####
#######################

echo "Unmounting MergerFS"
umount -l /mnt/user/mount_mergerfs/gdrive_vfs
echo "Unmounting Rclone"
umount -l /mnt/user/mount_rclone/gdrive_vfs
echo "$(date "+%d.%m.%Y %T") INFO: ***Finished Cleanup! ***"

exit
 


I managed to set up rclone, but when I try to upload test files to gdrive I always see "Excluded" in the script logs.

 

12.08.2021 00:51:56 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive for gdrive ***
12.08.2021 00:51:56 INFO: *** Starting rclone_upload script for gdrive ***
12.08.2021 00:51:56 INFO: Script not running - proceeding.
12.08.2021 00:51:56 INFO: Checking if rclone installed successfully.
12.08.2021 00:51:56 INFO: rclone installed successfully - proceeding with upload.
12.08.2021 00:51:56 INFO: Uploading using upload remote gdrive
12.08.2021 00:51:56 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2021/08/12 00:51:56 INFO : Starting transaction limiter: max 8 transactions/s with burst 1
2021/08/12 00:51:56 DEBUG : --min-age 10m0s to 2021-08-12 00:41:56.321003203 +0200 SAST m=-599.986105024
2021/08/12 00:51:56 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive" "gdrive:" "--user-agent=gdrive" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,descending" "--min-age" "10m" "--drive-stop-on-upload-limit" "--bwlimit" "07:00,2M 22:00,0 00:00,0" "--bind=" "--delete-empty-src-dirs"]
2021/08/12 00:51:56 DEBUG : Creating backend with remote "/mnt/user/local/gdrive"
2021/08/12 00:51:56 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2021/08/12 00:51:56 DEBUG : Creating backend with remote "gdrive:"
2021/08/12 00:51:56 DEBUG : gdrive: detected overridden config - adding "{y5r0i}" suffix to name
2021/08/12 00:51:56 DEBUG : fs cache: renaming cache item "gdrive:" to be canonical "gdrive{y5r0i}:"
2021/08/12 00:51:56 DEBUG : Media/test.mp4: Excluded
2021/08/12 00:51:57 DEBUG : Google drive root '': Waiting for checks to finish
2021/08/12 00:51:57 DEBUG : Google drive root '': Waiting for transfers to finish
2021/08/12 00:51:57 DEBUG : Media: Removing directory
2021/08/12 00:51:57 DEBUG : Media: Failed to Rmdir: remove /mnt/user/local/gdrive/Media: directory not empty
2021/08/12 00:51:57 DEBUG : Local file system at /mnt/user/local/gdrive: failed to delete 1 directories
2021/08/12 00:51:57 INFO : There was nothing to transfer
2021/08/12 00:51:57 INFO :
Transferred: 0 / 0 Byte, -, 0 Byte/s, ETA -
Deleted: 0 (files), 1 (dirs)
Elapsed time: 1.2s

2021/08/12 00:51:57 DEBUG : 6 go routines active
12.08.2021 00:51:57 INFO: Not utilising service accounts.
12.08.2021 00:51:57 INFO: Script complete
Script Finished Aug 12, 2021 00:51.57

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_upload/log.txt

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/remote" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="10m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="descending" # "ascending" oldest files first, "descending" newest files first

 

# OPTIONAL SETTINGS

# Add name to upload job
JobName="upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""
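For what it's worth, the `Media/test.mp4: Excluded` line in the log above is almost certainly the `--min-age 10m` filter (set via `MinimumAge="10m"`) rather than the `--exclude downloads/**` rule: rclone skips anything modified in the last 10 minutes so half-written files don't get uploaded. A quick self-contained way to see what an age filter passes (demo paths, not real shares):

```shell
# Demo of what --min-age 10m does: only files last modified more than
# 10 minutes ago pass the filter; anything newer gets logged as "Excluded".
DEMO=/tmp/minage_demo
mkdir -p "$DEMO"
touch "$DEMO/new.mp4"                      # just written: rclone would exclude it
touch -d '20 minutes ago' "$DEMO/old.mp4"  # old enough: rclone would move it
find "$DEMO" -type f -mmin +10             # lists only files old enough to upload
```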

6 hours ago, axeman said:

 

Are you waiting at least 10 minutes before trying? 

Yes I did and now it worked.

 

Every time it moves, it obviously deletes the folders under /mnt/user/local/gdrive, but how can I let Syncthing always see a "completed" folder even though it gets deleted after the first import? I currently have it set up to download to another folder, but I want to change it to go to the local folder now.

 

To make things easier, should I try setting up rclone on my seedbox so the torrents upload to gdrive from there? The only issue is that I have a 4TB upload bandwidth limit with ultraseedbox.

 

In Radarr/Sonarr, must I create an rw,slave mount for the mergerfs folder to see the media?

5 hours ago, sheldz8 said:

Yes I did and now it worked.

 

Every time it moves, it obviously deletes the folders under /mnt/user/local/gdrive, but how can I let Syncthing always see a "completed" folder even though it gets deleted after the first import? I currently have it set up to download to another folder, but I want to change it to go to the local folder now.

 

To make things easier, should I try setting up rclone on my seedbox so the torrents upload to gdrive from there? The only issue is that I have a 4TB upload bandwidth limit with ultraseedbox.

 

In Radarr/Sonarr, must I create an rw,slave mount for the mergerfs folder to see the media?

Lots of questions there.

 

i) If you need a folder to stay present that the upload job keeps deleting at the end because it's empty, just add a mkdir in the right section of the script.

ii) Radarr/Sonarr usually need starting AFTER the mount - that's why the script has a section to start dockers once a successful mount has been verified. An alternative is to manually restart the dockers.
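On the rw,slave part of the question above: for container path mappings that point at the mergerfs mount, the usual Unraid advice is an RW/Slave access mode so the container still sees the FUSE mount if it comes up (or remounts) after the container has started. An example mapping (paths are this thread's defaults; the container-side path is arbitrary):

```shell
# Example docker volume mapping for Radarr/Sonarr
-v '/mnt/user/mount_mergerfs/gdrive_vfs':'/media':'rw,slave'
```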

7 minutes ago, DZMM said:

Lots of questions there.

 

i) If you need a folder to stay present that the upload job keeps deleting at the end because it's empty, just add a mkdir in the right section of the script.

ii) Radarr/Sonarr usually need starting AFTER the mount - that's why the script has a section to start dockers once a successful mount has been verified. An alternative is to manually restart the dockers.

Where in the upload script do I use the mkdir command? I'm not sure what you mean by the right section of the script.

 

If I change the command from move to copy or sync, what happens?

17 minutes ago, sheldz8 said:

Where in the upload script do I use the mkdir command? I'm not sure what you mean by the right section of the script.

 

If I change the command from move to copy or sync, what happens?

I can't remember what the copy on GitHub looks like these days, but in my local script I'd add it anywhere after the # remove dummy file section at the end of the script.

 

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

copy = copies files without deleting the source

sync = makes the destination match the source. It's one-way (source to destination) and can delete destination files that no longer exist locally - not a two-way sync.
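Following the mkdir suggestion, the addition could look like this (a hypothetical excerpt, not the actual script; the "completed" path is an example - use whatever folder Syncthing expects):

```shell
#!/bin/bash
# Hypothetical excerpt for the end of the upload script: recreate the folder
# that `rclone move --delete-empty-src-dirs` keeps pruning, so Syncthing
# always finds it after each run.
LOCAL_SHARE="/tmp/local_demo/gdrive"   # stand-in for /mnt/user/local/gdrive
mkdir -p "$LOCAL_SHARE/completed"
echo "$(date "+%d.%m.%Y %T") INFO: recreated $LOCAL_SHARE/completed"
```

`mkdir -p` is deliberate: it succeeds whether or not the folder already exists, so the script never errors if the upload job didn't prune it that run.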


I got an email the other day from Google about updating the security of my APIs. Does this affect the rclone script shared here?

 

excerpt from the email:

 

What do I need to know?

Items that have a Drive API permission with type=domain or type=anyone, where withLink=true (v2) or allowFileDiscovery=false (v3), will be affected by this security update.

In addition to the item ID, your application may now also need a resource key to access these items. Without a resource key, requests for these items may result in a 404 Not Found error (See below for details). Note that access to items that are directly shared with the user or group are not affected.

 

It makes me super confused about what I need to do in the next 25 days to keep my account working... if anything.

1 hour ago, twisteddemon said:

I got an email the other day from Google about updating the security of my APIs. Does this affect the rclone script shared here?

 

excerpt from the email:

 

What do I need to know?

Items that have a Drive API permission with type=domain or type=anyone, where withLink=true (v2) or allowFileDiscovery=false (v3), will be affected by this security update.

In addition to the item ID, your application may now also need a resource key to access these items. Without a resource key, requests for these items may result in a 404 Not Found error (See below for details). Note that access to items that are directly shared with the user or group are not affected.

 

It makes me super confused about what I need to do in the next 25 days to keep my account working... if anything.

I'm not sure. A quick Google turned up a thread on the rclone forum - I'll keep an eye on it.

17 minutes ago, bugster said:

I'm getting the following errors and I'm unable to mount the share.

 

 

Log attached


Did you change the mount commands?

 

This is the error you need to fix "Command mount needs 2 arguments minimum: you provided 0 non flag arguments: []" 

 

If you did change it, you could have left out a \ somewhere in the command, or added a \ on the last line instead of leaving it out.

 

You could also have changed the $command items and left something out.

 

I'm not sure if this post is making sense.

14 hours ago, sheldz8 said:

Did you change the mount commands?

 

This is the error you need to fix "Command mount needs 2 arguments minimum: you provided 0 non flag arguments: []" 

 

If you did change it, you could have left out a \ somewhere in the command, or added a \ on the last line instead of leaving it out.

 

You could also have changed the $command items and left something out.

 

I'm not sure if this post is making sense.

 

I didn't change anything. I rebooted the server and it's now working.


UPDATE: I figured this out after about four hours of re-teaching myself, lol.
Something odd happened in Google Workspace: App Access Control (API) was untrusted. I re-enabled it, then had to run rclone config headless and use my Workspace admin account to get the token and update it. I screwed it up the first time by using my main (old) regular Google (Gmail) account and could see my personal Google Drive in rclone, lol. Using the admin account for Workspace fixed that.

I really appreciate this forum. It gives me the confidence to poke around! I figured I was fine as long as I kept copies of the encryption passwords for the crypt portion and, sure enough, I eventually got it.
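For anyone landing here with the same expired-token error below: `rclone config reconnect` only works against the remote that actually holds the OAuth token, so if your mount points at a crypt remote you need to reauthorize the underlying drive remote instead (hence the "backend doesn't support reconnect" error). On a headless server the usual route is roughly this (the remote name "gdrive" is an example; use your own):

```shell
# On any machine that has a browser and rclone installed:
rclone authorize "drive"
# Log in with the Workspace admin account, then copy the token JSON it prints.

# Back on the headless server, paste that token into the underlying
# drive remote (not the crypt wrapping it):
rclone config
#   e) Edit existing remote -> gdrive -> keep existing settings,
#   answer "n" to auto config, and paste the token when prompted.
```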



Lost my mount three days ago, apparently, and it looks like the token has expired. From the mount script log:

 

 couldn't fetch token - maybe it has expired? - refresh with "rclone config reconnect gdrive{UpdQG}:": oauth2: cannot fetch token: 400 Bad Request
Response: {
"error": "invalid_grant",
"error_description": "Token has been expired or revoked."
}

 

The "rclone config reconnect" command from the log doesn't work; I get:

Error: backend doesn't support reconnect or authorize
Usage:
  rclone config reconnect remote: [flags]

Flags:
  -h, --help   help for reconnect

Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.

2021/08/16 23:40:29 Fatal error: backend doesn't support reconnect or authorize

I'm going to need some detailed help. I set this up a few years ago and it's been cruising along on its own just fine until now.

 

Thanks. 


Hi, I've received this mail from Google. Do I need to do anything with this plugin to avoid losing files?

A security update will be applied to Drive

On September 13, 2021, Drive will apply a security update to make file sharing more secure. This update will change the links used for some files, and may lead to some new file access requests. Access to these files won't change for people who have already viewed them.

What do I need to do?

You can choose to remove this security update in Drive, but removing the update is not recommended and should only be considered for files that are posted publicly. Learn more

After the update is applied, you can avoid new access requests by distributing the updated link to your file(s). Follow instructions in the Drive documentation

Which files will the update be applied to?

See files with the security update

