Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


4 hours ago, X672 said:

Hello again :)

 

Just wondering if there is an easier way, given I already have a shared folder in Unraid with about 4000 folders, other than copying them into the "\mount_mergerfs\gdrive_media_vfs" folder to make them upload to gdrive?

 

I used MC and went to each drive and created a \local\ folder, then moved any movies from whatever path they were at into the \local\ folder, so that you're moving within the same disk.
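
If you'd rather script it than click through MC, something like this works - a rough sketch, assuming your movies sit in a Movies folder at the top of each disk (adjust the paths to your layout):

#!/bin/bash
# Move movies into each disk's local folder. An intra-disk mv is a
# near-instant rename; going through /mnt/user can turn into a slow
# copy+delete if Unraid places the target on a different disk.
for disk in /mnt/disk[0-9]*; do
    if [ -d "$disk/Movies" ]; then
        mkdir -p "$disk/local/gdrive_media_vfs/movies"
        mv "$disk/Movies/"* "$disk/local/gdrive_media_vfs/movies/"
    fi
done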


Hi

 

A quick question about performance with Docker and mergerfs. On GitHub it says dockers should be linked directly to the /mnt/user share for best performance with mergerfs. Isn't it a bit risky to give dockers full access to all files on Unraid this way?

Is the performance much worse if it's linked to a subfolder in the "union" directory of mergerfs?

 

 

Cheers

Simon

On 6/1/2021 at 8:03 AM, Symon said:

Hi

 

A quick question about performance with Docker and mergerfs. On GitHub it says dockers should be linked directly to the /mnt/user share for best performance with mergerfs. Isn't it a bit risky to give dockers full access to all files on Unraid this way?

Is the performance much worse if it's linked to a subfolder in the "union" directory of mergerfs?

 

 

Cheers

Simon

 

I'm wondering the same thing, and I'm wondering how much of this is dogma. I also don't know why it would need the root /mnt/user instead of the more conservative /mnt/user/merger_fs (which I would still not be a fan of, since I assume most of us are going to be compartmentalizing that directory further).

 

My only guess as to why there might be a difference is if rclone can share a connection when the root directory is mounted, instead of having to re-establish a connection to the cloud service for each individual subdirectory request - but I readily admit ignorance of rclone's mechanisms.
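
One mechanism I can point at, though it's about mount visibility rather than speed (my assumption, not something from the guide): FUSE mounts created after a container starts only show up inside it if the bind mount uses slave propagation, and mapping the parent share makes that propagation cover a remounted mergerfs path too. Roughly:

# Hypothetical example - Unraid's "RW/Slave" access mode is the :slave here.
# With slave propagation on the parent, a mergerfs mount that comes up (or is
# remounted) after the container starts still appears inside it; a plain bind
# of the mountpoint taken before the mount exists just shows an empty folder.
docker run -d --name emby \
  -v /mnt/user:/unraid:rw,slave \
  emby/embyserver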


Another few questions myself:

 

1. RcloneCacheShare="/mnt/user0/mount_rclone" - Is there a reason this "Rclone Cache" isn't using the cache drive and is instead using spinning rust directly? Should this be /mnt/cache/mount_rclone? I saw a similar question asked in the past 99 pages, but never saw a response.

 

2. If we're using VFS caching with rclone mount, why do we need the rclone upload (rclone move) script? I have noticed that sometimes when I make a change, it's transferred immediately (even though the upload script hasn't run yet) and other times, the upload script seems to have to do the work. Any idea why?

 

Thanks.

Edited by T0rqueWr3nch

I would appreciate some help from anyone. I'm not good with rclone, but I'd like to use it with Google Drive. I followed SpaceInvader One's 4-year-old video, but things have changed a lot since then. I've got my remotes set up like in the video: a "google" one and a "secure" (encrypted) one. I found the GitHub scripts for mounting, unmounting and uploading, but I have no idea how to change them to fit my needs. I want to first copy my Plex folder up, and then have Plex use the cloud files.

Here's my mount part that I edited:

 

RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data

RcloneMountShare="/mnt/user/google" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone

RcloneMountDirCacheTime="720h" # rclone dir cache time

LocalFilesShare="/mnt/user/Plex" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable

RcloneCacheShare="/mnt/user0/google" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone

RcloneCacheMaxSize="400G" # Maximum size of rclone cache

RcloneCacheMaxAge="336h" # Maximum age of cache files

MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable

DockerStart="sabnzbd plex binhex-sonarr binhex-radarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv,scary movies,kids movies,4k movies,4k tv series"\} # comma separated list of folders to create within the mount

 

 

 

Here's my upload script, which I know isn't right:

RcloneCommand="copy" # choose your rclone command e.g. move, copy, sync

RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'.

RcloneUploadRemoteName="secure" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.

LocalFilesShare="/mnt/user/Plex" # location of the local files without trailing slash you want to rclone to use

RcloneMountShare="/mnt/user/google" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone

MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y

ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

6 hours ago, francrouge said:

Hi all 

 

Quick question: has anyone been able to mount 2 gdrives, either with 2 separate scripts or in the same one?

 

I want to access 2 gdrive accounts, but I can't - I'm always getting errors about the port already being bound.

 

 

thx

 

Try adding this to your script under "# Add extra commands or filters" and restart your server :)

Command2="--rc-addr 127.0.0.1:0"
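
That works because each mount starts rclone's remote-control listener (which is what's colliding), and two mounts can't both bind the default port; 127.0.0.1:0 tells the OS to pick any free port. So a second gdrive is just a copy of the mount script with its own remote name plus that line (names below are examples):

RcloneRemoteName="gdrive2_vfs" # second remote, its own name
RcloneMountShare="/mnt/user/mount_rclone" # same parent share is fine
Command2="--rc-addr 127.0.0.1:0" # port 0 = OS picks a free port, no clash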

 


My issue is that on server reboot, some of my docker containers boot faster than the rclone mount comes up. For example, Emby keeps giving me "can't find media stream" errors, and from within the docker console no files show up. Restarting the docker containers picks up the mount properly.

Anyone else have this problem? Any way to delay docker startup on array start?


26 minutes ago, bobo89 said:

My issue is that on server reboot, some of my docker containers boot faster than the rclone mount comes up. For example, Emby keeps giving me "can't find media stream" errors, and from within the docker console no files show up. Restarting the docker containers picks up the mount properly.

Anyone else have this problem? Any way to delay docker startup on array start?

 

There's an option in the script to enter dockers to start after a successful mount
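
It's the DockerStart line in the mount script (container names here are just examples):

DockerStart="emby sonarr radarr" # list of dockers, separated by space, to start once mergerfs mount verified

Remember to disable autostart for those containers on the Docker settings page, otherwise they'll still race the mount at boot.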

On 6/3/2021 at 5:59 PM, T0rqueWr3nch said:

Another few questions myself:

 

1. RcloneCacheShare="/mnt/user0/mount_rclone" - Is there a reason this "Rclone Cache" isn't using the cache drive and is instead using spinning rust directly? Should this be /mnt/cache/mount_rclone? I saw a similar question asked in the past 99 pages, but never saw a response.

 

2. If we're using VFS caching with rclone mount, why do we need the rclone upload (rclone move) script? I have noticed that sometimes when I make a change, it's transferred immediately (even though the upload script hasn't run yet) and other times, the upload script seems to have to do the work. Any idea why?

 

Thanks.

I was wondering the same. You ever figure it out?

On 6/14/2021 at 5:00 AM, INTEL said:

I was wondering the same. You ever figure it out?

 

So just a follow-up for at least question 1: I DO NOT recommend using /mnt/user0/mount_rclone. I wanted my cache to be a real cache (i.e. to use the Unraid cache drive), but I also wanted to be able to move it to a disk if I need to free up space, so instead I went with /mnt/user/mount_rclone, with the mount_rclone share set to use the cache.

 

As for question 2, I still haven't thoroughly looked into why the upload script is necessary when using rclone mount. I believe the reason is that we're using mergerfs, and when we write new files to the mergerfs directory, we're physically writing to the LocalFilesShare branch and not to mount_rclone itself. The upload script is therefore necessary to make sure any new files get uploaded. Any pre-existing files, if modified, I'm willing to bet are actually modified within the rclone mount cache and handled directly by rclone mount itself.
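
That would fit how the mergerfs mount is assembled - roughly like this (a sketch with simplified options, not the script's exact line): the local share is the first branch and the create policy is "first found" (ff), so every new file physically lands on the local branch, and only the upload script ever moves it to the remote.

mergerfs \
  /mnt/user/local/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs \
  /mnt/user/mount_mergerfs/gdrive_media_vfs \
  -o rw,allow_other,category.create=ff,cache.files=partial,dropcacheonclose=true

Files that already exist on the remote, on the other hand, are opened through the rclone branch and its VFS cache, which would explain the changes that seem to upload by themselves.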


I'm having problems getting the service accounts to rotate automatically. Once the API limit is reached on an account, the counter doesn't seem to update to use the next one. Everything works fine if I change the counter manually.

I also took a look at the code and there doesn't seem to be any feedback mechanism to iterate the account on api errors. However it seems like others are able to run large jobs without any problems?

 

Anyone able to provide some guidance on how to properly get the service accounts to iterate?


Is there a simple way to mount more than one gdrive? I have 3 teamdrives that I would like to mount on the same box. Should I simply copy the script and modify it for the new mount? I feel like this is not the way to go - or is it?

On 6/23/2021 at 1:38 AM, lzrdking71 said:

@T0rqueWr3nch is it possible for you to share the unmount script mentioned below in this forum?

 

https://github.com/BinsonBuzz/unraid_rclone_mount/issues/28#issuecomment-854122090

 

 

If you used the same folders as the OP who wrote the script (edit the paths if you didn't), you can use the following in a new script and set it to run at stop of array:

 

#!/bin/bash

#######################
### Unmount Script ####
#######################

echo "Unmounting MergerFS"
umount -l /mnt/user/mount_mergerfs/gdrive_vfs
echo "Unmounting Rclone"
umount -l /mnt/user/mount_rclone/gdrive_vfs
echo "$(date "+%d.%m.%Y %T") INFO: ***Finished Cleanup! ***"

exit
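
If you want it a little more defensive, the same idea with a check that each path is actually mounted (same example paths as above):

#!/bin/bash
# Only unmount paths that are really mountpoints, so a clean stop
# doesn't spray errors for mounts that never came up.
for m in /mnt/user/mount_mergerfs/gdrive_vfs /mnt/user/mount_rclone/gdrive_vfs; do
    if mountpoint -q "$m"; then
        echo "Unmounting $m"
        umount -l "$m"
    fi
done
echo "$(date "+%d.%m.%Y %T") INFO: ***Finished Cleanup! ***"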
 

Edited by twisteddemon

I managed to set up rclone, but when I try to upload test files to gdrive I always see "Excluded" in the script logs:

 

12.08.2021 00:51:56 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive for gdrive ***
12.08.2021 00:51:56 INFO: *** Starting rclone_upload script for gdrive ***
12.08.2021 00:51:56 INFO: Script not running - proceeding.
12.08.2021 00:51:56 INFO: Checking if rclone installed successfully.
12.08.2021 00:51:56 INFO: rclone installed successfully - proceeding with upload.
12.08.2021 00:51:56 INFO: Uploading using upload remote gdrive
12.08.2021 00:51:56 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2021/08/12 00:51:56 INFO : Starting transaction limiter: max 8 transactions/s with burst 1
2021/08/12 00:51:56 DEBUG : --min-age 10m0s to 2021-08-12 00:41:56.321003203 +0200 SAST m=-599.986105024
2021/08/12 00:51:56 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive" "gdrive:" "--user-agent=gdrive" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,descending" "--min-age" "10m" "--drive-stop-on-upload-limit" "--bwlimit" "07:00,2M 22:00,0 00:00,0" "--bind=" "--delete-empty-src-dirs"]
2021/08/12 00:51:56 DEBUG : Creating backend with remote "/mnt/user/local/gdrive"
2021/08/12 00:51:56 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2021/08/12 00:51:56 DEBUG : Creating backend with remote "gdrive:"
2021/08/12 00:51:56 DEBUG : gdrive: detected overridden config - adding "{y5r0i}" suffix to name
2021/08/12 00:51:56 DEBUG : fs cache: renaming cache item "gdrive:" to be canonical "gdrive{y5r0i}:"
2021/08/12 00:51:56 DEBUG : Media/test.mp4: Excluded
2021/08/12 00:51:57 DEBUG : Google drive root '': Waiting for checks to finish
2021/08/12 00:51:57 DEBUG : Google drive root '': Waiting for transfers to finish
2021/08/12 00:51:57 DEBUG : Media: Removing directory
2021/08/12 00:51:57 DEBUG : Media: Failed to Rmdir: remove /mnt/user/local/gdrive/Media: directory not empty
2021/08/12 00:51:57 DEBUG : Local file system at /mnt/user/local/gdrive: failed to delete 1 directories
2021/08/12 00:51:57 INFO : There was nothing to transfer
2021/08/12 00:51:57 INFO :
Transferred: 0 / 0 Byte, -, 0 Byte/s, ETA -
Deleted: 0 (files), 1 (dirs)
Elapsed time: 1.2s

2021/08/12 00:51:57 DEBUG : 6 go routines active
12.08.2021 00:51:57 INFO: Not utilising service accounts.
12.08.2021 00:51:57 INFO: Script complete
Script Finished Aug 12, 2021 00:51.57

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_upload/log.txt

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/remote" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="10m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="descending" # "ascending" oldest files first, "descending" newest files first

 

# OPTIONAL SETTINGS

# Add name to upload job
JobName="upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

Edited by sheldz8
6 hours ago, axeman said:

 

Are you waiting at least 10 minutes before trying? 

Yes I did and now it worked.

 

Every time it moves, it obviously deletes the folders under /mnt/user/local/gdrive - but how can I make Syncthing always see a "completed" folder even though it gets deleted after the first import? I currently have it set up to download to another folder, but I want to change it to go to the local folder now.

 

To make things easier, should I set up rclone on my seedbox so the torrents upload to gdrive from there? The only issue is that I have a 4TB upload bandwidth limit with ultraseedbox.

 

For Radarr/Sonarr, do I need to create an RW/Slave mount for the mergerfs folder so they can see the media?

Edited by sheldz8
5 hours ago, sheldz8 said:

Yes I did and now it worked.

 

Every time it moves, it obviously deletes the folders under /mnt/user/local/gdrive - but how can I make Syncthing always see a "completed" folder even though it gets deleted after the first import? I currently have it set up to download to another folder, but I want to change it to go to the local folder now.

 

To make things easier, should I set up rclone on my seedbox so the torrents upload to gdrive from there? The only issue is that I have a 4TB upload bandwidth limit with ultraseedbox.

 

For Radarr/Sonarr, do I need to create an RW/Slave mount for the mergerfs folder so they can see the media?

lots of questions there.

 

i) If you need a folder to be present that the upload job deletes at the end because it's empty, just add a mkdir to the right section of the script.

ii) Radarr/Sonarr usually need starting AFTER the mount - that's why the script has a section to start dockers once a successful mount has been verified. An alternative is to restart the dockers manually.

7 minutes ago, DZMM said:

lots of questions there.

 

i) If you need a folder to be present that the upload job deletes at the end because it's empty, just add a mkdir to the right section of the script.

ii) Radarr/Sonarr usually need starting AFTER the mount - that's why the script has a section to start dockers once a successful mount has been verified. An alternative is to restart the dockers manually.

Where in the upload script do I use the mkdir command? I'm not sure what you mean by "the right section" of the script.

 

If I change the command from move to copy or sync, what happens?

17 minutes ago, sheldz8 said:

Where in the upload script do I use the mkdir command? I'm not sure what you mean by "the right section" of the script.

 

If I change the command from move to copy or sync, what happens?

I can't remember what the copy on GitHub looks like these days, but in my local script I'd add it anywhere after the "# remove dummy file" line at the end of the script.

 

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

copy = copies files without deleting the source

sync = makes the destination match the source, adding new or changed files and deleting anything on the destination that isn't in the source. Note it's one-way (source to destination), not a two-way sync.
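
So for the Syncthing case above, the end of the upload script would look something like this (the mkdir path is just an example - use whatever folder Syncthing expects):

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
# recreate the folder the move just emptied and deleted
mkdir -p /mnt/user/local/gdrive/completed
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"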

