Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

On 11/4/2020 at 1:40 PM, francrouge said:

Hi all

Can someone explain how I can link my downloaded files so they get uploaded to Drive while I'm still able to seed them?

I'm a bit lost with all the configs.

For now my upload script seems to work and my mounting script also

Thx

Sent from my Pixel 2 XL using Tapatalk
 

 

If you use the mergerfs version of the scripts, which supports hardlinks, this is all taken care of, i.e. the files stay local until you remove them from your torrent client.
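To see why hardlinks make this work, here's a toy sketch using a temp directory (all paths are hypothetical; the real scripts rely on your torrent and local shares being on the same filesystem so Sonarr/Radarr can hardlink on import):

```shell
# Toy demo: a hardlinked "import" means deleting the torrent copy
# does not delete the data - the imported name still points at it.
work=$(mktemp -d)
mkdir -p "$work/torrents" "$work/local/media"
echo "payload" > "$work/torrents/film.mkv"

# Sonarr/Radarr-style import via hardlink: same data, two names, no extra space
ln "$work/torrents/film.mkv" "$work/local/media/film.mkv"

# Removing the torrent copy (i.e. deleting from your client) leaves the
# imported copy in place for the upload script to move to the cloud later
rm "$work/torrents/film.mkv"
test -f "$work/local/media/film.mkv" && echo "still local, ready to upload"
```

So seeding and the eventual cloud upload never fight over the same name: the upload script only ever touches the local/media copy.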

Link to comment
On 11/1/2020 at 10:07 PM, animeking said:

01.11.2020 15:58:34 INFO: rclone not installed - will try again later.

 

The upload script checks for the presence of the mountcheck file that the mount script creates. That check is failing - verify the mount succeeded and/or that your remote names match.
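The check being described is roughly this (a sketch only - the real script derives the path from your RcloneMountShare and RcloneRemoteName settings, and the exact log wording differs between script versions):

```shell
# Sketch of the mountcheck test the upload script performs.
# In the real script this would be e.g. /mnt/user/mount_rclone/gdrive_media_vfs
RcloneMountLocation=$(mktemp -d)

# The mount script touches this file after a successful mount
touch "$RcloneMountLocation/mountcheck"

if [ -f "$RcloneMountLocation/mountcheck" ]; then
    status="mounted - safe to upload"
else
    status="mount missing - will try again later"
fi
echo "$status"
```

If your remote names don't match between the two scripts, the upload script looks in the wrong directory, never finds mountcheck, and bails with that "will try again later" message.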

Link to comment
1 hour ago, DZMM said:

Good work!  In the scripts you can set bwlimits and schedules to fit your connection/usage.  If you've only got 4.375MB/s, I would recommend scheduling that full speed for overnight only.

 

If you're really trying to squeeze out a few more MB/s, you can try playing around with --drive-chunk-size etc to see if that helps.
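As a sketch, the tuning flags mentioned look something like this (values here are hypothetical examples, not recommendations - larger chunks can raise upload throughput but rclone buffers up to --drive-chunk-size of RAM per transfer):

```shell
# Hypothetical upload tuning - adjust to your RAM and connection
chunk="512M"
upload_flags="--drive-chunk-size $chunk --buffer-size 256M --transfers 4"

# Example invocation (remote name is illustrative):
echo "rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: $upload_flags"
```

With 4 transfers and a 512M chunk size you can hold roughly 2GB in flight, so scale the chunk down if the box is short on memory.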

 

 

So if I would like to adjust the drive-chunk-size, I do this in the rclone_mount and rclone_upload scripts?

This will then override what I have in the rclone config when I created the gdrive remotes?

Link to comment
1 hour ago, Ericsson said:

So if I would like to adjust the drive-chunk-size, I do this in the rclone_mount and rclone_upload scripts?

This will then override what I have in the rclone config when I created the gdrive remotes?

 
 

Just the upload script, if it's upload speed you're adjusting.  Whatever you pass on the command line, i.e. in the scripts, overrides the settings in the rclone config file.
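The precedence works like this toy illustration (values hypothetical): the config file stores a default for the remote, and a flag on the script's command line wins when both are present.

```shell
# Toy model of "command line overrides config" for a single setting
config_chunk="128M"   # what was saved in rclone.conf when the remote was made
cli_chunk="512M"      # what the upload script passes via --drive-chunk-size

# If a CLI value is set, it wins; otherwise fall back to the config value
effective="${cli_chunk:-$config_chunk}"
echo "effective chunk size: $effective"
```

So there's no need to re-run `rclone config` to experiment - just change the flag in the upload script and the config value is ignored for that run.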

Link to comment
On 10/14/2020 at 2:07 AM, DZMM said:

We've never been able to pinpoint the problem as it seems intermittent.  I haven't had problems for a few months now.

I've been having this issue too. It goes away if I manually kill the rclone script, the rcloneorig process, and the mergerfs process. Not really sure why, or how safe it is to just kill those processes.

Edited by M1kep_
Link to comment

@DZMM Just wanted to say thanks for those scripts! They've been working perfectly for 2 months on my cloud server with Radarr, Sonarr etc... I run the upload script every 2 minutes so I don't need to wait when a movie is being imported! I've filled my gdrive with 20TB (~1300 movies) worth of movies thanks to you, all automated with Traktarr.

Edited by Lucka
  • Like 1
Link to comment
On 11/4/2020 at 3:11 PM, MowMdown said:

 

I just run this nightly at 3am using user scripts, super simple. (obviously I don't have a folder named "files" but you can use your imagination)


rclone move /mnt/user/media/files crypt:files -v --delete-empty-src-dirs --fast-list --drive-stop-on-upload-limit --order-by size,desc

I have a single 500GB drive that I fill up with whatever I want to be moved to the cloud and that small script does it.

Sorry if this is a dumb question (maybe my imagination isn't working)... but isn't /media the mount where your union is? So how would rclone know where the "local" version is? Or does it basically traverse the entire folder to see if there are differences between crypt:files and /mnt/user/media/files?

Link to comment

@Lucka glad you got them all working without any hiccups.  When they run smoothly in the background it is really good, if you have enough bandwidth.  It's saved me thousands of pounds in storage and a fair chunk in electricity costs from fewer HDDs spinning.

 

Using traktarr is a good addition ;-)

Link to comment

@axeman, no - if you pay close attention to the path in that move command, I'm using "/mnt/user/media", not "/mnt/disks/media" (I don't mount to /user/).

 

My rclone mount is under "/mnt/disks/media" so it does not interfere with the move. I'm essentially moving the files from /mnt/user/media to the "crypt:media" mount, but as far as unraid is concerned the file isn't actually moving, since no matter where I put the file it always shows up in /mnt/disks/media.
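A toy model of that layout, under a temp dir (all names hypothetical): two separate branches stand in for the local share and the cloud remote, and a merged listing stands in for the union mount at /mnt/disks/media.

```shell
# Toy model: moving a file between branches doesn't change the union view
root=$(mktemp -d)
mkdir -p "$root/local_media" "$root/cloud_media"
echo "data" > "$root/local_media/movie.mkv"

# Stand-in for the union mount: list filenames across both branches
union_ls() { find "$root/local_media" "$root/cloud_media" -type f -printf '%f\n' | sort -u; }

union_ls    # movie.mkv - currently served from the local branch
mv "$root/local_media/movie.mkv" "$root/cloud_media/"   # stand-in for `rclone move`
union_ls    # movie.mkv - same name, now served from the cloud branch
```

Because the move runs against the branch path rather than the union path, the union never sees the file disappear - which is exactly why the mount doesn't interfere.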

Link to comment
10 hours ago, MowMdown said:

@axeman, no - if you pay close attention to the path in that move command, I'm using "/mnt/user/media", not "/mnt/disks/media" (I don't mount to /user/).

 

My rclone mount is under "/mnt/disks/media" so it does not interfere with the move. I'm essentially moving the files from /mnt/user/media to the "crypt:media" mount, but as far as unraid is concerned the file isn't actually moving, since no matter where I put the file it always shows up in /mnt/disks/media.

Thanks I noticed that - but don't really understand the difference. I thought perhaps you just typed it as an example of your script. 

 

 

Link to comment
On 9/8/2020 at 4:41 PM, DZMM said:

Can I get some help testing please.  v1.53 of rclone (remember you have to remove and reinstall the plugin to update it) now supports better caching, where files can be cached locally.  I'll add a variable in for setting the cache location once it's all working, but for now can a few people try these settings in the mount script:

 


# create rclone mount
	rclone mount \
	--allow-other \
	--dir-cache-time 720h \
	--log-level INFO \
	--poll-interval 15s \
	--cache-dir=/mnt/user/downloads/rclone/tdrive_vfs/cache \
	--vfs-cache-mode full \
	--vfs-cache-max-size 500G \
	--vfs-cache-max-age 336h \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

Set the cache-dir to wherever is convenient.   The settings above will keep up to 500GB of files downloaded from gdrive for up to 2 weeks, with the oldest removed first when full.  I think this will work well with my kids, who keep stopping and starting the same file, or when Plex is indexing or doing other operations.  However, I don't think it will help majorly with playback for my setup, unless a user tries to open the same file within a few hours.  Dunno.

 

There's another new setting --vfs-read-ahead that could potentially help with forward skipping/smoother playback by downloading more data ahead of the current stream position, that we can play with as well.

 

Edit: poll-interval shortens the default 1m, so should hopefully add a bit more butter to updates.

 

Edit 2: Initial launch times are much faster, even before the cache kicks in!!

I've just updated the mount script to support local file caching.  In my experience this has vastly improved the playback experience and reduced transfer, and is definitely worth an upgrade.  To utilise it you need to be on rclone v1.53+.

 

The new toggles to set are in the REQUIRED SETTINGS block:

 

RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files

I use /user0 as my location as I have 7 teamdrives mounted, so I don't have enough space on my SSD.  Choose wherever works for you.

 

https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount

Edited by DZMM
  • Thanks 1
Link to comment

Hey! So I've got a problem, most likely because I'm new to unraid lol.

So I got the upload script to work as expected, files are passed to gdrive with encryption, then removed from local. 

My problem is with my mount script I assume. 

In the console running rclone lsd gdrive: or running rclone lsd gdrive_crypt:  returns all of my folders as expected. However when I run my mount script the folder is empty.


Here is my config stuff:


[gdrive]
type = drive
client_id = *****
client_secret = *****
scope = drive
token = {"access_token":*****"}

 

[gdrive_crypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = ****
password2 = ******

 

and here is my mount script

 

RcloneRemoteName="gdrive_crypt" 
RcloneMountShare="/mnt/user/the_stuff/gdrive" 
RcloneMountDirCacheTime="720h"
LocalFilesShare="/mnt/user/the_stuff/local"
RcloneCacheShare="/mnt/user0/the_stuff/gdrive" 
RcloneCacheMaxSize="400G" 
RcloneCacheMaxAge="336h" 
MergerfsMountShare="/mnt/user/mount_mergerfs" 
DockerStart="nzbget plex sonarr radarr ombi" 

 

here is the output of it


(

Script location: /tmp/user.scripts/tmpScripts/rclone_mount_plugin/script
Note that closing this window will abort the execution of this script
12.11.2020 19:00:00 INFO: Creating local folders.
12.11.2020 19:00:00 INFO: Creating MergerFS folders.
12.11.2020 19:00:00 INFO: *** Starting mount of remote gdrive_crypt
12.11.2020 19:00:00 INFO: Checking if this script is already running.
12.11.2020 19:00:00 INFO: Script not running - proceeding.
12.11.2020 19:00:00 INFO: *** Checking if online
12.11.2020 19:00:02 PASSED: *** Internet online
12.11.2020 19:00:02 INFO: Success gdrive_crypt remote is already mounted.
12.11.2020 19:00:02 INFO: Check successful, gdrive_crypt mergerfs mount in place.
12.11.2020 19:00:02 INFO: Starting dockers.
Error response from daemon: No such container: nzbget
Error response from daemon: No such container: plex
Error response from daemon: No such container: sonarr
Error response from daemon: No such container: radarr
Error response from daemon: No such container: ombi
Error: failed to start containers: nzbget, plex, sonarr, radarr, ombi
12.11.2020 19:00:02 INFO: Script complete

 

and here is my upload script 

 

 


# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_crypt" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_crypt" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/the_stuff/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/the_stuff/gdrive" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first


(it outputs this:

12.11.2020 19:23:06 INFO: *** Rclone move selected. Files will be moved from /mnt/user/the_stuff/local/gdrive_crypt for gdrive_crypt ***
12.11.2020 19:23:06 INFO: *** Starting rclone_upload script for gdrive_crypt ***
12.11.2020 19:23:06 INFO: Script not running - proceeding.
12.11.2020 19:23:06 INFO: Checking if rclone installed successfully.
12.11.2020 19:23:06 INFO: rclone installed successfully - proceeding with upload.
12.11.2020 19:23:06 INFO: Uploading using upload remote gdrive_crypt
12.11.2020 19:23:06 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2020/11/12 19:23:06 DEBUG : --min-age 15m0s to 2020-11-12 19:08:06.429286925 -0800 PST m=-899.987925388
2020/11/12 19:23:06 DEBUG : rclone: Version "v1.53.2" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/the_stuff/local/gdrive_crypt" "gdrive_crypt:" "--user-agent=gdrive_crypt" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2020/11/12 19:23:06 DEBUG : Creating backend with remote "/mnt/user/the_stuff/local/gdrive_crypt"
2020/11/12 19:23:06 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/11/12 19:23:06 INFO : Starting bandwidth limiter at 12MBytes/s
2020/11/12 19:23:06 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/11/12 19:23:06 DEBUG : Creating backend with remote "gdrive_crypt:"
2020/11/12 19:23:06 DEBUG : Creating backend with remote "gdrive:crypt"
2020/11/12 19:23:06 DEBUG : Google drive root 'crypt': root_folder_id = "****" - save this in the config to speed up startup
2020/11/12 19:23:06 DEBUG : downloads: Excluded
2020/11/12 19:23:07 DEBUG : Encrypted drive 'gdrive_crypt:': Waiting for checks to finish
2020/11/12 19:23:07 DEBUG : Encrypted drive 'gdrive_crypt:': Waiting for transfers to finish
2020/11/12 19:23:07 DEBUG : tv: Removing directory
2020/11/12 19:23:07 DEBUG : movies: Removing directory
2020/11/12 19:23:07 DEBUG : Local file system at /mnt/user/the_stuff/local/gdrive_crypt: deleted 2 directories
2020/11/12 19:23:07 INFO : There was nothing to transfer
2020/11/12 19:23:07 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0.7s

2020/11/12 19:23:07 DEBUG : 7 go routines active
12.11.2020 19:23:07 INFO: Not utilising service accounts.
12.11.2020 19:23:07 INFO: Script complete
)


Going to this path:


/mnt/user/the_stuff/gdrive
contains /cache and /gdrive_crypt, and both are empty.

 

I know in SpaceInvaderOne's tutorial his mount script ran in the background, but when I try to do that it runs for a second then stops. However, if I run it once it says it mounted, and then if I try again it says that it's already mounted.

Please let me know if you need me to provide more info. Thanks a lot for the help, and thanks for the scripts!
 

Edited by InCaseOf
Link to comment
On 11/9/2020 at 9:38 AM, MowMdown said:

@axeman, no, if you pay close attention to the path for that move command, im using "/mnt/user/media" not "/mnt/disks/media" (I don't mount to /user/)

 

My rclone mount is under "/mnt/disks/media" so it does not interfere with the move. I'm essentiall moving the files from /mnt/user/media to the "crypt:media" mount but as far as the unraid is concerned the file isn't actually moving since no matter where I put the file it always shows up in /mnt/disks/media.

So I'm having trouble creating files in the union location. It's strange, because if I go directly to the media_vfs mount I can create files, but I can't in the media one. I even tried installing Unassigned Devices and updated the paths to /disks/ instead of /user/...

 

What could I be doing wrong? 

 

Thanks for your time. 

 

Edit : this shows up on the script log:

/test2.txt: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes

/test2.txt: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes

An rclone forum post said to set the cache mode... but I see we are already doing that in the mount script. It seems to happen regardless of the :nc modifier. 

Edited by axeman
Link to comment

When writing to the union mount directory “media” (the non-“vfs” one), it shouldn’t be touching the cache, because writes should only go to the local drive that is your first upstream in the union setup. Sounds like maybe you should check the spelling/case of that first path. 
 

you might need to add the flag

-vv

 to the mount command so you can verbosely debug the issue further.
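For reference, the flags being discussed look something like this (a sketch only - the remote name and mount point are placeholders, not the actual values from the scripts):

```shell
# Illustrative flags, not a full mount command from the scripts:
# --vfs-cache-mode must be "writes" or "full" to open existing files
# for in-place writes (the WriteFileHandle error above), and -vv turns
# on debug logging so you can trace which path the write actually hits.
vfs_flags="--vfs-cache-mode writes -vv"
echo "rclone mount remote: /mnt/disks/media $vfs_flags &"
```

If the error persists with cache mode already set, the -vv output should show whether the write is landing on the rclone branch instead of the local one.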

Link to comment

@DZMM After a server reboot I can’t seem to get the rclone_mount script (same one in my last post) to work; all I’m seeing is:

————-

Script Starting Nov 14, 2020  23:51.50

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

14.11.2020 23:51:50 INFO: Creating local folders.
14.11.2020 23:51:50 INFO: *** Starting mount of remote gdrive_media_vfs
14.11.2020 23:51:50 INFO: Checking if this script is already running.
14.11.2020 23:51:50 INFO: Exiting script as already running.
Script Finished Nov 14, 2020  23:51.50

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

_______
Thanks

Link to comment
1 hour ago, live4ever said:

@DZMM After a server reboot I can’t seem to get the rclone_mount script (same one in my last post) to work; all I’m seeing is:

————-

Script Starting Nov 14, 2020  23:51.50

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

14.11.2020 23:51:50 INFO: Creating local folders.
14.11.2020 23:51:50 INFO: *** Starting mount of remote gdrive_media_vfs
14.11.2020 23:51:50 INFO: Checking if this script is already running.
14.11.2020 23:51:50 INFO: Exiting script as already running.
Script Finished Nov 14, 2020  23:51.50

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

_______
Thanks

If the mount wasn't killed nicely, there is most likely still a mount_running file in place. I'd also confirm with ps piped to grep that the mount really isn't running properly:

 

ps -aux | grep rclone

 

If the mount script really isn't running, then you should run the rclone_unmount script, as that will clean up the necessary lock files. The deletion commands used by the script are:

 

find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete

 

 

Link to comment

@DZMM Have you ever seen mergerfs straight up crash? Today we've had it happen twice: the mergerfs process crashes with no logs (that I am aware of). The rclone mount and various other scripts are still functioning as expected. Do you know if there is some way to review errors or crash reasons for mergerfs?

Link to comment
1 hour ago, M1kep_ said:

@DZMM Have you ever seen mergerfs straight up crash? Today we've had it happen twice: the mergerfs process crashes with no logs (that I am aware of). The rclone mount and various other scripts are still functioning as expected. Do you know if there is some way to review errors or crash reasons for mergerfs?

Not sure.  Sometimes it stops working (rare) and I have to do a quick tidy-up, e.g. my dockers might not have stopped in time and have managed to physically add files to /mount_mergerfs, which I then have to move manually to /local so I can re-mount.

Link to comment
1 hour ago, Bolagnaise said:

I seem to be getting some episodes showing up as duplicates in Plex, in the same file location, since switching to mergerfs. Kind of confused as to why it would be happening to only some shows. Sonarr and Plex are both mapped to /user, which is mapped to /mnt/user.

[screenshot: duplicate episodes listed in Plex]

 

Have you looked at the actual path to see if there are two files there?  I'm not sure if it's an rclone/script or Sonarr/Radarr issue, but this happens to me sometimes as well, e.g. the same file like your scenario, or two versions of the same show/movie. 

 

If I spot them and can be bothered, I tidy up, but I have so much content now that if it plays I don't do anything.  The one thing I am anal about is fixing Plex posters, as I hate the ones with lots of text that it seems to default to!  Also movie ratings, as I like mine consistent because I use them to filter my kids' libraries, e.g. they can only see GB/U, GB/PG and GB/12.

Link to comment
8 hours ago, M1kep_ said:

If the mount wasn't killed nicely, there is most likely still a mount_running file in place. I'd also confirm with ps piped to grep that the mount really isn't running properly:

 


ps -aux | grep rclone

 

If the mount script really isn't running, then you should run the rclone_unmount script, as that will clean up the necessary lock files. The deletion commands used by the script are:

 


find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete

 

 

@M1kep_ Thanks, the command:

root@Tower:~# find /mnt/user/appdata/other/rclone/remotes -name mount_running*
/mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs/mount_running

find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete

allowed the rclone_mount script to start and download/install the mergerfs software.

 

Link to comment
16 hours ago, DZMM said:

Have you looked at the actual path to see if there are two files there?  I'm not sure if it's an rclone/script or Sonarr/Radarr issue, but this happens to me sometimes as well, e.g. the same file like your scenario, or two versions of the same show/movie. 

 

If I spot them and can be bothered, I tidy up, but I have so much content now that if it plays I don't do anything.  The one thing I am anal about is fixing Plex posters, as I hate the ones with lots of text that it seems to default to!  Also movie ratings, as I like mine consistent because I use them to filter my kids' libraries, e.g. they can only see GB/U, GB/PG and GB/12.

There aren't two versions, and if I delete one of them it deletes both, as the other file becomes unavailable and no longer plays. 

 

I tried the Plex dance, but they continue to come back as duplicates even after...

Edited by Bolagnaise
Link to comment
On 11/14/2020 at 10:12 AM, MowMdown said:

When writing to the union mount directory “media” (the non-“vfs” one), it shouldn’t be touching the cache, because writes should only go to the local drive that is your first upstream in the union setup. Sounds like maybe you should check the spelling/case of that first path. 
 

you might need to add the flag


-vv

 to the mount command so you can verbosely debug the issue further.

This might be a me thing - since I need the rclone mounts available to my Windows machines, I have them in /mnt/user. 

 

If I try to copy a file into the unioned mount from inside Unraid via MC, it works exactly as you'd want... the file goes right to the local share. 

 

However, if I do the same from a windows machine - it fails. 

 

Interestingly, this doesn't seem to be a problem on an Android device (using Solid Explorer). Nor does it happen with the @DZMM mergerfs-based scripts. 

 

 

 

 

Link to comment
On 11/12/2020 at 2:39 AM, DZMM said:

I've just updated the mount script to support local file caching.  In my experience this has vastly improved the playback experience and reduced transfer, and is definitely worth an upgrade.  To utilise it you need to be on rclone v1.53+.

 

The new toggles to set are in the REQUIRED SETTINGS block:

 


RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files

I use /user0 as my location as I have 7 teamdrives mounted, so I don't have enough space on my SSD.  Choose wherever works for you.

 

https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount

 

I've used rclone with mergerfs on my seedbox with no issues; however, this is my first time using rclone on unraid and it's not working properly. Using one of your most recent scripts with local caching, rclone was downloading everything from my Google Drive and it's now sitting in /mnt/user0/mount_rclone/cache/gsuite/vfs/gsuite/Plex Folder/. I'm not sure what to adjust in the script to stop it from doing that. Any guidance is appreciated!

Link to comment
