Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

9 hours ago, Lucka said:

I wish! Not on unRAID unfortunately.

Add this to your upload script in the custom commands. It'll create a log file you can check:

 

--log-file=/home/user/wherever_you_want_$CounterNumber.txt
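The flag expands per run into a numbered log path; a tiny sketch (the /tmp path is just an example, and $CounterNumber is assumed to come from the upload script itself):

```shell
# sketch: build the per-run log path the upload script passes to rclone.
# CounterNumber is assumed to be set by the script; /tmp is an example location.
CounterNumber=1
LogFile="/tmp/rclone_upload_${CounterNumber}.txt"
echo "--log-file=${LogFile}"   # append this to the rclone command line
```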

 

Link to comment

Okay, not sure what I'm doing wrong. I have one mount working but want to add another. I'm copying https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount to a new script, changing "RcloneRemoteName" and removing "--rc". I run the script, but it then tells me it's already running. How? I just created it and haven't pressed Run Script or Run In Background. What info do I need to provide to make this easier? Thanks!

Link to comment

Hi,

 

I'm completely new to rclone. I tried to follow your post, but there are a few things I'm not sure I'm doing correctly:

 

On your link https://github.com/BinsonBuzz/unraid_rclone_mount you talk about gdrive_media_vfs, but in the scripts it's only gdrive_vfs. Should it be named like this (gdrive_vfs)?

 

I'm trying to use the same paths that you chose, so I didn't change anything in the scripts except:

in rclone_mount => RcloneCacheShare="/mnt/cache/mount_rclone" to use my SSD (/mnt/cache)

in rclone_upload => RcloneUploadRemoteName="gdrive_vfs"

 

My rclone config looks like this:

[gdrive]
type = drive
client_id = aaa
client_secret = bbb
scope = drive
root_folder_id = ccc
token = {"access_token":"ttt","expiry":"2022-02-24T01:13:06.534714443+01:00"}
team_drive =

[gdrive_vfs]
type = crypt
remote = gdrive:crypt
password = pass1
password2 = pass2
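Once a config like that is saved, a quick sanity check is to list directories through both layers. A hedged sketch, assuming rclone is on the PATH and the remotes are named as in the config above:

```shell
# list top-level directories through the raw remote and the crypt layer;
# "|| true" keeps the check from aborting on auth/config errors
if command -v rclone >/dev/null 2>&1; then
    rclone lsd gdrive: || true       # raw (encrypted) view
    rclone lsd gdrive_vfs: || true   # decrypted view via the crypt remote
    check="rclone present"
else
    check="rclone not installed"
fi
echo "check: $check"
```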

 

As I understand the behaviour, once the rclone_mount script has been launched, anything I add to /mnt/user/local/gdrive_vfs/movies or tv appears in /mnt/user/mount_mergerfs/gdrive_vfs/movies or tv. So I tried adding a file, Canada.pdf, to tv, and it successfully appeared in mount_mergerfs.

Then if I launch the rclone_upload script it should take the content from movies/tv and move it to /mnt/user/mount_rclone/gdrive_vfs, but when I launch the script I get this log:

24.02.2022 00:36:24 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
24.02.2022 00:36:24 INFO: *** Starting rclone_upload script for gdrive_vfs ***
24.02.2022 00:36:24 INFO: Script not running - proceeding.
24.02.2022 00:36:24 INFO: Checking if rclone installed successfully.
24.02.2022 00:36:24 INFO: rclone installed successfully - proceeding with upload.
24.02.2022 00:36:24 INFO: Uploading using upload remote gdrive_vfs
24.02.2022 00:36:24 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2022/02/24 00:36:24 INFO : Starting bandwidth limiter at 12Mi Byte/s
2022/02/24 00:36:24 INFO : Starting transaction limiter: max 8 transactions/s with burst 1
2022/02/24 00:36:24 DEBUG : --min-age 15m0s to 2022-02-24 00:21:24.183962748 +0100 CET m=-899.965480723
2022/02/24 00:36:24 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_vfs" "gdrive_vfs:" "--user-agent=gdrive_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2022/02/24 00:36:24 DEBUG : Creating backend with remote "/mnt/user/local/gdrive_vfs"
2022/02/24 00:36:24 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2022/02/24 00:36:24 DEBUG : Creating backend with remote "gdrive_vfs:"
2022/02/24 00:36:24 DEBUG : Creating backend with remote "gdrive:crypt"
2022/02/24 00:36:24 DEBUG : gdrive: detected overridden config - adding "{y5r0i}" suffix to name
2022/02/24 00:36:24 DEBUG : fs cache: renaming cache item "gdrive:crypt" to be canonical "gdrive{y5r0i}:crypt"
2022/02/24 00:36:24 DEBUG : downloads: Excluded
2022/02/24 00:36:24 DEBUG : tv/Canada.pdf: Excluded
2022/02/24 00:36:24 DEBUG : Encrypted drive 'gdrive_vfs:': Waiting for checks to finish
2022/02/24 00:36:24 DEBUG : Encrypted drive 'gdrive_vfs:': Waiting for transfers to finish
2022/02/24 00:36:24 INFO : tv: Removing directory
2022/02/24 00:36:24 DEBUG : tv: Failed to Rmdir: remove /mnt/user/local/gdrive_vfs/tv: directory not empty
2022/02/24 00:36:24 INFO : music: Removing directory
2022/02/24 00:36:24 INFO : movies: Removing directory
2022/02/24 00:36:24 INFO : gdrive_vfs: Removing directory
2022/02/24 00:36:24 DEBUG : Local file system at /mnt/user/local/gdrive_vfs: failed to delete 1 directories
2022/02/24 00:36:24 DEBUG : Local file system at /mnt/user/local/gdrive_vfs: deleted 3 directories
2022/02/24 00:36:24 INFO : There was nothing to transfer
2022/02/24 00:36:24 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Deleted: 0 (files), 4 (dirs)
Elapsed time: 0.7s
2022/02/24 00:36:24 DEBUG : 7 go routines active
24.02.2022 00:36:24 INFO: Not utilising service accounts.
24.02.2022 00:36:24 INFO: Script complete

 

The file I added, tv/Canada.pdf, is listed as Excluded and I don't understand why. I also only have a mountcheck file in /mnt/user/mount_rclone/gdrive_vfs.
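The upload log above shows rclone running with --min-age 15m, which skips any file modified within the last 15 minutes, so a freshly copied tv/Canada.pdf would show as "Excluded" until the next run. The same age-filter idea in plain shell (temp paths only; nothing here touches the real setup):

```shell
# demonstrate the --min-age idea with find: only files older than
# 15 minutes qualify for upload, newer ones are skipped
tmp=$(mktemp -d)
touch "$tmp/new.pdf"                      # modified just now: skipped
touch -d '20 minutes ago' "$tmp/old.pdf"  # old enough: uploaded
matches=$(find "$tmp" -type f -mmin +15)  # mirrors --min-age 15m
echo "$matches"
rm -rf "$tmp"
```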

 

One last thing I didn't understand is /mnt/user/mount_unionfs: you talk about it on GitHub, but I don't see anything about it in the editable parts of the script. Should I create a folder named like this somewhere?

 

I hope someone can help me understand what I'm doing wrong.

 

Link to comment

Hello,
I'm having some difficulties playing anything from my remote using this script. The upload script works and just browsing the mounted remote works great, but whenever I try to play something using Plex (or Emby) I get this error:

2022/02/27 17:12:03 ERROR : video_files/movies/4K/_meta70/Interstellar.2014.UHD.BluRay.2160p/Interstellar.2014.UHD.BluRay.2160p.mkv: 
vfs cache: failed to open item: 
vfs cache item: createItemDir failed: failed to create data cache item directory: 
mkdir /mnt/user0/mount_rclone/cache/secure/vfs/secure/video_files/movies/4K/_meta70: no medium found


The only things that I changed in my mount script were RcloneRemoteName, DockerStart and MountFolders.

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{""\} # comma separated list of folders to create within the mount

[...]
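A hedged pre-flight check related to the "no medium found" error above: mkdir fails like that when the RcloneCacheShare location is unavailable. /mnt/user0 only exists while the array is started (and is deprecated on newer unRAID releases), so verifying the path before mounting can save a debugging round trip. A sketch, using the path from the script above:

```shell
# pre-flight: confirm the rclone cache location exists before mounting
# (path copied from the script above; whether it exists depends on
# array state and unRAID version)
RcloneCacheShare="/mnt/user0/mount_rclone"
if [ -d "$RcloneCacheShare" ]; then
    msg="cache path present: $RcloneCacheShare"
else
    msg="cache path missing: $RcloneCacheShare (array stopped, or try /mnt/user instead?)"
fi
echo "$msg"
```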


I'm using rclone 1.58.0-beta.5999.f22b703a5 with the following config:
 

[gsuite]
type = drive
client_id = ...
client_secret = ...
scope = drive
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"2022-02-27T17:45:15.295133871+01:00"}
root_folder_id = ...

[secure]
type = crypt
remote = gsuite:secure
filename_encryption = standard
directory_name_encryption = true
password = ...
password2 = ...


This is the path used in plex:

/gsuite/video_files/movies/4K/_meta70/Interstellar.2014.UHD.BluRay.2160p/Interstellar.2014.UHD.BluRay.2160p.mkv


I was using this script some years ago without issues; I just recently got stuck on this problem when I had to set it up again from scratch on the same machine. No amount of restarting helps.

 

Any ideas what could be the cause of this issue?

Thanks

Link to comment

Hi, a little confused on this. I set up the script with the default settings. I want to use local and remote files and have them merged. However, when I go to Plex and try to set the mount point to /mnt/user/mount_mergerfs, only my mount folder is displayed. Is this expected? Where should I point Plex to scan files from?

 

I can set Plex to /mnt/user/rclone or /mnt/user/local, but doesn't that defeat the purpose?

Link to comment
16 minutes ago, thekiefs said:

Hi, a little confused on this. I set up the script with the default settings. I want to use local and remote files and have them merged. However, when I go to Plex and try to set the mount point to /mnt/user/mount_mergerfs, only my mount folder is displayed. Is this expected? Where should I point Plex to scan files from?

 

I can set Plex to /mnt/user/rclone or /mnt/user/local, but doesn't that defeat the purpose?

You should point Plex to the mergerfs folder. You have to start Plex AFTER you've created the mount, which might explain your problem.

Link to comment
5 hours ago, DZMM said:

You should point Plex to the mergerfs folder. You have to start Plex AFTER you've created the mount, which might explain your problem.

Thanks - it seems like mergerfs keeps getting rebuilt from source and installed every time I reboot and run the script. Perhaps there is a more stable way to do this that doesn't involve a script. I'm very unfamiliar with unRAID. Any idea why that is happening?

 

Also, the script is creating a folder named after my rclone remote inside both the local folder and the mergerfs folder.

 

This happens on every reboot:

Script Starting Feb 27, 2022 18:18.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

27.02.2022 18:18:03 INFO: Creating local folders.
27.02.2022 18:18:03 INFO: Creating MergerFS folders.
27.02.2022 18:18:03 INFO: *** Starting mount of remote gd
27.02.2022 18:18:03 INFO: Checking if this script is already running.
27.02.2022 18:18:03 INFO: Script not running - proceeding.
27.02.2022 18:18:03 INFO: *** Checking if online
27.02.2022 18:18:04 PASSED: *** Internet online
27.02.2022 18:18:04 INFO: Mount not running. Will now mount gd remote.
27.02.2022 18:18:04 INFO: Recreating mountcheck file for gd remote.
2022/02/27 18:18:04 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "gd:" "-vv" "--no-traverse"]
2022/02/27 18:18:04 DEBUG : Creating backend with remote "mountcheck"
2022/02/27 18:18:04 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2022/02/27 18:18:04 DEBUG : fs cache: adding new entry for parent of "mountcheck", "/usr/local/emhttp"
2022/02/27 18:18:04 DEBUG : Creating backend with remote "gd:"
2022/02/27 18:18:04 DEBUG : mountcheck: Modification times differ by -11m38.700701001s: 2022-02-27 18:18:04.225701001 -0800 PST, 2022-02-28 02:06:25.525 +0000 UTC
2022/02/27 18:18:04 DEBUG : mountcheck: md5 = d41d8cd98f00b204e9800998ecf8427e OK
2022/02/27 18:18:05 INFO : mountcheck: Updated modification time in destination
2022/02/27 18:18:05 DEBUG : mountcheck: Unchanged skipping
2022/02/27 18:18:05 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Checks: 1 / 1, 100%
Elapsed time: 0.9s

2022/02/27 18:18:05 DEBUG : 4 go routines active
27.02.2022 18:18:05 INFO: *** Creating mount for remote gd
27.02.2022 18:18:05 INFO: sleeping for 5 seconds
2022/02/27 18:18:05 NOTICE: Serving remote control on http://localhost:5572/
27.02.2022 18:18:10 INFO: continuing...
27.02.2022 18:18:10 CRITICAL: gd mount failed - please check for problems. Stopping dockers
plex
transmission
Script Finished Feb 27, 2022 18:18.10

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

 

Also, this happens, presumably because dockers_started and mount_running do not get removed on reboot.

 

Script Starting Feb 27, 2022 19:02.14

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

27.02.2022 19:02:14 INFO: Creating local folders.
27.02.2022 19:02:14 INFO: Creating MergerFS folders.
27.02.2022 19:02:14 INFO: *** Starting mount of remote gducla
27.02.2022 19:02:14 INFO: Checking if this script is already running.
27.02.2022 19:02:14 INFO: Exiting script as already running.
Script Finished Feb 27, 2022 19:02.14

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

Link to comment
6 hours ago, thekiefs said:

Any idea why that is happening?

Because unRAID doesn't ship with mergerfs, the script has to install it.

 

6 hours ago, thekiefs said:

Also, the script is creating a folder named after my rclone remote inside both the local folder and the mergerfs folder.

 

This happens on every reboot:

Which is supposed to happen.

 

6 hours ago, thekiefs said:

Also, this happens, presumably because dockers_started and mount_running do not get removed on reboot.

You haven't installed the unmount/startup scripts to run at array start and array stop.
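For reference, the array-start cleanup essentially just deletes those control files. A minimal sketch, with a temp directory standing in for the script's per-remote appdata folder (the real path depends on your install; check your own unmount script):

```shell
# demo in a temp dir standing in for something like
# /mnt/user/appdata/other/rclone/remotes/<RcloneRemoteName> (assumed path)
RemoteDir=$(mktemp -d)
touch "$RemoteDir/mount_running" "$RemoteDir/dockers_started"  # stale leftovers
removed=0
for f in mount_running upload_running dockers_started; do
    if [ -f "$RemoteDir/$f" ]; then
        rm "$RemoteDir/$f"
        removed=$((removed + 1))
        echo "removed stale $f"
    fi
done
rm -rf "$RemoteDir"
```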

Link to comment
  • 2 weeks later...
On 2/2/2022 at 1:09 AM, DZMM said:

I used to have similar problems where the first attempt would fail, but eventually it would mount as the cron job made further attempts. This seems to have stopped for me since I reduced the size of RcloneCacheMaxSize. My suspicion is that rclone needs a bit of time to compare what's sitting in the cache locally and not uploaded yet vs what's in the remote location, before creating the actual mount.

 

With a 3TB cache I think this is the problem. Try dropping to, say, 20GB and see if this solves the "issue", then decide whether you want a working first mount, or whether you're happy for your cron job to handle it and want a bigger cache.

 

 

I have this same problem:

 

2022/03/19 16:16:26 DEBUG : 4 go routines active
19.03.2022 16:16:26 INFO: *** Creating mount for remote gd
19.03.2022 16:16:26 INFO: sleeping for 5 seconds
2022/03/19 16:16:26 NOTICE: Serving remote control on http://localhost:5572/
19.03.2022 16:16:31 INFO: continuing...
19.03.2022 16:16:31 CRITICAL: gd mount failed - please check for problems.  Stopping dockers

 

I set RcloneCacheMaxSize="20G", but it's still failing on first boot. Any other guidance here for fixing this?

Link to comment
14 hours ago, thekiefs said:

 

 

I have this same problem:

 

2022/03/19 16:16:26 DEBUG : 4 go routines active
19.03.2022 16:16:26 INFO: *** Creating mount for remote gd
19.03.2022 16:16:26 INFO: sleeping for 5 seconds
2022/03/19 16:16:26 NOTICE: Serving remote control on http://localhost:5572/
19.03.2022 16:16:31 INFO: continuing...
19.03.2022 16:16:31 CRITICAL: gd mount failed - please check for problems.  Stopping dockers

 

I set RcloneCacheMaxSize="20G", but it's still failing on first boot. Any other guidance here for fixing this?

Set your cron job and it will mount eventually - my cron is set to every 3 mins.
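For reference, "every 3 mins" as a Custom schedule in the User Scripts plugin is a standard cron expression:

```
*/3 * * * *
```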

Link to comment

Ok thanks.

 

A few basic questions:

1) Why have mergerfs in the first place? Can't you just set Radarr/Sonarr to hard link files once they've finished downloading, and then have rclone follow those links and upload them eventually, without using mergerfs?

 

2) Also, I didn't see it in the guide, but do any of you set the share cache to be preferred for any of the shares? I would think the rclone cache makes sense to configure that way?

 

3) My file structure is as follows:

Rclone mount for media - /mnt/user/mergerfs/gd/* (movies/music/shows)

Download folder - /mnt/user/mergerfs/downloads/* (torrents/music/movies/shows)

 

I notice that after a download completes, Sonarr/Radarr is not hard linking (checked with ls -i). Does the download folder need to be in the rclone mount folder?
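One background point that may help: hard links only work within a single filesystem, so if the download folder and the library folder resolve to different underlying mounts, Sonarr/Radarr silently falls back to copying. ls -i is the right check, since hard-linked files share one inode number. A quick demo in a temp directory (nothing here touches the real paths):

```shell
# a hard link is only possible within one filesystem; when it works,
# both names report the same inode number
tmp=$(mktemp -d)
echo data > "$tmp/download.mkv"
ln "$tmp/download.mkv" "$tmp/library.mkv"    # hard link, same filesystem
ls -i "$tmp/download.mkv" "$tmp/library.mkv" # identical leading inode numbers
inode1=$(stat -c %i "$tmp/download.mkv")
inode2=$(stat -c %i "$tmp/library.mkv")
rm -rf "$tmp"
```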

Link to comment

I have 2 shared Google drives that contain the same folder structure. Should I just make 2 identical scripts and change only the rclone remote name?

To be specific: in shared drive 1 I have media/tv/tvshow1/*.mp4

In shared drive 2 I have the same TV show under media/tv/tvshow1/*.mp4

It's the same tvshow1 in both locations, just different episodes.

Link to comment

Managed to sort it out with a union-type mount, if anyone ever needs it.
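For anyone landing here later, a minimal rclone.conf sketch of such a union remote; the remote names drive1: and drive2: are placeholders, not taken from this thread:

```ini
[union_vfs]
type = union
upstreams = drive1: drive2:
```

rclone then presents one merged folder tree from both shared drives through union_vfs:, so a single mount script can serve both.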

 

Has anyone tried creating service accounts on Google Workspace lately? It seems that AutoRclone doesn't work anymore? Any new guides out there?

Link to comment
On 3/20/2022 at 6:24 PM, thekiefs said:

I notice that after a download completes, Sonarr/Radarr is not hard linking (checked with ls -i). Does the download folder need to be in the rclone mount folder?

It looks like I have the same problem; here are my Sonarr and qBittorrent docker configs:

 

and ls -ldi from a file
 

ls -ldi /mnt/user/mount_mergerfs/gdrive_vfs/medias/tv/hd/Friends\ \(1994\)/Season\ 1/Friends\ -\ S01E01\ -\ The\ One\ Where\ Monica\ Gets\ a\ Roommate\ x265\ AC3\[EN+FR\]\ \[FR+EN\]\ Bluray-1080p.mkv 
6952752600862098309 -rw-r--r-- 1 nobody users 399322889 Mar 22 12:33 /mnt/user/mount_mergerfs/gdrive_vfs/medias/tv/hd/Friends\ (1994)/Season\ 1/Friends\ -\ S01E01\ -\ The\ One\ Where\ Monica\ Gets\ a\ Roommate\ x265\ AC3[EN+FR]\ [FR+EN]\ Bluray-1080p.mkv

 

Link to comment

Hi all! 

All is working well; I only have this one issue, and it makes me clean up the path about once a month.
In my /local/gdrive/downloads/complete/ I have some files just lying around, filling up my cache.

There are no failed files in SABnzbd, so it can't be coming from there? Has anybody seen something like this?


Link to comment
22 hours ago, kesm said:

It looks like I have the same problem; here are my Sonarr and qBittorrent docker configs:

 

and ls -ldi from a file
 

ls -ldi /mnt/user/mount_mergerfs/gdrive_vfs/medias/tv/hd/Friends\ \(1994\)/Season\ 1/Friends\ -\ S01E01\ -\ The\ One\ Where\ Monica\ Gets\ a\ Roommate\ x265\ AC3\[EN+FR\]\ \[FR+EN\]\ Bluray-1080p.mkv 
6952752600862098309 -rw-r--r-- 1 nobody users 399322889 Mar 22 12:33 /mnt/user/mount_mergerfs/gdrive_vfs/medias/tv/hd/Friends\ (1994)/Season\ 1/Friends\ -\ S01E01\ -\ The\ One\ Where\ Monica\ Gets\ a\ Roommate\ x265\ AC3[EN+FR]\ [FR+EN]\ Bluray-1080p.mkv

 

 

Your destination in the Flood download client config in Sonarr should be /user/mount_mergerfs/downloads/complete/sonarr/, I believe. That should hopefully get it sorted :)

Link to comment
