Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


AM I GOING CRAZY?

I saw a post yesterday saying that mergerfs did not install after a reboot (it has me quite scared, lol, since we have regular power problems).

I assume it resolved itself?

12 minutes ago, Neo_x said:

AM I GOING CRAZY? I saw a post yesterday saying that mergerfs did not install after a reboot ... assume it resolved itself?

I deleted it, as it was a false alarm, to try not to scare anyone - sorry!


Quick question. Could mergerfs be used to combine a 4k movies folder and a 1080p movies folder for Plex, but keep the two folders separate for my 2 instances of Radarr to address?

Example: I use @DZMM's script to mount my 4k movies folder and 1080p movies folder. I have a Radarr 1080p docker that maps to the 1080p folder and a Radarr 4k docker that maps to the 4k movies folder. I then add another line to DZMM's script that also creates a mergerfs folder combining those two locations for Plex to address. Since Plex now has the ability to choose a 1080p file over a 4k file if the client can't natively stream 4k, this would eliminate the need for 2 separate movie libraries. I believe the 2 Radarr instances would keep the movie folders' naming convention the same inside each of their parent folders, so only the movie files themselves would differ, since I have Radarr add the quality to the filename.

If anyone could give me a sanity check on whether this would work, and also the mergerfs syntax, that would be huge.

1 hour ago, veritas2884 said:

Quick question. Could mergerfs be used to combine a 4k movies folder and 1080p movies folder for Plex, but keep the two folders separate for my 2 instances of Radarr to address? ...

I am certainly no expert in this - but I believe you can accomplish this by running another instance of the script that points at your 4K collection, and setting the option NOT to create a mergerfs mount for that script: MergerfsMountShare="ignore" in the variables at the top.

Then in the other script, the one that does create the mergerfs mount, you update LocalFilesShare2 (or similar) to include the path you created above.

I have something similar with my TV shows. I keep in-progress shows and completed shows separate: the completed ones are on the cloud mount, the in-progress ones are local. The scenario is different because my libraries are meant to show up separately, but I'd imagine it'd work for your purpose as well.
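The two-script setup described above can be sketched with the guide's own variables. A rough example, where the remote name `secure_4k` and all paths are placeholders, not tested settings:

```shell
# --- second instance of the mount script: mounts the 4K remote only ---
RcloneRemoteName="secure_4k"              # hypothetical 4K remote
RcloneMountShare="/mnt/user/mount_rclone" # 4K mount appears at /mnt/user/mount_rclone/secure_4k
MergerfsMountShare="ignore"               # skip the mergerfs step for this instance

# --- main (1080p) instance: pull the 4K mount in as an extra mergerfs branch ---
LocalFilesShare2="$RcloneMountShare/$RcloneRemoteName"
echo "mergerfs will also merge: $LocalFilesShare2"
```

Plex then points at the single merged mount, while each Radarr instance keeps addressing its own un-merged folder.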

20 minutes ago, axeman said:

I am certainly no expert in this - but I believe you can accomplish this by running another instance of the script that points at your 4K collection - and set the option to NOT create merger FS mount for that script. MergerfsMountShare="ignore" at the top variables. 

RcloneRemoteName="secure" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/disks/" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/mediacloud" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/cache/rcloneshare" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/disks/merge" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

 

Thank you for your reply. I'm not quite sure I follow, though. Above is my current setup.

 

This creates/mounts these folders: 

/mnt/disks/merge/secure/Movies-UHD/

/mnt/disks/merge/secure/Movies

 

What I'd like to do is create another mergerfs mount that combines them into here:

 

/mnt/user/mediacloud/secure/movie_merge

 

Would you be able to help me with the solution you provided based on those variables? 

 

 

FYI I tried to run a one-off mergerfs command

mergerfs /mnt/disks/merge/secure/Movies-UHD:/mnt/disks/merge/secure/Movies /mnt/user/mediacloud/secure/movie_merge -o ro,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

 

But I get this error:

mount: /mnt/user/mediacloud/secure/movie_merge: can't find UUID="HD:".

 

2 hours ago, veritas2884 said:

Quick question. Could mergerfs be used to combine a 4k movies folder and 1080p movies folder for Plex, but keep the two folders separate for my 2 instances of Radarr to address? ...

I don't think what you are proposing is the best way to solve your problem, as Radarr won't like it and will end up deleting files - e.g. Radarr_4K will delete and upgrade the 1080p file.

 

This is what I do - short version as I'm in a rush - but you should be able to follow the logic:

 

  1. Radarr 1080p (R1080) docker looking at HD folder
  2. Radarr 4k docker (R4K) looking at 4K folder
  3. Radarr sync script so R1080P movies with profile UHD get synced to R4K
  4. R1080P UHD set to not upgrade beyond 1080p remux
  5. Files get added to separate folders
  6. Both those folders are added to my Movies library in Plex, which manages which version to play
9 hours ago, DZMM said:

I think what you are proposing to do isn't the best way to solve your problem as Radarr won't like it and will end up deleting files ...

Thank you for the reply. I will look into the Radarr sync script. One thing: I thought that since the RadarrHD and Radarr4K dockers would be addressing different root movie folders, and only Plex would be reading the mergerfs folder of the two of them, they wouldn't be able to see or delete each other's files.


My rclone mount crashes after the upload script finishes. I just finished setting it up, and every time the upload script finishes the mount crashes. I copied the script from the guide, changing only the paths, upload bandwidth and buffer size.

I can't remount until I use fusermount to unmount.

 

2021/02/07 20:19:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:20:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:21:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:22:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:23:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:24:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:25:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
/usr/sbin/rclone: line 3: 16353 Killed                  rcloneorig --config $config "$@"
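A general note, not from the thread's replies: a `Killed` line like the one above means the rclone process received SIGKILL from outside, and on a busy server that is often the kernel's OOM killer (a large `--buffer-size` multiplied across open files is a common trigger). One way to check, assuming kernel logs are readable on the host:

```shell
# Look for OOM-killer activity around the time of the crash; prints a
# fallback message if nothing is found (or if dmesg needs privileges).
dmesg 2>/dev/null | grep -iE 'out of memory|oom|killed process' || echo "no OOM events found"
```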

 


Is it possible to mount a local path into a specific gdrive folder?

I thought I'd merge a local "/mnt/user/folder1" with its remote version "gdrive/path/folder1".

Using "LocalFilesShare2="/mnt/user/folder1"" mounts the folder into the mergerfs gdrive root directory, mixing up the paths.

Not entirely sure if this makes a whole lot of sense, but I only need to merge one specific folder, not the whole gdrive directory, so I was wondering what the best practice would be.

Any help is appreciated!
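Not from the thread's replies, but one possible approach, sketched with assumed paths: mergerfs doesn't care where its branches live, so a second, narrower mergerfs mount can merge just that one folder. The commands are echoed rather than executed here so the shape is visible without touching a live system:

```shell
# Hypothetical paths: the local folder, the matching folder inside the
# rclone mount, and a dedicated merge point for just this folder.
LOCAL="/mnt/user/folder1"
REMOTE="/mnt/user/mount_rclone/gdrive/path/folder1"
MERGED="/mnt/user/mount_mergerfs/folder1"
echo "mkdir -p $MERGED"
echo "mergerfs $LOCAL:$REMOTE $MERGED -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true"
```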


Thank you for the scripts and how-to guide!  I have gotten through all of the steps and it looks like things are working well for Plex.  I had a couple of questions to clarify how some things work (I apologize if these have already been answered, I might have missed it while I was reading the thread):

 

  1.  I would like to be able to use some files from my Google Drive in my VMs.  I am unable to "see" the mount_mergerfs directory in my network.  When I checked, it looks like the share was created as root:root instead of nobody:users.  Is this correct?  If so, how do I sync files to/from Google Drive within my VM? 
  2. I have 2 remotes like the instructions described (unencrypted and encrypted).  My encrypted remote is a sub-directory inside of my Google Drive root directory.  So, if I move a file into mount_mergerfs/remotename/encrypteddirectory, will it automatically be encrypted during upload to Google?  If not, what did I miss to set this up correctly?
10 hours ago, lilbumblebear said:

Thank you for the scripts and how-to guide!  I have gotten through all of the steps and it looks like things are working well for Plex. ...

1. You should be able to see your files just like other unRAID shares - check your unRAID share settings.

2. Correct - rclone encrypts the file. If you want to check for peace of mind, create a new folder on your server and watch gdrive to see the encrypted folder/file being created.


Thank you for the help! 

 

I rebooted the server this morning and all of the shares are now visible.  I will leave my notes here in case they are useful to anyone. 

  1. When I first added the scripts, all of the shares were created as follows (all with the same settings: export yes, security public)
    1. local
    2. mount_mergerfs
    3. mount_rclone
  2. I was able to see the local share in my VMs via the network, but not the mount_mergerfs or mount_rclone shares
  3. I checked the ownership of the shares with:
    cd /mnt/user
    ls -l
  4. I saw the mount_mergerfs and mount_rclone shares were listed as root:root instead of nobody:users
  5. Rebooted the server
  6. Shares mounted and are visible inside the VM. Checked the ownership again; this time all shares are listed as nobody:users

 

Please let me know if I did something incorrectly with the scripts the first time.  It does look like a reboot fixed my issue. Thank you again!


Mounting the Mega cloud:

rclone mount --max-read-ahead 1024k --allow-other mega: /mnt/disks/mega &


When I mount it, rclone logs this notice:

2021/02/20 17:51:34 NOTICE: mega root '': --vfs-cache-mode writes or full is recommended for this remote as it can't stream

What should be done?

14 hours ago, muwahhid said:

Mounting the Mega cloud ... What should be done?

If you're not using my scripts, use my scripts to get help in this thread
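For anyone hitting the same NOTICE: the message is rclone itself suggesting the fix - Mega can't stream uploads, so writes need to go through the VFS cache first. An untested sketch of the adjusted command from the post:

```shell
# Same mount command as in the post, with a VFS cache mode added as the
# NOTICE suggests: '--vfs-cache-mode writes' buffers uploads to local
# disk before sending, which Mega needs because it can't stream.
rclone mount --max-read-ahead 1024k --allow-other --vfs-cache-mode writes mega: /mnt/disks/mega &
```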


Hi Guys,

 

Since I rebooted my server I'm not able to use the mount script.

I get this message every time:

 

 


Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone - Mount 1/log.txt

/tmp/user.scripts/tmpScripts/Rclone - Mount 1/script: line 247: unexpected EOF while looking for matching `"'
/tmp/user.scripts/tmpScripts/Rclone - Mount 1/script: line 249: syntax error: unexpected end of file
Script Finished Feb 21, 2021 20:19.51

 

 

 

 

Script:

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="100G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="ignore" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\"plex/TV"\ # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 15s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
	sleep 5
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
exit
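Not in the replies, but worth noting: bash reports `unexpected EOF while looking for matching `"'` when a quote is left open, and the MountFolders line in the script above has its closing escaped quote reversed (`"\` at the end instead of `\"`). The guide's intended form uses matched escaped quotes:

```shell
# Matched escaped quotes: the variable's value becomes "plex/TV",
# quotes included, which the script later passes through eval.
MountFolders=\"plex/TV\"
echo $MountFolders
```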

 

1 hour ago, livingonline8 said:

Please can someone help me with this, I am getting error messages after 2 years of perfect workflow

 

Posting the error messages and your script settings would be a good thing to start with!

15 minutes ago, DZMM said:

Posting the error messages and your script settings would be a good thing to start with!

@DZMM yeah, I posted everything in my thread on the rclone support forum but I couldn't get an answer.

Please click on the linked thread and you will find all the details you need.


@DZMM I am posting all the details here to make it easier for you to help me

 

this is my mount script...it used to work perfectly but now I am getting an error 

 

The current mounting script:

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
	exit
else
	touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End Check if script already running  ##########

#######  Start rclone google mount  ##########

# check if google mount already created
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

	# create directories for rclone mount and unionfs mount
	mkdir -p /mnt/user/mount_rclone/google_vfs
	mkdir -p /mnt/user/mount_unionfs/google_vfs
	mkdir -p /mnt/user/rclone_upload/google_vfs

	rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

	# check if mount successful
	# slight pause to give mount time to finalise
	sleep 5

	if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check rclone google vfs mount success."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone google vfs mount failed - please check for problems."
		rm /mnt/user/appdata/other/rclone/rclone_mount_running
		exit
	fi
fi

#######  End rclone google mount  ##########

#######  Start unionfs mount   ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
else
	unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
	if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
		rm /mnt/user/appdata/other/rclone/rclone_mount_running
		exit
	fi
fi

#######  End Mount unionfs   ##########

############### starting dockers that need unionfs mount ######################

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
else
	touch /mnt/user/appdata/other/rclone/dockers_started
	echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
	docker start binhex-emby
	docker start binhex-sabnzbd
	docker start binhex-radarr
	docker start binhex-sonarr
fi

############### end dockers that need unionfs mount ######################

exit

 

Then all of a sudden, after 2 years of working perfectly, I started getting the following error message:

 

27.02.2021 15:00:01 INFO: mounting rclone vfs.
2021/02/27 15:00:03 Fatal error: Directory is not empty: /mnt/user/mount_rclone/google_vfs If you want to mount it anyway use: --allow-non-empty option
27.02.2021 15:00:06 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:00.06

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

So, as suggested in the error message, I added "--allow-non-empty" and the mount command looks like this:

 

rclone mount --allow-non-empty --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

 

Now, I am getting this error:

 

27.02.2021 15:07:21 INFO: mounting rclone vfs.
2021/02/27 15:07:22 mount helper error: fusermount: unknown option 'nonempty'
2021/02/27 15:07:22 Fatal error: failed to mount FUSE fs: fusermount: exit status 1
27.02.2021 15:07:26 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:07.26

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

 

Many people tell me to stop trying to mount to a non-empty folder, but I cannot... first of all, this was working for more than 2 years, so why did it stop all of a sudden? I just want to understand.

 

The main reason I have to mount to a non-empty folder is that I am combining two folders of my media, one on Google Drive and the other on my Unraid server... neither of them is empty, so I have to mount to my non-empty media folder.

Edited by livingonline8
Link to post
1 hour ago, livingonline8 said:

The main reason I have to mount to a non-empty folder is that I am combining two folders of my media, one on Google Drive and the other on my Unraid server... neither of them is empty, so I have to mount to my non-empty media folder.

The rclone mount should always point to an empty directory - mergerfs/unionfs is what merges the local and cloud folders. The simple solution is to make sure the rclone mount location is empty.

 

Also, I recommend you switch to the mergerfs scripts, which are much better and will make debugging much easier.
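To make the distinction concrete, here is a minimal dry-run sketch (the paths, remote name, and mergerfs options are examples, not taken from the poster's setup): the rclone target stays empty, and mergerfs is what presents the local and cloud folders as one merged view.

```shell
#!/bin/bash
set -eu
# Dry-run sketch (example paths; a temp dir stands in for /mnt/user):
# rclone must mount onto an EMPTY directory; mergerfs - not
# --allow-non-empty - is what merges the local and cloud branches.

base="$(mktemp -d)"
rclone_mount="$base/mount_rclone/gdrive"   # must be empty before mounting
local_files="$base/local/gdrive"           # local branch (files pending upload)
merged="$base/mount_mergerfs/gdrive"       # merged view for Plex/Radarr/etc.
mkdir -p "$rclone_mount" "$local_files" "$merged"

# Refuse to mount onto a non-empty directory - fix the cause instead of
# reaching for --allow-non-empty (which fusermount no longer accepts anyway).
if [ -n "$(ls -A "$rclone_mount")" ]; then
    echo "ERROR: $rclone_mount is not empty - move its contents to $local_files first" >&2
    exit 1
fi
echo "OK: rclone mount target is empty"

# The real commands would then be (shown, not executed, in this dry run):
echo "rclone mount media_vfs: $rclone_mount &"
echo "mergerfs $local_files:$rclone_mount $merged -o rw,use_ino,allow_other"
```

If an old mount left files behind in the rclone target, move them into the local branch before mounting - they will still appear in the merged view.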

Link to post

I am EXTREMELY frustrated with trying to get this working. I have done it before on my seedbox a year ago with MergerFS, but the Unraid shares are tripping me up.

I have my shares setup like so:
mnt/user/Backups
mnt/user/Movies
mnt/user/TV
mnt/user/Music
.....

I think I understand that I need to manually move all my existing files to my MergerFS mount so they get uploaded to GDrive, and then point my containers like Radarr and Sonarr at the MergerFS mount. Correct me if this is incorrect.

What is throwing me off is that everyone's rclone mount script has this path: `LocalFilesShare=/mnt/user/local`

I guess this path is where everyone keeps their Unraid shares?

I have my shares set up so that there is a share for each folder (see my paths at the top of this post), so I can't just specify one directory.

I tried using multiple LocalFilesShare paths, which kind of worked, except it mixed all the files from my shares (Backups, Music, SW, etc.) together in my mount_mergerfs/gdrive directory with no folder structure or organization.

I need help!! I have been working on this for 12 hours. I have added a screenshot of my shares. If someone can edit my config and show me how to do it with my setup, I would be so grateful - I've lost sleep over this lol.
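One way to square that layout with the script (a sketch under the assumption you're willing to relocate your shares, with example paths): keep a single LocalFilesShare parent and make each share a subfolder of it. Mergerfs preserves the folder structure in the merged mount, so each Radarr/Sonarr instance still gets its own directory.

```shell
#!/bin/bash
set -eu
# Dry-run sketch (example paths; a temp dir stands in for /mnt/user):
# the script expects ONE LocalFilesShare parent, and your separate shares
# become subfolders under <LocalFilesShare>/<RcloneRemoteName>.

base="$(mktemp -d)"           # stand-in for /mnt/user
local_share="$base/local"     # i.e. LocalFilesShare="/mnt/user/local"
remote="gdrive"               # i.e. RcloneRemoteName="gdrive"

# The MountFolders setting in the script creates this layout automatically:
for d in Backups Movies TV Music; do
    mkdir -p "$local_share/$remote/$d"
done

# Existing share contents are moved under the matching subfolder
# (shown, not executed, in this dry run):
echo "mv /mnt/user/Movies/* $local_share/$remote/Movies/"

# Containers then point at the merged view, which keeps the same structure,
# e.g. Radarr -> /mnt/user/mount_mergerfs/gdrive/Movies
ls "$local_share/$remote"
```

The files mixed together because each extra LocalFilesShareN path is merged at the mount root; subfolders under one parent keep their names instead.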

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/rclone_mount" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="200G" # Maximum size of rclone cache
RcloneCacheMaxAge="48h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="sonarr radarr Overseerr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"Backup,SW,Movies,TV"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3=""
LocalFilesShare4=""

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

 

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="copy" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/rclone_upload" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/rclone_mount" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
#Command1="--exclude downloads/**"
Command1="--exclude downloads/**"
Command2="--exclude gdrive/**"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for backup files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######
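For what it's worth, the three BWLimit pairs in the upload settings above appear to be combined into a single rclone --bwlimit timetable (an assumption about the script's internals, but the timetable syntax itself is native rclone):

```shell
#!/bin/bash
set -eu
# Sketch (assumption about how the upload script uses these settings):
# the three time/limit pairs form one rclone --bwlimit timetable string.
BWLimit1Time="01:00"; BWLimit1="off"
BWLimit2Time="08:00"; BWLimit2="15M"
BWLimit3Time="16:00"; BWLimit3="12M"

bwlimit="$BWLimit1Time,$BWLimit1 $BWLimit2Time,$BWLimit2 $BWLimit3Time,$BWLimit3"

# rclone reads this timetable as: unlimited from 01:00, 15 MBytes/s
# from 08:00, and 12 MBytes/s from 16:00 (shown, not executed):
echo "rclone move ... --bwlimit \"$bwlimit\" --drive-stop-on-upload-limit"
```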


[Screenshot of the poster's Unraid shares]

Link to post

Hi DZMM.  Thank you for the hard work on this.  Much appreciated.  I am posting this because I am having a problem getting it set up.

 

I have used rclone via PGBlitz and PlexGuide for about 4 years now, and I only mention this to say it is not my first go with rclone and a remote Gdrive setup. That said, I am clearly missing something in getting this set up. Apologies if this has already been covered - 96 pages of a thread can be a beast.

 

[Gdrive]
type = drive
client_id = {Private}
client_secret = {Private}
scope = drive.readonly
token = {"access_token":"Obviously private"}
team_drive = {I think this is also Private but it is the proper identifier for my Team Drive}
root_folder_id = {left blank per instructions during set up}

 

 

I think I should start here: I have run the script and it has created a mount_mergerfs folder with a subfolder named Gdrive, but I can't see any files. It feels like I have missed something simple.

 

Here is my script log, which may be much more helpful. There are some errors I can't quite connect the dots on:

 

/tmp/user.scripts/tmpScripts/rclone_mount/script: line 18: Gdrive: command not found
01.03.2021 21:05:20 INFO: Creating local folders.
01.03.2021 21:05:20 INFO: Creating MergerFS folders.
01.03.2021 21:05:20 INFO: *** Starting mount of remote
01.03.2021 21:05:20 INFO: Checking if this script is already running.
01.03.2021 21:05:20 INFO: Script not running - proceeding.
01.03.2021 21:05:20 INFO: *** Checking if online
01.03.2021 21:05:21 PASSED: *** Internet online
01.03.2021 21:05:21 INFO: Mount not running. Will now mount remote.
01.03.2021 21:05:21 INFO: Recreating mountcheck file for remote.
2021/03/01 21:05:21 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" ":" "-vv" "--no-traverse"]
2021/03/01 21:05:21 DEBUG : Creating backend with remote "mountcheck"
2021/03/01 21:05:21 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2021/03/01 21:05:21 DEBUG : fs cache: adding new entry for parent of "mountcheck", "/usr/local/emhttp"
2021/03/01 21:05:21 DEBUG : Creating backend with remote ":"
2021/03/01 21:05:21 Failed to create file system for ":": config name contains invalid characters - may only contain 0-9, A-Z ,a-z ,_ , - and space
01.03.2021 21:05:21 INFO: *** Creating mount for remote
01.03.2021 21:05:21 INFO: sleeping for 5 seconds
2021/03/01 21:05:21 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use
01.03.2021 21:05:26 INFO: continuing...
01.03.2021 21:05:26 CRITICAL: mount failed - please check for problems. Stopping dockers
radarr
Error response from daemon: No such container: nzbget
Error response from daemon: No such container: plex
Error response from daemon: No such container: sonarr
Error response from daemon: No such container: ombi
Script Finished Mar 01, 2021 21:05.26

 

 

The line that says "config name contains invalid characters" has me stumped. I do not have a colon in the remote name - it is just Gdrive. Seems like this may be the issue.

 

Any help from DZMM or the group is appreciated.
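Not the author, but one guess worth checking (an assumption, since the edited script isn't shown): the log mounts the bare remote ":", i.e. an empty RcloneRemoteName, and "Gdrive: command not found" on line 18 is the classic symptom of whitespace after "=" in a bash assignment. Separately, "scope = drive.readonly" in the rclone config is worth verifying, since a read-only scope would block the script from writing its mountcheck file.

```shell
#!/bin/bash
set -eu
# Sketch of the suspected bash pitfall (example values):

RcloneRemoteName="Gdrive"      # correct: no whitespace around '='
# RcloneRemoteName= "Gdrive"   # wrong: bash runs "Gdrive" as a command
#                              # ("Gdrive: command not found") and the
#                              # variable stays empty afterwards

# With an empty name, the script's remote collapses to a bare ':' -
# exactly the "config name contains invalid characters" failure in the log:
EmptyName=""
echo "rclone copy mountcheck $EmptyName:"          # -> rclone copy mountcheck :
echo "rclone copy mountcheck $RcloneRemoteName:"   # -> rclone copy mountcheck Gdrive:
```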

 

Thanks,

 

 

 

 

 

Link to post

Hi

I'm using nzbget, and normally I need to go into my cache location and delete files every few days because my cache fills up - Sonarr/Radarr say something along the lines of not having access to the files.

So some files make it through no problem, but then some of them stay in /mnt/cache/local.

I point my Radarr to /mnt/user/mount_mergerfs/nidhog (gdrive)/movies - this is the correct way to mount it, right? I do use /mnt/user/ -> /nidhog/.

This has been an ongoing issue for months, but only now have I had time to take a look at it.
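Not a confirmed diagnosis, but "doesn't have access to the files" from Sonarr/Radarr is very often an ownership or permission mismatch on files the downloader leaves behind (on Unraid the media containers typically run as nobody:users, i.e. 99:100). A sketch of the usual check and fix, with example paths:

```shell
#!/bin/bash
set -eu
# Sketch (a common cause, not a confirmed diagnosis): files left in
# /mnt/cache/local that Sonarr/Radarr "can't access" are often owned by
# the wrong user or missing group write permission.
demo="$(mktemp -d)"
touch "$demo/movie.mkv"
chmod 664 "$demo/movie.mkv"   # user+group read/write, like umask 002 in nzbget

# On the real system the fix would be (shown, not executed here):
echo "chown -R nobody:users /mnt/cache/local"
echo "chmod -R u+rw,g+rw /mnt/cache/local"
ls -l "$demo/movie.mkv"
```

Also worth checking that every container maps the same parent path (e.g. the mergerfs mount) so imports are moves on one filesystem rather than copies.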

Link to post

Has anyone here migrated their Plex media server off of Unraid onto a Quick Sync box?

 

I'm in the process of doing it now (I need to handle more transcodes and this is a very cheap solution), and I'm curious about mounting the gdrives on Ubuntu - wondering if I can use a modified version of this script?

Link to post
