Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


My rclone mount crashes every time the upload script finishes. I have just finished setting it up, and I copied the script from the guide, changing only the paths, upload bandwidth and buffer size.

I can't remount until I unmount with fusermount.

 

2021/02/07 20:19:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:20:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:21:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:22:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:23:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:24:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
2021/02/07 20:25:24 INFO  : vfs cache: cleaned: objects 782 (was 782) in use 1, to upload 0, uploading 0, total size 13.841G (was 13.841G)
/usr/sbin/rclone: line 3: 16353 Killed                  rcloneorig --config $config "$@"
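For anyone hitting the same crash, a minimal recovery sketch before the next remount attempt. The paths below assume the guide's default locations and remote name and should be adjusted to your own setup:

```shell
# Recovery sketch after a crashed mount -- paths assume the guide's
# defaults (gdrive_media_vfs remote) and should be adjusted to yours.
MOUNT="/mnt/user/mount_rclone/gdrive_media_vfs"
LOCK="/mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs/mount_running"

# Lazy-unmount releases the mount point even if a process still holds it
command -v fusermount >/dev/null 2>&1 && fusermount -uz "$MOUNT" 2>/dev/null || true

# Remove the script's lock file so the next scheduled run can remount
rm -f "$LOCK"
echo "cleanup done"
```

The `-z` (lazy) flag is what lets the unmount succeed while the path is still "in use 1", as the cache log above shows.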

 


Is it possible to mount a local path into a specific gdrive folder?

I thought I could merge a local "/mnt/user/folder1" with its remote version "gdrive/path/folder1".

 

Using LocalFilesShare2="/mnt/user/folder1" mounts the folder into the mergerfs gdrive root directory, mixing up the paths.

 

I'm not entirely sure if this makes sense, but I only need to merge one specific folder, not the whole gdrive directory, so I was wondering what the best practice would be.

 

Any help is appreciated!
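One way to do this is a second, standalone mergerfs mount that merges just that folder pair instead of the whole remote. A sketch, assuming the example paths from the post above, that mergerfs is installed, and that the rclone remote is already mounted:

```shell
# Merge one folder pair only, instead of the whole remote.
# All paths here are examples based on the question above.
LOCAL="/mnt/user/folder1"
REMOTE="/mnt/user/mount_rclone/gdrive/path/folder1"
TARGET="/mnt/user/mount_mergerfs/folder1"

# Local branch listed first, so new files are created locally first
BRANCHES="$LOCAL:$REMOTE"
echo "$BRANCHES"

command -v mergerfs >/dev/null 2>&1 && \
  mergerfs "$BRANCHES" "$TARGET" \
    -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff || true
```

Keeping this separate from the main mergerfs mount means the rest of the gdrive tree stays untouched.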


Thank you for the scripts and how-to guide!  I have gotten through all of the steps and it looks like things are working well for Plex.  I had a couple of questions to clarify how some things work (I apologize if these have already been answered, I might have missed it while I was reading the thread):

 

  1.  I would like to be able to use some files from my Google Drive in my VMs.  I am unable to "see" the mount_mergerfs directory in my network.  When I checked, it looks like the share was created as root:root instead of nobody:users.  Is this correct?  If so, how do I sync files to/from Google Drive within my VM? 
  2. I have 2 remotes like the instructions described (unencrypted and encrypted).  My encrypted remote is a sub-directory inside of my Google Drive root directory.  So, if I move a file into mount_mergerfs/remotename/encrypteddirectory, will it automatically be encrypted during upload to Google?  If not, what did I miss to set this up correctly?
10 hours ago, lilbumblebear said:

Thank you for the scripts and how-to guide!  I have gotten through all of the steps and it looks like things are working well for Plex.  I had a couple of questions to clarify how some things work (I apologize if these have already been answered, I might have missed it while I was reading the thread):

 

  1.  I would like to be able to use some files from my Google Drive in my VMs.  I am unable to "see" the mount_mergerfs directory in my network.  When I checked, it looks like the share was created as root:root instead of nobody:users.  Is this correct?  If so, how do I sync files to/from Google Drive within my VM? 
  2. I have 2 remotes like the instructions described (unencrypted and encrypted).  My encrypted remote is a sub-directory inside of my Google Drive root directory.  So, if I move a file into mount_mergerfs/remotename/encrypteddirectory, will it automatically be encrypted during upload to Google?  If not, what did I miss to set this up correctly?

1. You should be able to see your files just like other unRAID shares. Check your unRAID share settings.

2. Correct - rclone encrypts the file. If you want to check for peace of mind, create a new folder on your server and monitor gdrive to see the encrypted folder/file being created.
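That check can also be done from the command line by listing both sides. The remote names below are examples ("gdrive" as the raw remote, "gdrive_media_vfs" as the crypt remote pointing at a crypt subfolder):

```shell
# Compare the raw remote with the crypt remote: the same files show up
# with obfuscated names on the raw side. Remote names are examples.
CHECKED="no"
if command -v rclone >/dev/null 2>&1; then
  rclone lsf gdrive:crypt || true        # raw view: encrypted names
  rclone lsf gdrive_media_vfs: || true   # crypt view: readable names
fi
CHECKED="yes"
echo "$CHECKED"
```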


Thank you for the help! 

 

I rebooted the server this morning and all of the shares are now visible.  I will leave my notes here in case they are useful to anyone. 

  1. When I first added the scripts, all of the shares were created as follows (all with same settings for export:yes and security:public)
    1. local
    2. mount_mergerfs
    3. mount_rclone
  2. I was able to see the local share in my VMs via the network, but not the mount_mergerfs or mount_rclone shares
  3. I checked the ownership of the shares with: 
    cd /mnt/user
    ls -l

     

  4. I saw the mount_mergerfs and mount_rclone shares were listed as root:root instead of nobody:users
  5. Rebooted server
  6. Shares mounted and are visible inside of VM. Checked the ownership again, this time all shares are listed as nobody:users

 

Please let me know if I did something incorrectly with the scripts the first time.  It does look like a reboot fixed my issue. Thank you again!
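For future reference, a reboot shouldn't be required; the ownership can be fixed in place. A sketch, assuming the guide's default share locations (only the share roots are re-owned here, deliberately not recursive over the cloud mount):

```shell
# Re-own the share roots as nobody:users so SMB/VMs can see them.
# Only touches directories that actually exist.
for d in /mnt/user/mount_mergerfs /mnt/user/mount_rclone; do
  [ -d "$d" ] && chown nobody:users "$d" || true
done
DONE="ownership pass complete"
echo "$DONE"
```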

14 hours ago, muwahhid said:

Mounting the Mega cloud:


rclone mount --max-read-ahead 1024k --allow-other mega: /mnt/disks/mega &


In response to this


2021/02/20 17:51:34 NOTICE: mega root '': --vfs-cache-mode writes or full is recommended for this remote as it can't stream

What should be done?

If you're not using my scripts, please switch to them; I can only provide support in this thread for my scripts.
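For completeness, the NOTICE itself names the fix: Mega can't stream, so the mount needs a VFS cache mode. A sketch based on the command in the question above:

```shell
# Same mount as in the question, plus a write cache so uploads work:
# --vfs-cache-mode writes buffers files locally before uploading.
if command -v rclone >/dev/null 2>&1; then
  rclone mount --allow-other --vfs-cache-mode writes \
    mega: /mnt/disks/mega &
fi
MODE="writes"
echo "$MODE"
```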


Hi Guys,

 

Since I rebooted my server I'm not able to use the mount script.

 

I get this message every time:

 

 



Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone - Mount 1/log.txt

/tmp/user.scripts/tmpScripts/Rclone - Mount 1/script: line 247: unexpected EOF while looking for matching `"'
/tmp/user.scripts/tmpScripts/Rclone - Mount 1/script: line 249: syntax error: unexpected end of file
Script Finished Feb 21, 2021 20:19.51

 

 

 

 

Script:

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="100G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="ignore" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\"plex/TV"\ # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 15s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
	sleep 5
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
exit
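The "unexpected EOF while looking for matching `"'" error means a quote somewhere in the settings is unbalanced, and the MountFolders line in the settings above looks like the culprit: it opens with an escaped quote and never closes the braces. The template's balanced form is:

```shell
# Balanced form, matching the script template: escaped braces, paired quotes
MountFolders=\{"plex/TV"\}
echo "$MountFolders"   # prints {plex/TV}
```

Bash then brace-expands this value in the script's `eval mkdir -p` line to create each listed folder.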

 


@DZMM I am posting all the details here to make it easier for you to help me

 

This is my mount script. It used to work perfectly, but now I am getting an error.

 

The current mounting script:

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."

exit

else

touch /mnt/user/appdata/other/rclone/rclone_mount_running

fi

#######  End Check if script already running  ##########

#######  Start rclone google mount  ##########

# check if google mount already created

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."

else

echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

# create directories for rclone mount and unionfs mount

mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs
mkdir -p /mnt/user/rclone_upload/google_vfs

rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

# check if mount successful

# slight pause to give mount time to finalise

sleep 5

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: Check rclone google vfs mount success."

else

echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone google vfs mount failed - please check for problems."

rm /mnt/user/appdata/other/rclone/rclone_mount_running

exit

fi

fi

#######  End rclone google mount  ##########

#######  Start unionfs mount   ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."

else

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."

else

echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."

rm /mnt/user/appdata/other/rclone/rclone_mount_running

exit

fi

fi

#######  End Mount unionfs   ##########

############### starting dockers that need unionfs mount ######################

# only start dockers once

if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"

else

touch /mnt/user/appdata/other/rclone/dockers_started

echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."

docker start binhex-emby
docker start binhex-sabnzbd
docker start binhex-radarr
docker start binhex-sonarr

fi

############### end dockers that need unionfs mount ######################

exit

 

Then all of a sudden, after two years of working perfectly, I started getting the following error message:

 

27.02.2021 15:00:01 INFO: mounting rclone vfs.
2021/02/27 15:00:03 Fatal error: Directory is not empty: /mnt/user/mount_rclone/google_vfs If you want to mount it anyway use: --allow-non-empty option
27.02.2021 15:00:06 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:00.06

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

So, as suggested in the error message, I added "--allow-non-empty", and the mount command now looks like this:

 

rclone mount --allow-non-empty --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

 

Now, I am getting this error:

 

27.02.2021 15:07:21 INFO: mounting rclone vfs.
2021/02/27 15:07:22 mount helper error: fusermount: unknown option 'nonempty'
2021/02/27 15:07:22 Fatal error: failed to mount FUSE fs: fusermount: exit status 1
27.02.2021 15:07:26 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:07.26

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

 

Many people tell me to stop trying to mount to a non-empty folder, but I cannot. First of all, this was working for more than two years, so why did it stop all of a sudden? I just want to understand.

 

The main reason I have to mount to a non-empty folder is that I am combining two folders of my media, one on Google Drive and the other on my unRAID server. They are both non-empty, so I have to mount to my non-empty media folder.

1 hour ago, livingonline8 said:

The main reason I have to mount to a non-empty folder is that I am combining two folders of my media, one on Google Drive and the other on my unRAID server. They are both non-empty, so I have to mount to my non-empty media folder.

The rclone mount should always be to an empty directory; mergerfs/unionfs merges the local and cloud folders. The simple solution is to make sure the rclone mount location is empty.

 

Also, I recommend you switch to the mergerfs scripts which are much better and will make debugging much easier.
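A sketch of that cleanup, using /tmp stand-ins for the real paths from the post above (on a live system, swap in /mnt/user/mount_rclone/google_vfs and /mnt/user/rclone_upload/google_vfs, with the mount stopped):

```shell
# Empty the rclone mount point by moving stray local files into the
# upload folder, so mergerfs/unionfs can merge them back in.
# /tmp paths below are stand-ins for the real /mnt/user/... locations.
MNT="/tmp/demo_fix/mount_rclone/google_vfs"
UPLOAD="/tmp/demo_fix/rclone_upload/google_vfs"
mkdir -p "$MNT" "$UPLOAD"
touch "$MNT/stray.mkv"   # simulate a file written into the unmounted path

# Move the contents, not the directory itself, so the mount point survives
find "$MNT" -mindepth 1 -maxdepth 1 -exec mv {} "$UPLOAD"/ \;
ls -A "$MNT" | wc -l     # 0 means empty: safe to mount without --allow-non-empty
```

The files are not lost: once the unionfs/mergerfs mount is up, they appear in the merged view and the upload script moves them to Google Drive as usual.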


I am EXTREMELY frustrated with trying to get this working. I have done it before on my seedbox a year ago with MergerFS, but the unRAID shares are tripping me up.

I have my shares setup like so:
mnt/user/Backups
mnt/user/Movies
mnt/user/TV
mnt/user/Music
.....

I think I understand that I need to manually move all my existing files to my MergerFS mount so they get uploaded to GDrive, and then point my containers like Radarr and Sonarr to the MergerFS mount. Correct me if this is incorrect.

What is throwing me off is that everyone's rclone mount script has this path: `LocalFilesShare=/mnt/user/local`

I guess this path is where everyone keeps their unRAID shares?

I have my shares set up so that there is a share for each folder (see my paths at the top of this post), so I can't just specify one directory.

I tried using multiple LocalFilesShare paths, which kind of worked, except that it mixed all the files from my shares (Backups, Music, SW, etc.) together in my mount_mergerfs/gdrive directory with no folder structure or organization.

I need help! I have been working on this for 12 hours and have lost sleep over it. I have added a screenshot of my Shares; if someone can edit my config and show me how to do it with my setup, I would be so grateful.
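One workable pattern is a single local share with one subfolder per category, so mergerfs presents the same tree at the merged mount and each container points at its own category folder. A sketch with a /tmp stand-in (on a live system this would be /mnt/user/local/gdrive, and containers would then use /mnt/user/mount_mergerfs/gdrive/Movies and so on):

```shell
# Build the layout the mount script expects: one LocalFilesShare with a
# subfolder per category. /tmp path is a stand-in for /mnt/user/local/gdrive.
LOCAL="/tmp/demo_layout/local/gdrive"
mkdir -p "$LOCAL/Backups" "$LOCAL/Movies" "$LOCAL/TV" "$LOCAL/Music"

# Existing per-category shares would then be moved under the matching
# subfolder, e.g.:  mv /mnt/user/Movies/* "$LOCAL/Movies/"
ls "$LOCAL"
```

This keeps the folder structure intact in the merged view, which the multiple LocalFilesShare2/3/4 branches cannot do, since those merge at the root.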

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/rclone_mount" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="200G" # Maximum size of rclone cache
RcloneCacheMaxAge="48h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="sonarr radarr Overseerr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"Backup,SW,Movies,TV"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3=""
LocalFilesShare4=""

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

 

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="copy" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/rclone_upload" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/rclone_mount" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
#Command1="--exclude downloads/**"
Command1="--exclude downloads/**"
Command2="--exclude gdrive/**"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######


[Screenshot: unRAID Shares list]


Hi DZMM.  Thank you for the hard work on this.  Much appreciated.  I am posting this because I am having a problem getting it set up.

 

I have used rclone via PGBlitz and PlexGuide for about 4 years now, and I only mention that to say this is not my first go with rclone and a remote Gdrive setup. That said, I am clearly missing something in getting this set up. Apologies if this has already been covered; 96 pages of a thread can be a beast.

 

[Gdrive]
type = drive
client_id = {Private}
client_secret = {Private}
scope = drive.readonly
token = {"access_token":"Obviously private"}
team_drive = {I think this is also Private but it is the proper identifier for my Team Drive}
root_folder_id = {left blank per instructions during set up}

 

 

I think I should start here. I have run the script and it has created a mount_mergerfs folder with a subfolder named Gdrive, but I can't see any files. It feels like I have missed something simple.

 

Here is my script log, which may be much more helpful. There are some errors in it that I can't connect the dots on:

 

/tmp/user.scripts/tmpScripts/rclone_mount/script: line 18: Gdrive: command not found
01.03.2021 21:05:20 INFO: Creating local folders.
01.03.2021 21:05:20 INFO: Creating MergerFS folders.
01.03.2021 21:05:20 INFO: *** Starting mount of remote
01.03.2021 21:05:20 INFO: Checking if this script is already running.
01.03.2021 21:05:20 INFO: Script not running - proceeding.
01.03.2021 21:05:20 INFO: *** Checking if online
01.03.2021 21:05:21 PASSED: *** Internet online
01.03.2021 21:05:21 INFO: Mount not running. Will now mount remote.
01.03.2021 21:05:21 INFO: Recreating mountcheck file for remote.
2021/03/01 21:05:21 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" ":" "-vv" "--no-traverse"]
2021/03/01 21:05:21 DEBUG : Creating backend with remote "mountcheck"
2021/03/01 21:05:21 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2021/03/01 21:05:21 DEBUG : fs cache: adding new entry for parent of "mountcheck", "/usr/local/emhttp"
2021/03/01 21:05:21 DEBUG : Creating backend with remote ":"
2021/03/01 21:05:21 Failed to create file system for ":": config name contains invalid characters - may only contain 0-9, A-Z ,a-z ,_ , - and space
01.03.2021 21:05:21 INFO: *** Creating mount for remote
01.03.2021 21:05:21 INFO: sleeping for 5 seconds
2021/03/01 21:05:21 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use
01.03.2021 21:05:26 INFO: continuing...
01.03.2021 21:05:26 CRITICAL: mount failed - please check for problems. Stopping dockers
radarr
Error response from daemon: No such container: nzbget
Error response from daemon: No such container: plex
Error response from daemon: No such container: sonarr
Error response from daemon: No such container: ombi
Script Finished Mar 01, 2021 21:05.26

 

 

The line that says "config name contains invalid characters" has me stumped.  I do not have a colon in the name of the remote.  It is just Gdrive.  Seems like this may be the issue.
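For reference, here is what I suspect is happening, assuming the mount script assigns the remote name to a shell variable (the variable name below is my guess, not necessarily what the script uses). A stray space after the equals sign would explain both log errors at once: bash runs "Gdrive" as a command (hence "command not found" on line 18) and leaves the variable empty, so the later copy target collapses to ":".

```shell
# Wrong - a space after '=' makes bash treat "Gdrive" as a command:
#   RcloneRemoteName= Gdrive
# bash prints "Gdrive: command not found" and the variable stays empty,
# so "$RcloneRemoteName:" later expands to just ":".

# Right - no spaces around '=':
RcloneRemoteName="Gdrive"
echo "copy target is '${RcloneRemoteName}:'"   # copy target is 'Gdrive:'
```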

 

Any help from DZMM or the group is appreciated.

 

Thanks,

Link to comment

Hi

I'm using nzbget, and normally I need to go into my cache location and delete files every few days because my cache fills up; Sonarr/Radarr say something along the lines that they don't have access to the files.

So, some files make it through no problem; then there are some that stay in /mnt/cache/local.

I point my Radarr to /mnt/user/mount_mergerfs/nidhog (gdrive)/movies. This is the correct way to mount it, right? I do use /mnt/user/ -> /nidhog/.

This has been an ongoing issue for months, but only now have I had time to take a look at it.

Link to comment

Has anyone here migrated their plex media server off of unraid onto a quick sync box?

 

In the process of doing it now (need to handle more transcodes and this is a very cheap solution), and curious about mounting the gdrives on Ubuntu - wondering if I can use a modified version of this script?

Link to comment
44 minutes ago, privateer said:

Has anyone here migrated their plex media server off of unraid onto a quick sync box?

 

In the process of doing it now (need to handle more transcodes and this is a very cheap solution), and curious about mounting the gdrives on Ubuntu - wondering if I can use a modified version of this script?

I may be missing the mark here - but let UnRaid run the scripts and share the data. Wherever Plex is, just point to the UnRaid shares. That's exactly how my current setup is. None of my stuff here runs in the UnRaid dockers. The only downside is that if the mount goes down, your library might get wonky.

 

Typically, Sonarr will complain about it - Emby doesn't do anything other than stall. 

Link to comment
32 minutes ago, axeman said:

I may be missing the mark here - but let UnRaid run the scripts and share the data. Wherever Plex is, just point to the UnRaid shares. That's exactly how my current setup is. None of my stuff here runs in the UnRaid dockers. The only downside is that if the mount goes down, your library might get wonky.

 

Typically, Sonarr will complain about it - Emby doesn't do anything other than stall. 

 

I have a separate box (Ubuntu) so I can use quicksync to transcode. I currently have the unraid drives mounted using AutoFS.

 

I'm asking about directly mounting the gdrive on the quicksync box. The scripts here are still needed on the unraid box for sonarr, radarr, uploading files to the cloud, etc. It seems like you're suggesting something like mounting the gdrive on unraid, then mounting that mount on the QS box.

 

Why would I do that instead of mount the gdrive directly on the QS box?

Link to comment
3 minutes ago, privateer said:

 

I have a separate box (Ubuntu) so I can use quicksync to transcode. I currently have the unraid drives mounted using AutoFS.

 

I'm asking about directly mounting the gdrive on the quicksync box. The scripts here are still needed on the unraid box for sonarr, radarr, uploading files to the cloud, etc. It seems like you're suggesting something like mounting the gdrive on unraid, then mounting that mount on the QS box.

 

Why would I do that instead of mount the gdrive directly on the QS box?

 

That's just how I have it - because of circumstance, really. UnRaid and Emby (Sonarr too) were on different VMs for ages. I just added the scripts to UnRaid and updated the existing instances to point to the mounts on UnRaid. I didn't have to do anything else.

 

I also have non-cloud shares that I still need UnRaid for - so to me, having all things storage-related on the UnRaid server (local and cloud), and presentation and gathering on a separate machine, is a good separation of concerns.

Link to comment

Don't know if you guys can help, but I'm stuck at creating the service accounts. When I run python3 gen_sa_accounts.py --quick-setup 1 I get "request had insufficient authentication scopes". I've enabled the Drive API and have the credentials.json, so I'm not sure what's wrong.

 

EDIT: OK, got it working with manually created service accounts. Everything worked first try and I uploaded a 4K movie. Plex playback is fine, but for movies with Atmos I use MRMC (Kodi) on my Shield because the Plex app stutters. Within MRMC I tried to add gdrive_media_vfs/4kmovies. MRMC cannot see anything below gdrive_media_vfs when trying to add it as an NFS share. It sees it with SMB and playback is fine so far, but I prefer NFS because it's faster. I'll keep playing with it, but if anyone has any insight please let me know. I've got to upload 4K Lord of the Rings and see how the Atmos works.

 

Also, I assume it is storing the streaming movies in RAM?

 

EDIT: Thought up a couple more questions. I'm not very smart, so these may be dumb questions and assumptions.

 

Is there a reason not to change

RcloneCacheShare="/mnt/user0/mount_rclone"

to

RcloneCacheShare="/mnt/disks/cache/mount_rclone"

My thinking is that then it would stay completely on my 4 TB SSD cache drive.

 

After I got it working, I created a folder with Midnight Commander in /mnt/user/mount_mergerfs/gdrive_media_vfs/ called 4kmovies. Would it be better to change

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}

to include other directories I want, such as "4kmovies" and "4ktvshows"? This won't overwrite anything each time the mount script runs?
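As a partial sanity check, I ran a quick throwaway test, since as far as I can tell the mount script creates these folders with mkdir -p (correct me if that assumption is wrong) - re-running it with extra folders only adds the missing ones and leaves existing contents alone:

```shell
# mkdir -p leaves existing folders and their contents alone (throwaway paths)
mkdir -p /tmp/mergerfs-test/{movies,tv}
echo keep > /tmp/mergerfs-test/movies/file.txt

# re-run with two extra folders, as if the folder list had been extended
mkdir -p /tmp/mergerfs-test/{movies,tv,4kmovies,4ktvshows}

cat /tmp/mergerfs-test/movies/file.txt   # still prints "keep"
rm -rf /tmp/mergerfs-test                # clean up
```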

 

I want to start uploading 73TB this weekend so I just want to get everything right and understand how it works.

Edited by Megaman69
Link to comment
6 hours ago, neeiro said:

Quick question - is it possible to have 2 Unraid servers using the same Google account at the same time, or will it cause problems?

 

Also would you just use the same config file/scripts on each?

Should be possible - just make sure only one is changing files to be safe, and the other is polling regularly to see new files.
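To sketch the polling side: on the second box I would mount read-only and let rclone's change polling pick up whatever the first box uploads. Something like this (the remote name and mount path are placeholders for whatever your config uses; the flags are standard rclone mount options):

```shell
# Second (read-mostly) server: poll the shared remote for changes
# made by the other box. Remote name and paths are placeholders.
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
  --poll-interval 15s \
  --dir-cache-time 720h \
  --read-only &
```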

Link to comment
On 3/5/2021 at 9:12 AM, axeman said:

 

That's just how I have it - because of circumstance, really. UnRaid and Emby (Sonarr too) were on different VMs for ages. I just added the scripts to UnRaid and updated the existing instances to point to the mounts on UnRaid. I didn't have to do anything else.

 

I also have non-cloud shares that I still need UnRaid for - so to me, having all things storage-related on the UnRaid server (local and cloud), and presentation and gathering on a separate machine, is a good separation of concerns.

Just went with an rclone mount actually.

Link to comment
