Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

4 minutes ago, Kaizac said:

 

No, why would that be needed? You're just mounting all the folders you have in this Google Drive folder. They will also be available through Windows Explorer, for example. So if that's not a problem, you can just leave it as is.

Thanks. Now I think I got the hang of it. 

 

Thank you very very much :)

Link to comment

Hello,
I'm kinda stuck on this, and I read through the guide multiple times, but it seems like something is left implicit that I don't get.

For the rclone config, this is my setup (details are blurred):
[screenshot of the rclone config]

The terminal rclone commands work just fine.

 

I'm not sure if this is done right:

[screenshot of the docker containers and path mappings]

(I know there's a lot of containers and paths)

I also read this sentence: "To get the best performance out of mergerfs, map dockers to /user --> /mnt/user". I'm not sure what this means. Should I add a new path to Plex with "/mnt/user/", and the same for the Sonarr, Radarr and Deluge containers, or what am I supposed to do?
Also, why?

I don't quite get it.

 

I know how to set up the cron job, but I want to make sure I get it right before I use it.

 

Maybe someone could walk me through it?

Link to comment
5 minutes ago, Nanobug said:

I'm kinda stuck on this, and I read through the guide multiple times, but it seems like something is implicit that I don't get. [...] Maybe someone could walk me through it?

In your mount script change RcloneRemoteName to "crypt".

I'm wondering about your crypt config though. I'm not sure if the / in dropbox:/crypt works. Did you test that? I would just have dropbox:crypt for the crypt remote; the / is not needed, afaik.
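For reference, a typical pair of remotes for this kind of setup looks something like the following in .rclone.conf (names are illustrative and the token/passwords are redacted; your exact values will differ):

```
[dropbox]
type = dropbox
token = ***

[crypt]
type = crypt
remote = dropbox:crypt
password = ***
password2 = ***
```

The crypt remote just points at a folder on the underlying dropbox remote, which is why dropbox:crypt (no extra slash) is the usual form.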

 

Regarding the /user path: in your docker templates you have to create a path named /user which points to /mnt/user on your filesystem.

Then within the docker containers (the actual software like Plex) you start your file mappings from /user. For example /user/mount_mergerfs/crypt/movies for the movies path in Plex. You do the same for Radarr.

 

This helps performance because the containers are isolated, so they have their own paths and file system, which they also use to communicate with each other. So if you have Plex use /Films but Radarr /Movies, then they won't be able to find each other's paths because they don't know them.

And this also gives the system the idea that it's 1 share/folder/disk, so it will move files directly instead of creating overhead by copying and rewriting the file. This is not a 100% technically accurate explanation, but I hope it makes some sense.
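To make the shared-path idea concrete, here is a tiny sketch (hypothetical file names) of how giving every container the same single /user -> /mnt/user mapping means a path one container hands to another is valid in both:

```shell
# Every docker template gets the same single volume mapping:
#   host /mnt/user  ->  container /user
HOST_ROOT="/mnt/user"
CONTAINER_ROOT="/user"

# A path Radarr stores is the same path Plex sees, because both containers
# translate host paths identically:
host_path="/mnt/user/mount_mergerfs/crypt/movies/Film (2023)/film.mkv"
container_path="${CONTAINER_ROOT}${host_path#"$HOST_ROOT"}"
echo "$container_path"
```

With per-app mappings like /Films and /Movies instead, that translation differs per container and the paths stop lining up.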

 

Another thing: I'm not sure if the dockers in the docker start list need a comma in between. You might want to test/double-check that first as well.

Link to comment
8 hours ago, Kaizac said:

In your mount script change RcloneRemoteName to "crypt". [...] Another thing: I'm not sure if the dockers in the docker start list need a comma in between.

The dropbox:/crypt part works; I can use the rclone commands in the terminal with it, so I'm happy with it. I'm not sure it's done the right/best way, but it works, and it took me a while to get it to work, so I don't want to mess with it :P

 

Regarding the /user path, why not just point it to /user/mount_mergerfs/crypt/movies instead of pointing it to /mnt/user?

 

It kinda makes sense that it sees it as one disk instead.

If someone knows a deeper explanation, I'd love to hear it and learn from it :)

 

Regarding the docker start, there are no commas in between. There's a dash ( - ) though, which is part of the container/docker names that I use.

 

 

Link to comment

Had to edit the script from gdrive_upload_vfs to gdrive_vfs

 

it works now :)

 

 

 

Hi again.

 

A new install on an Intel NUC 11 i7.

 

When I try to run an upload, nothing happens. I'm not sure if I'm making any mistakes.

 

The picture shows I have something in my folders that is ready to upload :)

 

 

 

 

Script location: /tmp/user.scripts/tmpScripts/rclone_upload/script
Note that closing this window will abort the execution of this script
27.04.2023 02:07:58 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_upload_vfs ***
27.04.2023 02:07:58 INFO: *** Starting rclone_upload script for gdrive_upload_vfs ***
27.04.2023 02:07:58 INFO: Script not running - proceeding.
27.04.2023 02:07:58 INFO: Checking if rclone installed successfully.
27.04.2023 02:07:58 INFO: rclone installed successfully - proceeding with upload.
27.04.2023 02:07:58 INFO: Uploading using upload remote gdrive_upload_vfs
27.04.2023 02:07:58 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2023/04/27 02:07:58 INFO : Starting transaction limiter: max 8 transactions/s with burst 1
2023/04/27 02:07:58 DEBUG : --min-age 15m0s to 2023-04-27 01:52:58.840599861 -0700 PDT m=-899.967713461
2023/04/27 02:07:58 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/gdrive_vfs" "gdrive_upload_vfs:" "--user-agent=gdrive_upload_vfs" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2023/04/27 02:07:58 DEBUG : Creating backend with remote "/mnt/user/local/gdrive_vfs"
2023/04/27 02:07:58 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2023/04/27 02:07:58 DEBUG : Creating backend with remote "gdrive_upload_vfs:"
2023/04/27 02:07:58 Failed to create file system for "gdrive_upload_vfs:": didn't find section in config file
27.04.2023 02:07:58 INFO: Not utilising service accounts.
27.04.2023 02:07:58 INFO: Script complete

[screenshot showing files ready to upload]

Edited by bubbadk
got it to work
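The "didn't find section in config file" line in the log above is the giveaway: the remote name the upload script uses must exactly match a [section] header in the rclone config. A self-contained sketch of that check, using a throwaway config and the names from this post:

```shell
# Reproduce the name-mismatch check against a throwaway config file.
conf=$(mktemp)
printf '[gdrive]\ntype = drive\n\n[gdrive_vfs]\ntype = crypt\n' > "$conf"

wanted="gdrive_upload_vfs"   # the name the script was originally using
if grep -q "^\[$wanted\]" "$conf"; then
  result="found"
else
  result="missing"   # rclone then fails: "didn't find section in config file"
fi
echo "remote '$wanted': $result"
```

Changing the script's remote name to gdrive_vfs (a section that actually exists) is exactly the fix described above.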
Link to comment
  • 2 weeks later...

I just looked at my upload script log.

 

It's apparently not uploading to Google. Can anyone see why? :)

 

What files and logs do you need?

 

06.05.2023 09:30:03 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs ***
06.05.2023 09:30:03 INFO: *** Starting rclone_upload script for gdrive_vfs ***
06.05.2023 09:30:03 INFO: Exiting as script already running.
Script Finished May 06, 2023 09:30.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
 

Link to comment
2 hours ago, bubbadk said:

I just looked at my upload script log. [...] 06.05.2023 09:30:03 INFO: Exiting as script already running.
 

You probably rebooted your server and the checker file has not been deleted. It should be in your /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/ directory, named upload_running_daily (or something along those lines).
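This is the checker-file pattern the upload script uses: it drops a marker file while running and removes it on exit, so a reboot mid-run leaves a stale file that blocks every later run until you delete it. A minimal sketch (using a temp path instead of the real appdata path):

```shell
LOCK="$(mktemp -d)/upload_running"

run_upload() {
  if [[ -f "$LOCK" ]]; then
    echo "Exiting as script already running."
    return
  fi
  touch "$LOCK"
  echo "uploading..."   # the real script runs rclone move here
  rm "$LOCK"            # never reached if the server reboots mid-upload
}

run_upload    # first run: uploads and cleans up its lock
touch "$LOCK" # simulate the stale file left behind by a reboot
run_upload    # now it refuses to run until the file is removed
```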

  • Thanks 1
Link to comment
4 hours ago, Kaizac said:

You probably rebooted your server and the checker file has not been deleted. Should be in your /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/ directory called upload_running_daily (or something along those lines).

 

That did the trick.

 

Thank you so kindly :)

Link to comment
On 4/27/2023 at 8:01 AM, Nanobug said:

The dropbox:/crypt part works [...] Regarding the docker start, there are no commas in between.

I'm reading your post back, and I now see I'm missing the conclusion. Did you get it to work as you wanted? If not, let me know what your issue is.

 

By the way, regarding /mnt/user and the performance increase when moving files: it's called "atomic moving". You can look that up if you want to know more about it.
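A quick illustration of what atomic moving buys you: within one filesystem, mv is just a rename() call, so no data is copied and the file's inode stays the same; only across filesystems does it degrade into copy + delete. Sketch with throwaway files:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/downloads" "$tmp/movies"
echo "fake movie data" > "$tmp/downloads/film.mkv"

inode_before=$(stat -c %i "$tmp/downloads/film.mkv")
mv "$tmp/downloads/film.mkv" "$tmp/movies/film.mkv"   # same filesystem: instant rename
inode_after=$(stat -c %i "$tmp/movies/film.mkv")

if [ "$inode_before" = "$inode_after" ]; then
  echo "same inode - no data was copied"
fi
```

Mapping everything through one /user path lets the system treat the mergerfs mount as one filesystem, which is why moves there are instant.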

Link to comment
2 hours ago, fzligerzronz said:

:(

 

With the impending demise of Google Drive in 2 months, it looks like I'll be moving stuff over to Dropbox.

 

Would this be the same setup as we do for Google Drive?

I didn't get the e-mail (yet). I'm also actually using Google Workspace for my business, so maybe they check for private and business accounts/bank accounts? Could be different factors, I have no clue. But in my admin console it still says unlimited storage, so they would be breaking the contract, I suppose, by disconnecting people. I've already anticipated a moment where it would shut down though; it's been happening with all the other services. I wouldn't count on Dropbox to stay unlimited either. You'll also have to ask yourself what your purpose for it is. Depending on your country's prices, you can buy 50-100TB of drives for each year you'd pay for Dropbox. That's permanent storage you can use for years to come.

 

And for media consumption, I've pretty much moved over to Stremio with Torrentio + Debrid (Real-Debrid is the preferred one). For 3-4 euros a month, you can watch all you want. The only caveat is that you can only use most Debrid services from 1 IP at the same time, but there's no limit on the number of users from that same IP. There is already a project called plex_debrid which you can use to run your Debrid through your Plex, so it will count as 1 IP for outside users as well.

 

To answer your question, Dropbox has different API limits. I think (but it might have changed) they don't have limits on how much you can upload and download, only on how often you hit the API. So using rotating service accounts and multiple mounts won't be needed (a big plus). But you need to rate limit the API hits correctly to prevent temporary bans.
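As a sketch only (the numbers here are illustrative, not Dropbox-blessed limits), the rclone flag for capping API calls per second is --tpslimit; an upload command for a Dropbox-backed remote could be rate-limited like this ("crypt:" is an example remote name):

```shell
# Assemble (and just print, for illustration) a rate-limited upload command.
# --tpslimit caps API transactions/second so the pacer isn't hammered,
# replacing the rotating-service-account tricks needed on Google Drive.
UPLOAD_CMD=(rclone move /mnt/user/local/crypt crypt:
  --tpslimit 12
  --transfers 4
  --checkers 8
  --min-age 15m
  -vv)
echo "${UPLOAD_CMD[@]}"
```

Tune the --tpslimit value down if you see temporary bans in the logs.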

You can always request the trial and ask for more than the 5TB you get in the trial to see how things are. I've seen different experiences with Dropbox, sometimes it's very easy to get big amounts of storage immediately (100-200TB) other times you'll have to ask for every 5TB extra storage. Sometimes it is added automatically, apparently. So everyone's miles will vary, basically ;).

Link to comment

 
With the impending demise of Google Drive in 2 months, it looks like I'll be moving stuff over to Dropbox.
 
Would this be the same setup as we do for Google Drive?
You need a minimum of 3 users for Dropbox, at around $25 a user.

I'm looking into it also, but the Google Drive notification seems to depend on the type of account: I've got 2 accounts with the same setup on different Enterprise plans and only got 1 email so far.

Sent from my Pixel 2 XL using Tapatalk

Link to comment

I give up, does anyone know what the hell this error is actually about?

 

2023/05/15 10:11:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdt-Lz90u9Nes1YU9fbno82aTyk9La51mu1QEnq3UbWL3Shb2lLaFGQvwDdR76XjFluBGLd02Gls5nR90LwR_qVyvg": context canceled

 

A script copies files into the mergerfs folder/share. Most stuff works fine, but every now and again the above error happens.

 

'/mnt/user/data/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv' -> '/mnt/user/mount_mergerfs/google/Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv'

Link to comment
30 minutes ago, 00b5 said:

I give up, does anyone know what the hell this error is actually about? [...] Most stuff works fine, but every now and again the above error happens.

Hard to troubleshoot without the scripts you're running.

Link to comment
7 minutes ago, Kaizac said:

Hard to troubleshoot without the scripts you're running.

You mean the main mount script, or the one that copies files into the merger folder? 

 

Main Mount Script

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="google" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
#LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
LocalFilesShare="ignore" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="600G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
# DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 000 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 15 seconds"
# slight pause to give mount time to finalise
	sleep 15
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

 

Copy script (copies files from a folder that is syncing with another server via Resilio Sync), runs about every 5 mins or so

 

#!/bin/bash
# btsync capture SCRIPT
# exec 3>&1 4>&2
# trap 'exec 2>&4 1>&3' 0 1 2 3
#  Everything below will go to the file 'rsync-date.log':

# timestamp format for log lines (no spaces, since it is expanded unquoted below)
date_format='+%d.%m.%Y-%T'
LOCKFILE=/tmp/lock.txt
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
    echo "[ $(date ${date_format}) ] Copy script already running @ ${LOCKFILE}"
    exit
fi

# make sure the lockfile is removed when we exit and then claim it
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > ${LOCKFILE}

if [[ -f "/mnt/user/mount_rclone/google/mountcheck" ]]; then
	echo "[ $(date ${date_format}) ] INFO: rclone remote is mounted, starting copy"

echo "[ $(date ${date_format}) ] #################################### ################"
echo "[ $(date ${date_format}) ] ################# Copy TV Shows ################"
echo "[ $(date ${date_format}) ] copying TV shows from Resilio Sync:"
cp -rv /mnt/user/data/TV/* /mnt/user/mount_mergerfs/google/Media/TV/

echo "[ $(date ${date_format}) ] ################# Copy Movies ################"
echo "[ $(date ${date_format}) ] copying Movies from Resilio Sync:"
cp -rv /mnt/user/data/Movies/* /mnt/user/mount_mergerfs/google/Media/Movies/

echo "[ $(date ${date_format}) ] ###################################################"

else
	echo "[ $(date ${date_format}) ] INFO: Mount not running. Will now abort copy"
fi

sleep 30
rm -f ${LOCKFILE}

 

 

Here is 10 mins of the log where it tries to copy this file up:

 

2023/05/15 10:17:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:18:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:19:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:20.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:20:01 INFO: Not creating local folders as requested.
15.05.2023 10:20:01 INFO: Creating MergerFS folders.
15.05.2023 10:20:01 INFO: *** Starting mount of remote google
15.05.2023 10:20:01 INFO: Checking if this script is already running.
15.05.2023 10:20:01 INFO: Script not running - proceeding.
15.05.2023 10:20:01 INFO: *** Checking if online
15.05.2023 10:20:02 PASSED: *** Internet online
15.05.2023 10:20:02 INFO: Success google remote is already mounted.
15.05.2023 10:20:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:20:02 INFO: dockers already started.
15.05.2023 10:20:02 INFO: Script complete
Script Finished May 15, 2023 10:20.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:20:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:21:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycds6tAjl-sjvsxWVtbO2hr6eHhfX58FibGCOIPFijx8n5_LhEaKRKVeLAmdM7rdxiIM6AnlhInp9n8Bl1IGxgz4oBg": context canceled
2023/05/15 10:21:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:21:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.419Gi (was 596.419Gi)
2023/05/15 10:22:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.515Gi (was 599.515Gi)
2023/05/15 10:22:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:22:47 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:22:47 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:23:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:24:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:25:01 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtO-3iUahxqOPfrLtJvUGzKbVbC_jIet8MR1hSM4t-JDEvJGPEXYgjVyO3alao3Jira9AI0ZWLeDbVKmtRXvy9FdQ": context canceled
2023/05/15 10:25:01 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:25:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.522Gi (was 596.521Gi)
2023/05/15 10:26:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.625Gi (was 599.625Gi)
2023/05/15 10:26:41 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:27:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:28:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:29:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:30.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:30:01 INFO: Not creating local folders as requested.
15.05.2023 10:30:01 INFO: Creating MergerFS folders.
15.05.2023 10:30:01 INFO: *** Starting mount of remote google
15.05.2023 10:30:01 INFO: Checking if this script is already running.
15.05.2023 10:30:01 INFO: Script not running - proceeding.
15.05.2023 10:30:01 INFO: *** Checking if online
15.05.2023 10:30:02 PASSED: *** Internet online
15.05.2023 10:30:02 INFO: Success google remote is already mounted.
15.05.2023 10:30:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:30:02 INFO: dockers already started.
15.05.2023 10:30:02 INFO: Script complete
Script Finished May 15, 2023 10:30.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:30:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:31:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdvY7OCd_9eS-M8zevNS2RwdMIUrvChpIfFvJbZwXkA3WTLZOSbQnxi03cunE_-VdMLlRHt4ElXs-7BokEs1s_V1yQ": context canceled
2023/05/15 10:31:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:31:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.452Gi (was 596.452Gi)
2023/05/15 10:32:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.526Gi (was 599.526Gi)
2023/05/15 10:32:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:33:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:34:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:35:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsZfHAOT0kaMBaqBO-V9hllnnC1cj2FoFWxpu4k1ugT4MmBnWt5d-4ozDwEbcjp9STh-TGSnC9nmFamo3hhW1ueNw": context canceled
2023/05/15 10:35:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:35:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.583Gi (was 596.583Gi)
2023/05/15 10:36:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.645Gi (was 599.645Gi)
2023/05/15 10:36:41 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:37:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:38:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:39:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:40.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:40:01 INFO: Not creating local folders as requested.
15.05.2023 10:40:01 INFO: Creating MergerFS folders.
15.05.2023 10:40:01 INFO: *** Starting mount of remote google
15.05.2023 10:40:01 INFO: Checking if this script is already running.
15.05.2023 10:40:01 INFO: Script not running - proceeding.
15.05.2023 10:40:01 INFO: *** Checking if online
15.05.2023 10:40:02 PASSED: *** Internet online
15.05.2023 10:40:02 INFO: Success google remote is already mounted.
15.05.2023 10:40:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:40:02 INFO: dockers already started.
15.05.2023 10:40:02 INFO: Script complete
Script Finished May 15, 2023 10:40.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:40:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:41:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdt5s0GvZo7nDCoSPIB_BwQwa-PU1FSe0i8UWPJDOQ_cwFYx6WL33iTkh85OnXiegp5yn9OoRJLn8xAbe94O0fXcZQ": context canceled
2023/05/15 10:41:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:41:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.444Gi (was 596.444Gi)
2023/05/15 10:42:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.494Gi (was 599.494Gi)
2023/05/15 10:42:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:43:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:44:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:45:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycds-MUxpNB4t2OVXgjxdH8u9gUF4gTbJb8x_MmVSimgBiAxIl-txOpkWeOKxkJ2NvpBqHTvvYDLC1KwidTegrCt7lA": context canceled
2023/05/15 10:45:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:45:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.457Gi (was 596.457Gi)
2023/05/15 10:46:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.576Gi (was 599.576Gi)
2023/05/15 10:46:42 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:47:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:48:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:49:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:50.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:50:01 INFO: Not creating local folders as requested.
15.05.2023 10:50:01 INFO: Creating MergerFS folders.
15.05.2023 10:50:01 INFO: *** Starting mount of remote google
15.05.2023 10:50:01 INFO: Checking if this script is already running.
15.05.2023 10:50:01 INFO: Script not running - proceeding.
15.05.2023 10:50:01 INFO: *** Checking if online
15.05.2023 10:50:02 PASSED: *** Internet online
15.05.2023 10:50:02 INFO: Success google remote is already mounted.
15.05.2023 10:50:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:50:02 INFO: dockers already started.
15.05.2023 10:50:02 INFO: Script complete
Script Finished May 15, 2023 10:50.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:50:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:51:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsy0omBKNyQQLp0swJqar7qlA531fiz4eHWL-ZtvsmkRTulOE9QsZkw_8RNZ4kHM8ZFoO220c3HDF06SM3K4nMcyg": context canceled
2023/05/15 10:51:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:51:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.540Gi (was 596.540Gi)
2023/05/15 10:52:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.569Gi (was 599.569Gi)
2023/05/15 10:52:42 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:53:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:54:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:55:01 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsBnUc_AykzI7geN4fr0mzK34xZZkcuOCCDyX2SUFOl4GqYX80eS2xYcpVlqXqyqu3gnyYFJxYLNQbuW5_v1Bly6gJN1CG7": context canceled
2023/05/15 10:55:01 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:55:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.478Gi (was 596.478Gi)
2023/05/15 10:56:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.554Gi (was 599.554Gi)
2023/05/15 10:56:43 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:57:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:58:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:59:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 11:00.01

 

Link to comment
1 hour ago, 00b5 said:

You mean the main mount script, or the one that copies files into the merger folder? 

 

Main Mount Script

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="google" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
#LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
LocalFilesShare="ignore" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="600G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
# DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # pick a spare IP on the same subnet as your unraid server e.g. if unraid is 192.168.1.2, use 192.168.1.252
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 000 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 15 seconds"
# slight pause to give mount time to finalise
	sleep 15
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit
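The mount script above guards against overlapping runs with a `mount_running` touch-file, which can go stale if the script is killed before the cleanup line runs. A minimal alternative sketch using `flock` (assuming util-linux `flock` is available; the lock path is illustrative, not part of the original script):

```shell
#!/bin/bash
# Illustrative lock path - not the script's real mount_running file
LOCKFILE=/tmp/rclone_mount.lock

# Open the lock file on fd 9 and try an exclusive, non-blocking lock.
# The kernel releases the lock automatically when the script exits,
# so a crashed run cannot leave a stale lock behind.
exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
fi

echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
# ... rest of the mount logic would go here; no rm needed on exit ...
```

Unlike the touch-file check, this never needs a manual `rm` after a crash or unclean shutdown.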

 

 

Copy script (copies files from a folder that syncs with another server via Resilio Sync); it runs roughly every 5 minutes:

 

#!/bin/bash
# btsync capture script (shebang must be the first line of the script)
# exec 3>&1 4>&2
# trap 'exec 2>&4 1>&3' 0 1 2 3
#  Everything below will go to the file 'rsync-date.log':

date_format="+%d.%m.%Y %T" # date format used by the log messages below

LOCKFILE=/tmp/lock.txt
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
    echo "[ $(date ${date_format}) ] Rsync already running @ ${LOCKFILE}"
    exit
fi

# make sure the lockfile is removed when we exit and then claim it
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > ${LOCKFILE}

if [[ -f "/mnt/user/mount_rclone/google/mountcheck" ]]; then
	echo "[ $(date ${date_format}) ] INFO: rclone remote is mounted, starting copy"

echo "[ $(date ${date_format}) ] ###################################################"
echo "[ $(date ${date_format}) ] ################# Copy TV Shows ################"
echo "[ $(date ${date_format}) ] Copying TV shows from Resilio Sync:"
cp -rv /mnt/user/data/TV/* /mnt/user/mount_mergerfs/google/Media/TV/

echo "[ $(date ${date_format}) ] ################# Copy Movies ################"
echo "[ $(date ${date_format}) ] Copying Movies from Resilio Sync:"
cp -rv /mnt/user/data/Movies/* /mnt/user/mount_mergerfs/google/Media/Movies/

echo "[ $(date ${date_format}) ] ###################################################"

else
	echo "[ $(date ${date_format}) ] INFO: Mount not running. Will now abort copy"
fi

sleep 30
rm -f ${LOCKFILE}

 

 

Here are 10 minutes of the log where it tries to upload this file:

 

2023/05/15 10:17:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:18:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:19:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:20.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:20:01 INFO: Not creating local folders as requested.
15.05.2023 10:20:01 INFO: Creating MergerFS folders.
15.05.2023 10:20:01 INFO: *** Starting mount of remote google
15.05.2023 10:20:01 INFO: Checking if this script is already running.
15.05.2023 10:20:01 INFO: Script not running - proceeding.
15.05.2023 10:20:01 INFO: *** Checking if online
15.05.2023 10:20:02 PASSED: *** Internet online
15.05.2023 10:20:02 INFO: Success google remote is already mounted.
15.05.2023 10:20:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:20:02 INFO: dockers already started.
15.05.2023 10:20:02 INFO: Script complete
Script Finished May 15, 2023 10:20.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:20:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:21:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycds6tAjl-sjvsxWVtbO2hr6eHhfX58FibGCOIPFijx8n5_LhEaKRKVeLAmdM7rdxiIM6AnlhInp9n8Bl1IGxgz4oBg": context canceled
2023/05/15 10:21:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:21:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.419Gi (was 596.419Gi)
2023/05/15 10:22:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.515Gi (was 599.515Gi)
2023/05/15 10:22:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:22:47 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:22:47 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:23:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:24:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:25:01 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdtO-3iUahxqOPfrLtJvUGzKbVbC_jIet8MR1hSM4t-JDEvJGPEXYgjVyO3alao3Jira9AI0ZWLeDbVKmtRXvy9FdQ": context canceled
2023/05/15 10:25:01 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:25:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.522Gi (was 596.521Gi)
2023/05/15 10:26:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.625Gi (was 599.625Gi)
2023/05/15 10:26:41 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:27:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:28:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:29:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:30.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:30:01 INFO: Not creating local folders as requested.
15.05.2023 10:30:01 INFO: Creating MergerFS folders.
15.05.2023 10:30:01 INFO: *** Starting mount of remote google
15.05.2023 10:30:01 INFO: Checking if this script is already running.
15.05.2023 10:30:01 INFO: Script not running - proceeding.
15.05.2023 10:30:01 INFO: *** Checking if online
15.05.2023 10:30:02 PASSED: *** Internet online
15.05.2023 10:30:02 INFO: Success google remote is already mounted.
15.05.2023 10:30:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:30:02 INFO: dockers already started.
15.05.2023 10:30:02 INFO: Script complete
Script Finished May 15, 2023 10:30.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:30:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:31:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdvY7OCd_9eS-M8zevNS2RwdMIUrvChpIfFvJbZwXkA3WTLZOSbQnxi03cunE_-VdMLlRHt4ElXs-7BokEs1s_V1yQ": context canceled
2023/05/15 10:31:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:31:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.452Gi (was 596.452Gi)
2023/05/15 10:32:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.526Gi (was 599.526Gi)
2023/05/15 10:32:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:33:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:34:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:35:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsZfHAOT0kaMBaqBO-V9hllnnC1cj2FoFWxpu4k1ugT4MmBnWt5d-4ozDwEbcjp9STh-TGSnC9nmFamo3hhW1ueNw": context canceled
2023/05/15 10:35:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:35:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.583Gi (was 596.583Gi)
2023/05/15 10:36:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.645Gi (was 599.645Gi)
2023/05/15 10:36:41 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:37:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:38:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:39:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:40.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:40:01 INFO: Not creating local folders as requested.
15.05.2023 10:40:01 INFO: Creating MergerFS folders.
15.05.2023 10:40:01 INFO: *** Starting mount of remote google
15.05.2023 10:40:01 INFO: Checking if this script is already running.
15.05.2023 10:40:01 INFO: Script not running - proceeding.
15.05.2023 10:40:01 INFO: *** Checking if online
15.05.2023 10:40:02 PASSED: *** Internet online
15.05.2023 10:40:02 INFO: Success google remote is already mounted.
15.05.2023 10:40:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:40:02 INFO: dockers already started.
15.05.2023 10:40:02 INFO: Script complete
Script Finished May 15, 2023 10:40.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:40:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:41:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdt5s0GvZo7nDCoSPIB_BwQwa-PU1FSe0i8UWPJDOQ_cwFYx6WL33iTkh85OnXiegp5yn9OoRJLn8xAbe94O0fXcZQ": context canceled
2023/05/15 10:41:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:41:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.444Gi (was 596.444Gi)
2023/05/15 10:42:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.494Gi (was 599.494Gi)
2023/05/15 10:42:44 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:43:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:44:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:45:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycds-MUxpNB4t2OVXgjxdH8u9gUF4gTbJb8x_MmVSimgBiAxIl-txOpkWeOKxkJ2NvpBqHTvvYDLC1KwidTegrCt7lA": context canceled
2023/05/15 10:45:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:45:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.457Gi (was 596.457Gi)
2023/05/15 10:46:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.576Gi (was 599.576Gi)
2023/05/15 10:46:42 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:47:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:48:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:49:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 10:50.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

15.05.2023 10:50:01 INFO: Not creating local folders as requested.
15.05.2023 10:50:01 INFO: Creating MergerFS folders.
15.05.2023 10:50:01 INFO: *** Starting mount of remote google
15.05.2023 10:50:01 INFO: Checking if this script is already running.
15.05.2023 10:50:01 INFO: Script not running - proceeding.
15.05.2023 10:50:01 INFO: *** Checking if online
15.05.2023 10:50:02 PASSED: *** Internet online
15.05.2023 10:50:02 INFO: Success google remote is already mounted.
15.05.2023 10:50:02 INFO: Check successful, google mergerfs mount in place.
15.05.2023 10:50:02 INFO: dockers already started.
15.05.2023 10:50:02 INFO: Script complete
Script Finished May 15, 2023 10:50.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/unraid_rclone_mount/log.txt

2023/05/15 10:50:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:51:02 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsy0omBKNyQQLp0swJqar7qlA531fiz4eHWL-ZtvsmkRTulOE9QsZkw_8RNZ4kHM8ZFoO220c3HDF06SM3K4nMcyg": context canceled
2023/05/15 10:51:02 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:51:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.540Gi (was 596.540Gi)
2023/05/15 10:52:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.569Gi (was 599.569Gi)
2023/05/15 10:52:42 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:53:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:54:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:55:01 ERROR : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdsBnUc_AykzI7geN4fr0mzK34xZZkcuOCCDyX2SUFOl4GqYX80eS2xYcpVlqXqyqu3gnyYFJxYLNQbuW5_v1Bly6gJN1CG7": context canceled
2023/05/15 10:55:01 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: upload canceled
2023/05/15 10:55:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 596.478Gi (was 596.478Gi)
2023/05/15 10:56:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 0, total size 599.554Gi (was 599.554Gi)
2023/05/15 10:56:43 INFO : Media/Movies/The.Covenant.2023.1080p.AMZN.WEB-DL.DDP5.1.Atmos.H.264-FLUX.mkv: vfs cache: queuing for upload in 5s
2023/05/15 10:57:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:58:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
2023/05/15 10:59:40 INFO : vfs cache: cleaned: objects 831 (was 831) in use 1, to upload 0, uploading 1, total size 599.694Gi (was 599.694Gi)
Script Starting May 15, 2023 11:00.01

 

A "context canceled" error is usually a timeout being reported. So maybe the files cannot be accessed yet, or cannot be deleted?

 

I don't understand why you are using a simple copy instead of rclone to move the files. With rclone you are certain that files arrive at their destination, and it has better error handling in case of problems.
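For illustration, here is a rough sketch of an rclone move with more generous timeout/retry settings, which can help with transient "context canceled" aborts. The paths and remote name are placeholders, not taken from anyone's setup in this thread:

```shell
#!/bin/bash
# Hedged sketch, not this thread's upload script. Paths and remote name
# are placeholders for illustration only.
SRC="/mnt/user/local/gdrive"   # local files awaiting upload
DST="gdrive:"                  # rclone remote

run() { echo "would run: $*"; }   # swap the echo out to execute for real

# --min-age skips files still being written/synced; --timeout and the
# retry flags give slow uploads more room before rclone cancels them.
run rclone move "$SRC" "$DST" \
    --min-age 15m \
    --timeout 10m \
    --retries 5 \
    --low-level-retries 20 \
    -v
```

The `run` wrapper only prints the command; remove it once rclone and the remote are actually configured on the machine doing the transfer.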

Link to comment
On 5/15/2023 at 12:45 PM, Kaizac said:

A "context canceled" error is usually a timeout being reported. So maybe the files cannot be accessed yet, or cannot be deleted?

 

I don't understand why you are using a simple copy instead of rclone to move the files. With rclone you are certain that files arrive at their destination, and it has better error handling in case of problems.

 

Did you mean rsync instead of copy? I've been using this "copy script" for multiple things for years.

 

The workflow is like this:

  • *darr apps run on my home server, and request files
  • requests are put into a folder, which syncs with a seedbox
  • Seedbox downloads files to a specific folder, which then syncs back to home server
  • *darr apps process/move/etc files and everything is good
  • the copy script runs on a 3rd server running plex and rclone, which hosts a 2nd plex server for sharing (I don't share my home plex server); it just grabs files (every xx mins) and copies them to the mergerFS folder so they are also available to the cloud plex instance. 

I don't run the *darr apps on the seedbox, it really only seeds, and moves files around with ResilioSync. I used to rent a server to host plex in the cloud tied to gdrive (for when I am remote, and for sharing) since my home upload bandwidth is subpar. Now I have been able to co-locate a server on a nice fiber connection, so I'm trying to move toward using it. The main difference is moving from an online rented server with linux to an owned server running unraid, and this rclone plugin to keep plex using the gdrive source files (at least until it gets killed off). 

 

I was letting the copy script run every 2 mins to make sure it would grab any files in that sync folder before the other end cleaned up and processed them. I'll try slowing it down, or only letting it run every 10 mins or something and see if I can avoid these weird errors. 
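One way to make the interval less critical is to make overlapping runs impossible, similar in spirit to the mount script's mount_running check file. A rough sketch using a lock directory (the lock path and the job body are placeholders):

```shell
#!/bin/bash
# Hedged sketch: guard a frequently-scheduled copy job with a lock
# directory so two runs never overlap. The lock path is a placeholder.
LOCKDIR="/tmp/copy_job.lock"

if mkdir "$LOCKDIR" 2>/dev/null; then
    trap 'rmdir "$LOCKDIR"' EXIT   # release the lock however the job exits
    echo "lock acquired - running copy job"
    # ... the actual copy/rclone commands would go here ...
else
    echo "previous run still active - skipping"
    exit 0
fi
```

`mkdir` is atomic, so even two runs starting at the same second can't both acquire the lock; the losing run just exits and tries again next cycle.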

Link to comment

Been working on getting this running on my server, before I was just mounting to a /mnt/disks/google.

 

But I've been having some issues. Once I run the mount script, it sets everything up and everything works in terms of downloading to the mount_mergerfs share, and the upload script uploading to the google drive. The issue seems to be somewhere in the mounting of the google drive.

 

Once a file is uploaded to google drive (I can see it's uploaded on the drive itself), I can't access the file anymore. And in the mount_rclone, I can't access the gdrive folder at all. If I try to mount regularly like before, I can access everything.

 

I've followed all the instructions as posted and haven't changed much (I want to eventually change the share names but I'd like to get it working first). Could it be because I'm not using a crypt? 

Link to comment
19 hours ago, 00b5 said:

 

Did you mean rsync instead of copy? I've been using this "copy script" for multiple things for years.

 

The workflow is like this:

  • *darr apps run on my home server, and request files
  • requests are put into a folder, which syncs with a seedbox
  • Seedbox downloads files to a specific folder, which then syncs back to home server
  • *darr apps process/move/etc files and everything is good
  • the copy script runs on a 3rd server running plex and rclone, which hosts a 2nd plex server for sharing (I don't share my home plex server); it just grabs files (every xx mins) and copies them to the mergerFS folder so they are also available to the cloud plex instance. 

I don't run the *darr apps on the seedbox, it really only seeds, and moves files around with ResilioSync. I used to rent a server to host plex in the cloud tied to gdrive (for when I am remote, and for sharing) since my home upload bandwidth is subpar. Now I have been able to co-locate a server on a nice fiber connection, so I'm trying to move toward using it. The main difference is moving from an online rented server with linux to an owned server running unraid, and this rclone plugin to keep plex using the gdrive source files (at least until it gets killed off). 

 

I was letting the copy script run every 2 mins to make sure it would grab any files in that sync folder before the other end cleaned up and processed them. I'll try slowing it down, or only letting it run every 10 mins or something and see if I can avoid these weird errors. 

I'm not talking about rsync, but about rclone. Rclone can sync, copy, and move. The upload script from this topic uses rclone move, which transfers the file and then deletes it at the source once the transfer has been validated. With copy you keep the source file, which may be what you want. Rclone sync keeps two folders in sync, one-way from source to destination.

 

So am I understanding it right that you are using the copy script to copy directly into your rclone Google Drive mount? Or are you using an upload script for that as well? Copying directly into the rclone mount (that would be mount_rclone/) is problematic. Copying into mount_mergerfs/ and then using the rclone upload script is fine.

 

In general, I would really advise against using cp, since it doesn't validate the transfer and is basically copy-paste. With rclone you can also set options such as a minimum file age to transfer. Rsync is also a possibility if you are familiar with it, but for me rclone is easier because I know how to run it.
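As a minimal sketch of what that replacement could look like, with placeholder paths (not this poster's actual shares):

```shell
#!/bin/bash
# Hedged sketch: an rclone copy in place of a plain cp loop. rclone checks
# sizes/checksums after each transfer. All paths here are placeholders.
SRC="/mnt/user/sync_inbox"
DST="/mnt/user/mount_mergerfs/gdrive/Media"

run() { echo "would run: $*"; }   # swap the echo out to execute for real

# --min-age avoids racing the other end's cleanup of freshly-synced files;
# --ignore-existing avoids re-copying files a previous run already delivered.
run rclone copy "$SRC" "$DST" --min-age 10m --ignore-existing -v
```

Note this copies into the mergerfs path, not mount_rclone/, so the normal upload script still handles the actual transfer to the remote.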

 

 

Link to comment
2 hours ago, TacosWillEatUs said:

Been working on getting this running on my server, before I was just mounting to a /mnt/disks/google.

 

But I've been having some issues. Once I run the mount script, it sets everything up and everything works in terms of downloading to the mount_mergerfs share, and the upload script uploading to the google drive. The issue seems to be somewhere in the mounting of the google drive.

 

Once a file is uploaded to google drive (I can see it's uploaded on the drive itself), I can't access the file anymore. And in the mount_rclone, I can't access the gdrive folder at all. If I try to mount regularly like before, I can access everything.

 

I've followed all the instructions as posted and haven't changed much (I want to eventually change the share names but I'd like to get it working first). Could it be because I'm not using a crypt? 

Crypt shouldn't matter. Please post your mount and upload scripts. What do you mean by "can't access the file anymore"?

Link to comment
18 hours ago, Kaizac said:

Crypt shouldn't matter. Post your mount and upload scripts please. What do you mean with "can't access the file anymore"?

I've posted the two scripts below. The only changes I made were to match my different share names, plus a small change to the folders created within the mounts. When I run the mount script, it mounts everything without any errors (as far as I can tell, at least), but if I go into google_remote, there are no folders within the gdrive folder. From my understanding, it should show the folders that are on the drive, no? The google_merged share has the folders that I wanted it to create (4k, 1080p and tv), but nothing else. 

 

I can add a file to /google_merged/gdrive/1080p and it shows up within google_local/gdrive/1080p. But once I run the upload script, the file uploads to 1080p on the drive, yet the directories get deleted in google_local, which removes them from google_merged as well. I'm assuming google_remote should still have the directories, which would then show up in google_merged; but because google_remote is empty, google_merged ends up empty after the upload.
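To make the expected behaviour concrete, here is a small simulation with plain directories standing in for the branches (mergerfs isn't needed to see the logic; all paths are throwaway /tmp placeholders):

```shell
#!/bin/bash
# Hedged sketch of what the shares should contain around an upload, using
# ordinary directories as stand-ins for the real branches.
BASE="/tmp/mergerfs_demo"
rm -rf "$BASE"
mkdir -p "$BASE/google_local/gdrive/1080p" "$BASE/google_remote/gdrive/1080p"

# Before upload: the new file lives in the local branch.
touch "$BASE/google_local/gdrive/1080p/movie.mkv"

# The upload script's 'rclone move' transfers it to the remote branch and
# deletes the local copy - simulated here with mv:
mv "$BASE/google_local/gdrive/1080p/movie.mkv" "$BASE/google_remote/gdrive/1080p/"

# mergerfs unions both branches, so the merged view should still show the
# file via the remote branch after upload.
ls "$BASE/google_remote/gdrive/1080p"   # -> movie.mkv
```

If google_remote/gdrive really lists empty after an upload, the rclone mount itself isn't exposing the drive contents, and that mount is what needs debugging, not mergerfs.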

 

By losing access to the file, I mean that none of the files on the google drive seem to be accessible. Not sure if I made a mistake somewhere or what. Appreciate any help, thanks.

 

Mount script:

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/google_remote" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/google_local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/google_remote" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/google_merged" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="nzbget plexx sonarrx radarrx ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"downloads/complete,downloads/torrents,4k,1080p,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Location for mergerfs mount

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 000 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
	sleep 5
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

Upload script:
 

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/google_local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/google_remote" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in KBytes/s, or use a suffix b|k|M|G, or 'off'/'0' for unlimited.  The script uses --drive-stop-on-upload-limit, which stops the script if the 750GB/day limit is reached, so you no longer have to 'trickle' your files all day if you don't want to - e.g. you could just run an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for the backed-up files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
###############################################################################

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
	LocalFilesLocation="$LocalFilesShare"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone ${RcloneCommand} selected.  Files will be uploaded from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  Check rclone mount is in place  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking that the rclone mount is in place."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: rclone mount in place - proceeding with upload."
else
	echo "$(date "+%d.%m.%Y %T") INFO: rclone mount not in place - will try again later."
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
	exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
	CounterNumber=$(find -name 'counter*' | cut -c 11,12)
	CounterCheck="1"
	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		CounterNumber="1"
	fi
	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
	ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	else
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
	fi
else
	RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
	DeleteEmpty="--delete-empty-src-dirs "
else
	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will not add --delete-empty-src-dirs to upload."
	DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use ${BackupRemoteDeletedLocation} as --backup-dir with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
	LocalFilesLocation="$LocalFilesShare"
	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
	BackupRemoteLocation=""
	BackupRemoteDeletedLocation=""
	BackupRetention=""
	BackupDir=""
fi

# process files
	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
	--user-agent="$RcloneUploadRemoteName" \
	-vv \
	--buffer-size 512M \
	--drive-chunk-size 512M \
	--tpslimit 8 \
	--checkers 8 \
	--transfers 4 \
	--order-by modtime,$ModSort \
	--min-age $MinimumAge \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--exclude *fuse_hidden* \
	--exclude *_HIDDEN \
	--exclude .recycle** \
	--exclude .Recycle.Bin/** \
	--exclude *.backup~* \
	--exclude *.partial~* \
	--drive-stop-on-upload-limit \
	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
	--bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
	else
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		CounterNumber=$((CounterNumber+1))
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
	fi
else
	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit
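A side note on the BWLimit settings: the three time windows are stitched into a single rclone --bwlimit timetable. A minimal sketch showing how the example values from the settings above combine:

```shell
# Build the --bwlimit schedule string exactly as the upload command does.
BWLimit1Time="01:00"; BWLimit1="off"
BWLimit2Time="08:00"; BWLimit2="15M"
BWLimit3Time="16:00"; BWLimit3="12M"
schedule="${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}"
echo "$schedule"
```

rclone reads this as a daily repeating timetable: unlimited from 01:00, 15 MBytes/s from 08:00, and 12 MBytes/s from 16:00.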

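The service-account rotation near the end of the script just walks a counter_N marker file from 1 up to CountServiceAccounts, then wraps back to 1. A condensed sketch of that loop, using a throwaway /tmp directory instead of the real appdata path and 3 accounts instead of 15:

```shell
# Simulate four consecutive upload runs of the counter rotation.
dir=$(mktemp -d)
CountServiceAccounts=3
touch "$dir/counter_1"
for run in 1 2 3 4; do
	CounterNumber=$(basename "$dir"/counter_* | cut -d_ -f2)   # which SA this run uses
	rm "$dir"/counter_*
	if [[ "$CounterNumber" == "$CountServiceAccounts" ]]; then
		CounterNumber=1                # final counter used - wrap to the first account
	else
		CounterNumber=$((CounterNumber+1))
	fi
	touch "$dir/counter_$CounterNumber"
done
ls "$dir"   # 1 -> 2 -> 3 -> 1 -> 2 across the four runs
```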
 

Link to comment
On 5/18/2023 at 11:22 PM, TacosWillEatUs said:

but if I go into google_remote, there are no folders within the gdrive folder. From my understanding, it should have the folders that are on the drive, no?

 

When you run the mount script and go into your google_merged folder, do you see any files in there? If not, there is something wrong with your mount setup.

The naming is always a bit wonky, but here is an easy explanation:
google_remote: this is your local view (the rclone mount) of what exists on your remote (in this case, whatever is inside your Google Drive)

google_local: this is the local landing spot for all the files that are added into the mergerfs mount (files stay here until the upload script is run)

user0/google_remote: this is rclone's VFS caching system, which pulls items from the remote (Google) and caches them locally on your machine. (In this case I would refrain from using user0, since user0 is specifically for items that exist on your array ONLY, plus it makes it hard to differentiate from the other google_remote mount you have. I would use something like /mnt/user/google_remote_cache.)

google_merged: this is the amalgamation of the google_remote and google_local mounts. (When you run the upload script, it processes files in your google_local folder and pushes them into the cloud; after that, the files are deleted from the local folder but remain accessible inside your merged folder.)
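The union idea above can be pictured with a quick throwaway simulation: two plain /tmp directories stand in for the branches (the names are illustrative only, and mergerfs itself isn't needed to see the idea).

```shell
# Two plain directories standing in for the mergerfs branches.
mkdir -p /tmp/merge_demo/google_local /tmp/merge_demo/google_remote
touch /tmp/merge_demo/google_local/new_movie.mkv    # added locally, not yet uploaded
touch /tmp/merge_demo/google_remote/old_movie.mkv   # already sitting in the cloud
# google_merged would present the union of both branches:
merged=$(ls /tmp/merge_demo/google_local /tmp/merge_demo/google_remote | grep mkv | sort -u)
echo "$merged"
```

New writes into the merged folder land on the local branch, which is why a file you drop into google_merged shows up in google_local until the upload script moves it to the cloud.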

 

Every time I mount, I run through a few checks to make sure everything is running correctly:

1. Run the mount script. (Sometimes it takes a minute or two, since it downloads mergerfs on the first run. After that, check the logs and make sure everything mounted correctly.)

2. Check the merged folder to make sure all your files are in there.

3. In your case, I would also check google_local for the mount folders. They should be there, BUT all of them should be empty.

4. Check your google_remote folder to make sure all your remote files are there, and also check your google_remote_cache folder. You should see a vfs folder for the rclone VFS cache, along with maybe a metadata (vfsMeta) folder.

 

Test playing a file you know exists only on the remote, and you should see the same file populate in the VFS cache.

 

Then test moving some files into the merged folder. You should also see that file show up in the google_local folder.

 

Then, lastly, run the upload script, and you should see the file disappear from the google_local folder but stay in your merged folder.

 

Link to comment
On 5/17/2023 at 11:21 PM, bubbadk said:

just a silly question :)

 

Why is it that every time I have rebooted, I have to run the mount script twice? It's only on the second run that it figures out that fuse is not installed.

 

And why doesn't it stay installed :)

After you run it the first time, are you going into your logs and checking what it says?

I would do that first. I have no problems just running it once, even from a cold start of my entire Unraid machine. It does take a minute or two, since it has to download the mergerfs repo and build it manually.

Link to comment
