Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


I have started uploading, and because I have an Enterprise Standard plan, I assumed that I had unlimited storage.

Google Drive shows that I have a maximum of 5TB, and I just got an email telling me that if I exceed it, my files will become read-only. Looking at ways to increase the size, I see that I have to contact support and request more storage.

 

Does anyone else have to do this? Or is the 5TB limit not enforced?


2 minutes ago, workermaster said:

Does anyone else have to do this? Or is the 5TB limit not enforced?

It's something Google recently started doing for new accounts. I told you about this possibility earlier. You can ask for more, but they will probably demand you buy 3 accounts first, and even then you'll still have to explain yourself and request increases of only 5TB at a time.

 

The alternative is Dropbox unlimited, which also requires 3 accounts, but it really is unlimited, with no daily upload limits. It all depends on how much storage you need and your wallet.

25 minutes ago, Kaizac said:

It's something Google recently started doing for new accounts. You can ask for more but they will probably demand you buy 3 accounts first.

I just contacted support. They told me that I could add another account to double the storage, or contact sales to see what they can offer me. They did not have an answer as to why the enterprise account says unlimited storage but is capped at 5TB.

 

I am now looking at Dropbox. It is quite a bit more expensive, but taking power consumption, drive costs and capacity into account, it might be worth it. I am going to try the Dropbox trial to see if streaming works well.

3 minutes ago, workermaster said:

I just contacted support. They told me that I could add another account to double the storage, or contact sales to see what they can offer me.

Dropbox should work fine; others have switched over to it. Just be aware that the trial is only 5TB. And when you do decide to use their service, you should push for a good amount of storage up front, otherwise you will keep having to ask for more while migrating.


@Kaizac @slimshizn and others, I need some help with testing. 

 

I think I've got rclone union working, i.e. I can remove mergerfs so there are fewer moving parts. Plus, I think that rclone union is faster for our scenario than mergerfs, but let me know what you think.

 

The problem with including /mnt/user/local in the union was that rclone can't poll changes written directly to /mnt/user/local fast enough... so just don't write to it, and write only to /mnt/user/mount_mergerfs/tdrive_vfs, i.e. like we have already been doing.


Here are my settings if anyone wants to try them out - basically disable mergerfs by setting MergerfsMountShare="ignore", and then paste in my quick rclone union section. I had to make some quick changes to the rclone mount section that I'll tidy up when I have time.

 

Here's my rclone config:

[local_tdrive_union]
type = smb
host = localhost
user = rclone
pass = xxxxxxxxxxxxxxxxxxx

[tdrive_union]
type = union
upstreams = local_tdrive_union:local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: tdrive6_vfs:
action_policy = all
create_policy = ff
search_policy = ff
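If anyone wants to sanity-check their config before mounting, a quick pre-flight test is to confirm the union remote shows up in rclone's remote list. Rough sketch only - check_remote is a hypothetical helper, and the remote names are just the ones from my config above:

```shell
#!/bin/bash
# Sketch: verify a remote name appears in `rclone listremotes` output before
# mounting. check_remote is a hypothetical helper, not part of the mount script.
check_remote() {
    local name="$1" remotes="$2"
    if grep -qx "${name}:" <<< "$remotes"; then
        echo "found"
    else
        echo "missing"
    fi
}

# Typical usage (requires a working rclone config):
#   check_remote tdrive_union "$(rclone listremotes)"
```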


For some strange reason I found the settings above were faster than the settings below, with writes to /mnt/user/local appearing instantly, whereas there was a pause with the settings below. I think it works better when writes are handled fully by rclone:

 

[tdrive_union]
type = union
upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: tdrive6_vfs:
action_policy = all
create_policy = ff
search_policy = ff


And my adjusted script:
 

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
# ADDED MOUNT RUNNING REMOVAL HERE
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 000 \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=/mnt/user/mount_rclone/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size 200G \
	--vfs-cache-max-age 24h \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
# slight pause to give mount time to finalise
	sleep 10
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
		docker stop $DockerStart
		fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
# ADDED MOUNT RUNNING REMOVAL HERE AND I THINK FAST CHECK REMOVAL
		find /mnt/user/appdata/other/rclone/remotes -name "mount_running*" -delete
		rm /mnt/user/appdata/other/scripts/running/fast_check
		exit
	fi
fi

# create union mount

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of tdrive_union"
# Check If Rclone Mount Already Created
if [[ -f "/mnt/user/mount_mergerfs/tdrive_vfs/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success tdrive_union is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Starting mount of tdrive_union."
	mkdir -p /mnt/user/mount_mergerfs/tdrive_vfs
	rclone mount \
	--allow-other \
	--umask 000 \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=/mnt/user/mount_rclone/cache/tdrive_union \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size 100G \
	--vfs-cache-max-age 24h \
	--vfs-read-ahead 1G \
	tdrive_union: /mnt/user/mount_mergerfs/tdrive_vfs &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
# slight pause to give mount time to finalise
	sleep 10
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "/mnt/user/mount_mergerfs/tdrive_vfs/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of tdrive_union mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: tdrive_union mount failed - please check for problems."
		docker stop $DockerStart
		fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
		rm /mnt/user/appdata/other/scripts/running/fast_check
		exit
	fi
fi

# MERGERFS SECTION AFTER SHOULD BE OK WITHOUT ANY CHANGES IF 'IGNORE' ADDED EARLIER

 

3 hours ago, DZMM said:

I think I've got rclone union working, i.e. I can remove mergerfs so there are fewer moving parts.

Experiment over - it got really slow when lots of scans / file access was going on

6 minutes ago, DZMM said:

Experiment over - it got really slow when lots of scans / file access was going on

Thanks for testing it already - I didn't have time to do it sooner. Disappointing results; it would have been nice to be able to fully rely on rclone. I wonder why the implementation seems so bad?

13 hours ago, Kaizac said:

Thanks for testing it already, didn't have time to do it sooner. Disappointing results, would have been nice when we can fully rely on rclone. I wonder why the implementation seems so bad?

@Kaizac Ok, I've had another go at it this morning for a few reasons.

Firstly, my mount(s) (not sure which) keep disconnecting every couple of hours or so. The problem started when I added another 5 tdrives to balance out some tdrives that had over 200k files in them. I think it's probably an unraid issue with the number of open connections or memory - no idea how to fix it.
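Side note while I debug the disconnects: a small watchdog on a cron/User Scripts schedule could re-run the mount script whenever the mountcheck file disappears. Just a sketch - the path is the merged mount from my scripts, and the re-run line is a placeholder:

```shell
#!/bin/bash
# Sketch: cron-style watchdog. mount_ok just tests for the mountcheck file
# that the mount script drops into the merged mount.
mount_ok() {
    [[ -f "$1/mountcheck" ]]
}

if mount_ok /mnt/user/mount_mergerfs/tdrive_vfs; then
    echo "mount healthy"
else
    echo "mount down - re-run the mount script here"
fi
```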

 

The second reason is what made me decide to have another go: I realised that having >10 tdrives mounted just so they could be combined via mergerfs was using up a lot of resources, whereas with union I only need 1 mount, which must save a lot of resources.

 

Anyway, here's a tidied-up mount script I just pulled together - I'll add version numbers etc. when I upload it to github. You can see how much smaller it is - my old script that mounted >10 tdrives was over 3000 lines and is now under 200!

 

rclone config:

 

[tdrive_union]
type = union
upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive1_vfs: tdrive2_vfs: tdrive3_vfs: tdrive4_vfs: tdrive5_vfs: 
action_policy = all
create_policy = ff
search_policy = ff

 

New mount script:
 

#!/bin/bash

#######  Check if script already running  ##########
if [[ -f "/mnt/user/appdata/other/scripts/running/fast_check" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Fast check already running"
	exit
else
	mkdir -p /mnt/user/appdata/other/scripts/running
	touch /mnt/user/appdata/other/scripts/running/fast_check
fi

###############################
#####  Replace Folders  #######
###############################


mkdir -p /mnt/user/local/tdrive_vfs/{downloads/complete/youtube,downloads/complete/MakeMKV}
mkdir -p /mnt/user/local/backup_vfs/duplicati

###############################
#######  Ping Check  ##########
###############################

if [[ -f "/mnt/user/appdata/other/scripts/running/connectivity" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Already passed connectivity test"
else
# Ping Check
	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
	ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
		echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
# check if mounts need restoring
	else
		echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
		rm /mnt/user/appdata/other/scripts/running/fast_check
		exit
	fi
fi

################################################################
###################### mount tdrive_union   ########################
################################################################

# REQUIRED SETTINGS
RcloneRemoteName="tdrive_union"
RcloneMountLocation="/mnt/user/mount_mergerfs/tdrive_vfs"
RcloneCacheShare="/mnt/user/mount_rclone/cache"
RcloneCacheMaxSize="500G"
DockerStart="bazarr qbittorrentvpn readarr plex radarr_new radarr-uhd sonarr sonarr-uhd"

# OPTIONAL SETTINGS

# Add extra commands or filters
Command1=""
Command2="--log-file=/var/log/rclone"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""
CreateBindMount="N" 
RCloneMountIP="192.168.1.77" 
NetworkAdapter="eth0" 
VirtualIPNumber="7"

####### END SETTINGS #######

####### Preparing mount location variables #######

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName #for script files
mkdir -p $RcloneCacheShare/$RcloneRemoteName #for cache files
mkdir -p $RcloneMountLocation

#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	rm /mnt/user/appdata/other/scripts/running/fast_check
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 000 \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age 24h \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 10 seconds"
# slight pause to give mount time to finalise
	sleep 10
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
		docker stop $DockerStart
		fusermount -uz /mnt/user/mount_mergerfs/tdrive_vfs
		find /mnt/user/appdata/other/rclone/remotes -name "mount_running*" -delete
		rm /mnt/user/appdata/other/scripts/running/fast_check
		exit
	fi
fi

####### Starting Dockers That Need Mount To Work Properly #######

if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]] || [[ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]] ; then
	echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
	docker start $DockerStart

fi

echo "$(date "+%d.%m.%Y %T") INFO: ${RcloneRemoteName} Script complete"

rm /mnt/user/appdata/other/scripts/running/fast_check

exit
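One thing I might add to the script later: a trap on the fast_check lock, so a crash partway through doesn't leave a stale lock behind. Sketch only, with a demo path instead of the real appdata one:

```shell
#!/bin/bash
# Sketch: lock file with trap-based cleanup. LOCK is a demo path; the script
# above uses /mnt/user/appdata/other/scripts/running/fast_check.
LOCK="/tmp/fast_check_demo"

if [[ -f "$LOCK" ]]; then
    echo "already running"
    exit 0
fi
touch "$LOCK"
trap 'rm -f "$LOCK"' EXIT   # removes the lock on any exit, clean or not

echo "doing work"
# ... mount logic would go here ...
```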


 

 

3 minutes ago, DZMM said:

@Kaizac Ok, I've had another go at it this morning for a few reasons.
Playback is a bit slower so far, but I'm doing a lot of scanning - Plex, sonarr, radarr etc to add back all my files.  Will see what it's like when it's finished

1 minute ago, DZMM said:

Playback is a bit slower so far, but I'm doing a lot of scanning - Plex, sonarr, radarr etc to add back all my files.  Will see what it's like when it's finished

 

Interesting. I have about 6 mounts right now, and I did notice that rclone is eating a lot of resources, especially with uploads going on as well. So I'm curious what your performance will be when you're finished. Are you still using the VFS cache on the union as well?

 

Another thing I'm thinking about is just going with the Dropbox alternative. It is a bit more expensive, but we don't have the bullshit limitations of 400k files/folders per Tdrive. No upload or download limits, and just 1 mount to connect everything to. Its only limit is API hits per minute, which we have to account for.

And I can't shake the feeling of Google sunsetting unlimited any time soon for existing users as well.

Link to comment
1 hour ago, Kaizac said:

 

Interesting. I have like 6 mounts right now, but I did notice that rclone is eating a lot of resources indeed, especially with uploads going on as well. So I'm curious what your performance will be when you're finished. Are you still using the VFS cache on the union as well?

 

Another thing I'm thinking about is just going with the Dropbox alternative. It is a bit more expensive, but we don't have the bullshit limitations of 400k files/folders per Tdrive. No upload or download limits, and just 1 mount to connect everything to. Its only limit is API hits per minute, which we have to account for.

And I can't shake the feeling of Google sunsetting unlimited any time soon for existing users as well.

 

Until my recent issues with something dropping in one of my 10 rclone mounts + mergerfs mount, I wasn't tempted to move to Dropbox as my setup was fine. If union works, I'll have just one mount, although the tdrives are still there in the background.

How would you move all your files to Dropbox if you did move - rclone server side transfer?

Link to comment
1 hour ago, Kaizac said:

So I'm curious what your performance will be when you're finished.

Ok, ditching again - the performance is too slow. It's been running for an hour and it still won't play anything without buffering.  Maybe union doesn't use VFS - dunno.

I might have to go with dropbox as my problem is definitely from the number of tdrives I have - 10 for my media, 3 for my backup files, and a couple of others.  Unless I can find out if it's e.g. an unraid number of connections issue.

Link to comment
25 minutes ago, DZMM said:

 

Until my recent issues with something dropping in one of my 10 rclone mounts + mergerfs mount, I wasn't tempted to move to Dropbox as my setup was fine. If union works, I'll have just one mount, although the tdrives are still there in the background.

How would you move all your files to Dropbox if you did move - rclone server side transfer?

 

Don't think server-side copy works like that. It only works within the same remote, i.e. moving a file/folder within that one remote.

 

So it will probably just end up being a 24/7 sync job for a month. Or maybe get a 10GB VPS and run rclone from there for the one-time sync/migration. The problem is mostly the 10TB (if they haven't lowered it by now) download limit per day. I'm not sure about the move yet though. I also use the workspace for e-mail, my business and such, so it has its uses. But them not being clear about the plans and what is and isn't allowed is just annoying and a liability in the long run.
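Since server-side copy between Google and Dropbox isn't an option, a bandwidth-limited sync is one way to keep such a migration under a 10TB/day download cap. A rough sketch, assuming hypothetical remote names `gdrive_crypt:` and `dropbox_crypt:` (not from this thread) and illustrative flag values:

```shell
# Sanity check: a sustained 100 MByte/s for 24h stays under 10TB/day.
# MB/day divided by 1024 gives GB/day:
echo $(( 100 * 86400 / 1024 ))   # prints 8437, i.e. ~8.4TB/day

# One-time migration sketch (remote names are placeholders):
# rclone sync gdrive_crypt: dropbox_crypt: \
#     --bwlimit 100M --transfers 8 --checkers 16 --progress
```

Left running 24/7, that rate would move roughly 250TB in a month while staying under the daily cap.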

 

18 minutes ago, DZMM said:

Ok, ditching again - the performance is too slow. It's been running for an hour and it still won't play anything without buffering.  Maybe union doesn't use VFS - dunno.

I might have to go with dropbox as my problem is definitely from the number of tdrives I have - 10 for my media, 3 for my backup files, and a couple of others.  Unless I can find out if it's e.g. an unraid number of connections issue.

 

Didn't we start with unionfs? And I think we only started to use VFS after we got mergerfs available. So that could be the explanation of your performance issue during your test.

 

I wonder if there are other ways to combine multiple team drives to make it look like 1 remote and thus hopefully increasing performance for you. I'll have to think about that.

 

EDIT: Is  your RAM not filling up? Or CPU maybe? Does "top" in your unraid terminal show high usage by rclone?

Edited by Kaizac
Link to comment
8 hours ago, Kaizac said:

 

Don't think server-side copy works like that. It only works within the same remote, i.e. moving a file/folder within that one remote.

 

So it will probably just end up being a 24/7 sync job for a month. Or maybe get a 10GB VPS and run rclone from there for the one-time sync/migration. The problem is mostly the 10TB (if they haven't lowered it by now) download limit per day. I'm not sure about the move yet though. I also use the workspace for e-mail, my business and such, so it has its uses. But them not being clear about the plans and what is and isn't allowed is just annoying and a liability in the long run.

 

 

Didn't we start with unionfs? And I think we only started to use VFS after we got mergerfs available. So that could be the explanation of your performance issue during your test.

 

I wonder if there are other ways to combine multiple team drives to make it look like 1 remote and thus hopefully increasing performance for you. I'll have to think about that.

 

EDIT: Is  your RAM not filling up? Or CPU maybe? Does "top" in your unraid terminal show high usage by rclone?

My load and CPU rarely go over 50%.  I think it's something like fs.inotify.max_user_watches being too low.

I think I'm under 10TB per day, but it feels like an uncomfortably low ceiling that I'll break through at some point.

I'm just going to ditch some of the teamdrives. I've just checked and I've only got an average of 40K items in each. The max is 400K and I think the drives slow down at 150K, so my recent balancing was a bit too excessive. The slowdown at 150K is probably only marginal anyway, and I'm a long way from that.
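If the `fs.inotify.max_user_watches` suspicion above is right, the limit can be checked and raised at runtime. A sketch, assuming a Linux host; the value 524288 is a commonly used higher limit, not a number from this thread:

```shell
# Check the current inotify watch limit; a low value can cause problems
# when a union exposes a very large number of directories:
cat /proc/sys/fs/inotify/max_user_watches

# Raise it for the running system (needs root). On unraid, add the sysctl
# line to the "go" file to persist it across reboots:
# sysctl fs.inotify.max_user_watches=524288
```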

Link to comment

rclone has evolved so much since the initial YouTube video of @SpaceInvaderOne - looking at all the discussions regarding rclone, I wonder if it is not time for a new (updated) video on the topic 😁. I am happy to support (as a newbie) however I can: testing, writing the script, donating $ to whoever is willing to take on the challenge of creating a video on how to properly configure rclone in 2022 on unraid. Anyone? @Sycotix 😉

Link to comment

I'd like to ask for help configuring unionfs.


I already use this script to mount gdrives, but I don't use the upload script or unionfs.
Now I have Sonarr installed and it cannot use two different root folders.

 

My drives are currently configured like this

Tv series locally /mnt/user/media/Tv
Gdrive archive /mnt/user/mount_rclone/media_archive_gdrive/Tv

Now I have created a new share which should merge the previous two: /mnt/user/media_unionfs

 

In my case I happen to have
/mnt/user/media/Tv/series1/season1
/mnt/user/media/Tv/series1/season2

and
/mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/season3
/mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/season4

For Plex there's no problem, it manages to combine the folders by itself, but Sonarr can't.

 

I set up the script like this
RcloneRemoteName="media_archive_gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/media/Tv" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="150G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/media_unionfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="Plex-Media-Server" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{""\} # comma separated list of folders to create within the mount


But this way, the folder /mnt/user/media_unionfs only contains the files present in /mnt/user/mount_rclone/media_archive_gdrive/Tv, not combined with those present locally.

Where am I going wrong?

Edited by Sildenafil
Link to comment
1 hour ago, Sildenafil said:

I'd like to ask for help configuring unionfs.


I already use this script to mount gdrives, but I don't use the upload script or unionfs.
Now I have Sonarr installed and it cannot use two different root folders.

 

My drives are currently configured like this

Tv series locally /mnt/user/media/Tv
Gdrive archive /mnt/user/mount_rclone/media_archive_gdrive/Tv

Now I have created a new share which should merge the previous two: /mnt/user/media_unionfs

 

In my case I happen to have
/mnt/user/media/Tv/series1/season1
/mnt/user/media/Tv/series1/season2

and
/mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/season3
/mnt/user/mount_rclone/media_archive_gdrive/Tv/series1/season4

For Plex there's no problem, it manages to combine the folders by itself, but Sonarr can't.

 

I set up the script like this
RcloneRemoteName="media_archive_gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/media/Tv" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="150G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/media_unionfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="Plex-Media-Server" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{""\} # comma separated list of folders to create within the mount


But this way, the folder /mnt/user/media_unionfs only contains the files present in /mnt/user/mount_rclone/media_archive_gdrive/Tv, not combined with those present locally.

Where am I going wrong?

Change to:

 

LocalFilesShare="/mnt/user/media"

Link to comment
15 minutes ago, Sildenafil said:

I tried but nothing has changed

Ahh, on a PC now so I can read better. You need to store your local files in /mnt/user/media/media_archive_gdrive/Tv for it to work, and then:

LocalFilesShare="/mnt/user/media"

I.e. the union then combines the two /media_archive_gdrive directories.
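The reason this fix works is that mergerfs overlays its branches by relative path. A throwaway sketch under /tmp (paths are illustrative only, mirroring the layout discussed above) shows why both branches need the same media_archive_gdrive/Tv subtree:

```shell
# Illustrative only: recreate the two branch layouts under /tmp.
# mergerfs merges branches by RELATIVE path, so season1 (local) and
# season3 (cloud) only line up under one series1 folder if both branches
# contain a media_archive_gdrive/Tv/series1 tree:
mkdir -p /tmp/union_demo/media/media_archive_gdrive/Tv/series1/season1
mkdir -p /tmp/union_demo/mount_rclone/media_archive_gdrive/Tv/series1/season3

# Both branches now share the same relative structure:
find /tmp/union_demo -type d -name 'season*'
```

With the branches laid out like this, the union at /mnt/user/media_unionfs shows series1 with all four seasons, and Sonarr only needs the one root folder.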

Link to comment

Regarding why I'm losing my share from time to time, I get this error:

 

Oct 31 22:39:45 Unraid kernel: [ 18010] 0 18010 16976 2511 110592 0 0 notify

Oct 31 22:39:45 Unraid kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=rcloneorig,pid=6284,uid=0

Oct 31 22:39:45 Unraid kernel: Out of memory: Killed process 6284 (rcloneorig) total-vm:13626112kB, anon-rss:12490308kB, file-rss:4kB, shmem-rss:35408kB, UID:0 pgtables:25380kB oom_score_adj:0

Link to comment
10 hours ago, Bjur said:

Regarding why I'm losing my share from time to time, I get this error:

 

Oct 31 22:39:45 Unraid kernel: [ 18010] 0 18010 16976 2511 110592 0 0 notify

Oct 31 22:39:45 Unraid kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=rcloneorig,pid=6284,uid=0

Oct 31 22:39:45 Unraid kernel: Out of memory: Killed process 6284 (rcloneorig) total-vm:13626112kB, anon-rss:12490308kB, file-rss:4kB, shmem-rss:35408kB, UID:0 pgtables:25380kB oom_score_adj:0

That's what I expected to be the cause: you don't have enough RAM in your server. If you start uploading it will consume a lot of RAM. So you could look at the upload script, check the separate flags and reduce the ones that use your RAM. When I have my mounts and uploads running, they will often take a lot even on my quite beefy server.
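For the mount side, these are the rclone flags that most affect memory use; the values below are illustrative only, not the thread's script defaults, so adjust to taste:

```shell
# Config fragment (illustrative values): rclone mount flags to lower if
# rclone gets OOM-killed, as in the log above:
#   --buffer-size 32M           # per-open-file read buffer held in RAM
#   --vfs-read-chunk-size 32M   # initial chunk size requested per read
#   --vfs-read-ahead 256M       # instead of e.g. 1G as in the mount script
#   --vfs-cache-mode full       # spills chunks to the disk cache rather
#                               # than keeping everything in RAM
```

Each concurrently open file can hold up to `--buffer-size` plus its read-ahead in memory, so a handful of streams with large values adds up quickly on a 16GB box.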

Link to comment
6 hours ago, Kaizac said:

That's what I expected to be the cause: you don't have enough RAM in your server. If you start uploading it will consume a lot of RAM. So you could look at the upload script, check the separate flags and reduce the ones that use your RAM. When I have my mounts and uploads running, they will often take a lot even on my quite beefy server.

Thanks for the info.

I'm running Unraid in a VMware host with i5-9600K 32 GB memory where 16 GB are allocated to Unraid.

How much should there be then?

Ps. I wasn't even uploading, just normal playback where it stopped when I skipped back. 

Link to comment

Hi all, I would be interested to know how the rclone mount is updated. I have the problem that I always have to restart the server to see the current files, since after files are uploaded they are deleted locally and are thus unavailable until I restart the server. Do you use a routine, e.g. once a week, that updates the rclone mount? Thanks in advance for your help.

BR Paff

Link to comment
5 hours ago, Bjur said:

Thanks for the info.

I'm running Unraid in a VMware host with i5-9600K 32 GB memory where 16 GB are allocated to Unraid.

How much should there be then?

Ps. I wasn't even uploading, just normal playback where it stopped when I skipped back. 

I run 64GB RAM and have 30% constantly used. That doesn't mean you also need that, but 16GB is not a lot. You have to remember that during playback all the chunks are stored in your RAM if you are not using the VFS cache. And if you have Plex also transcoding in RAM, that will consume some too. So I would lower the chunk sizes in both the upload and mount scripts.

 

1 hour ago, Paff said:

Hi all, I would be interested to know how the rclone mount is updated. I have the problem that I always have to restart the server to see the current files. Since after files are uploaded, they are deleted locally and thus unavailable until I restart the server. Do you use a routine once a week? Which updates the RClone mount?  Thanks in advance for your help. 

BR Paff

This is an issue you should not have if you followed the instructions. You are missing files because you are not using mergerfs. For me files never disappear, because it doesn't matter whether a file moves from local to cloud.

Link to comment
