Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


16 minutes ago, Kaizac said:

Knowing that Google has been limiting new accounts and ending unlimited storage, and given the multiple stories of people who got their whole drive deleted because of copyright material.

I am already aware of this, but I took the risk knowing that no individual employee looks at which files are stored; that check runs automatically via hash values, which encrypting the files defeats. Even if an employee then looks manually, he sees a ".bin" file and has no password to decrypt it, and Google doesn't really care what I name my Linux ISOs and holiday videos.
But an offline solution is already in the works, which is why I switched to Unraid.

Sorry for the off-topic, back to the topic.

7 hours ago, Kaizac said:

Krusader working is not surprising, since it uses the root account and so isn't limited by permissions. I looked over your scripts quickly and see nothing strange, so I think it's a samba issue. Did you upgrade to 6.11? There were changes to samba in that release; maybe check that out? It could explain why it stopped working.

 

 

Wow, I'm amazed your script for Server A worked....

 

You configured your gcrypt as gsuite:/crypt/media, but the correct form would be gsuite:crypt/media.
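As a quick sanity check of the syntax (the remote and folder names here are just the ones from this post), rclone paths take the form remote:path with no leading slash:

```shell
# rclone path syntax: <remote>:<path>, no leading slash, e.g.:
#   gsuite:crypt/media    <- correct
#   gsuite:/crypt/media   <- avoid
remote_path="gsuite:crypt/media"
echo "$remote_path"
```

You can test a path quickly with "rclone lsd gsuite:crypt/media" to list its directories before putting it in the mount script.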

You also had the cache-dir in your A script set to /cache. Did that work? In Unraid you have to define the actual path; it seems you did that correctly in the new script.

In the A script you didn't end the mount command with an "&"; in the new script this is already fixed by default.

In your new script you added --allow-non-empty as an extra command. This is very risky, so make sure you've thought it through.

 

What I find most worrying is that your crypt doesn't actually encrypt anything. Is that by choice? If you do want to switch to an actually encrypted crypt, you will have to send all your files through the crypt to your storage; it won't automatically encrypt the files already within that mount.

 

Your specific questions:

1. Don't use "Run Script" in User Scripts; always use "Run in Background" when running the script. The plain "Run Script" option stops the script as soon as you close the popup, which might explain why your mount drops right away.

2. You have the rclone commands here: https://rclone.org/commands/. Beyond that, you can for example use "rclone size gcrypt:" to see how big your gcrypt remote is in the cloud.

3. You can unmount with fusermount -uz /path/to/remote. Make sure you don't have any dockers or transfers running with access to that mount, though, because they will start writing into your mergerfs folder and cause problems when you mount again.
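A minimal sketch of that unmount step as a helper (the mount path in the usage line is just this thread's example; the helper does not stop dockers for you):

```shell
# Lazily unmount an rclone FUSE mount.
# -u unmounts; -z detaches lazily even if the path is still busy.
unmount_remote() {
    fusermount -uz "$1"
}
# Usage: unmount_remote /mnt/user/mount_rclone/gcrypt
```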

Yes, 6.11 and Windows 11.

 

I will check it out, thanks a lot.


Hi all, another question about the upload script.

I'm getting this now:

 

2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 90.241631ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 143.701353ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 222.186098ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 305.125972ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 402.588316ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 499.64329ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 589.545348ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 676.822802ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 680.141577ms
2022/10/06 05:21:57 DEBUG : pacer: Reducing sleep to 694.895337ms
2022/10/06 05:21:58 DEBUG : pacer: Reducing sleep to 307.907209ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 0s
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 78.586386ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 159.649286ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 198.168036ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 245.411694ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 330.517403ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 429.05441ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 523.306138ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 609.645869ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 690.942129ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 681.587878ms
2022/10/06 05:21:59 DEBUG : pacer: Reducing sleep to 639.166177ms
2022/10/06 05:22:00 DEBUG : pacer: Reducing sleep to 66.904708ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 0s
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 87.721382ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 187.616721ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 186.994169ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 285.041735ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 352.336246ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 449.015128ms
2022/10/06 05:22:01 DEBUG : pacer: Reducing sleep to 547.412525ms
2022/10/06 05:22:01 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 1h22m0.4s

 

 

Do I need to worry? I've been seeing this for maybe the last week.

I put my upload script in the post.

Thanks all.

 

upload.txt

On 10/4/2022 at 6:07 PM, francrouge said:

Hi, yes. In Krusader I have no problem, but on Windows with network shares it's not working anymore.



I can't edit, rename, delete, etc. on the gdrive mount in Windows; my local shares are OK.


 

 

 

I will also add my mount and upload scripts.

Maybe I'm missing something.

Should I try the new permissions feature, do you think?

Thanks.

Attachments: upload.txt, mount.txt, unraid-diagnostics-20221004-1818.zip

 

Are you accessing your content through the "mount_rclone" folder? Is it shown under the unRAID shares tab? If so, what are the share settings for the folder?

7 hours ago, francrouge said:

Hi all, another question about the upload script. I'm getting this now:

[pacer DEBUG log snipped; see the post above]

2022/10/06 05:22:01 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 1h22m0.4s

Do I need to worry? I've been seeing this for maybe the last week. I put my upload script in the post.

upload.txt

No problem, but I think it shows that you are rate limited, so it will keep retrying until the limit lifts. I don't understand why you added --max-transfer: you already have --drive-stop-on-upload-limit, which will end the script when you hit the limit. Maybe try removing your --max-transfer flag and see what the script does?
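A hedged sketch of that upload call without --max-transfer, leaning only on --drive-stop-on-upload-limit to end the run at Google's daily cap (the source path, remote name, and extra flags in the usage line are examples, not this user's exact settings):

```shell
# Move local files to the remote; --drive-stop-on-upload-limit aborts
# cleanly when the 750GB/day drive quota is hit, so no --max-transfer needed.
upload_files() {
    rclone move "$1" "$2" \
        --drive-stop-on-upload-limit \
        --min-age 15m \
        -vv
}
# Usage: upload_files /mnt/user/local/gdrive_media_vfs gdrive_media_vfs:
```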


Is anyone else having a problem (I think since 6.11) where files become briefly unavailable? I used to run my script via cron every couple of minutes, but over the last couple of days I've found that the mount will be up yet Plex etc. need restarting manually, i.e. I think the files became unavailable for such a short period that my script missed the event and didn't stop and restart my dockers.

 

I'm wondering if there's a better way than looking for the mountcheck file, e.g. a Plex script that sits alongside the current script, stops Plex if files become unavailable, and relies on the existing script to restart Plex?
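A hedged sketch of that idea (the path and container names in the usage line are hypothetical examples, not the guide's defaults): a small watchdog run from cron that stops the dockers the moment the check file disappears, leaving the existing mount script to bring them back:

```shell
# Stop the given dockers if the mountcheck file is missing, so a
# brief dropout doesn't leave Plex etc. running against a dead mount.
check_mount() {
    mountcheck="$1"; shift
    if [ ! -f "$mountcheck" ]; then
        echo "CRITICAL: mount gone - stopping: $*"
        docker stop "$@"
    fi
}
# Usage (e.g. from cron every minute):
# check_mount /mnt/user/mount_mergerfs/gdrive_media_vfs/mountcheck plex sonarr
```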

14 hours ago, DZMM said:

Is anyone else having a problem (I think since 6.11) where files become briefly unavailable? [...]

 

I haven't been having issues with that; it's been running pretty well, actually. Maybe add a log file and change the log level to debug to see if there is any indication of what's going on when it happens? It could also be an issue with mergerfs.

 

    --log-level DEBUG \

    --log-file=/var/log/rclone \

7 hours ago, FabrizioMaurizio said:

Hi all, I updated from 6.9 to 6.11 recently and now I don't have write permission to the mergerfs folder. I tried Docker Safe New Perms but it didn't help. Any suggestions?

 

Can you post your mount script and file permissions of your directories?

17 hours ago, Roudy said:

 

Can you post your mount script and file permissions of your directories?

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_t1_1" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/gdrive_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="100G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
#MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--vfs-read-ahead 30G"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
	LocalFilesLocation="/tmp/$RcloneRemoteName"
	eval mkdir -p $LocalFilesLocation
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
	eval mkdir -p $LocalFilesLocation
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
	mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
	echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
	echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
	exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
	echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
	touch mountcheck
	rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
	if [[  $CreateBindMount == 'Y' ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
		if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
			echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
			ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
		fi
		echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
	else
		RCloneMountIP=""
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
	fi
# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--dir-cache-time $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 15s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
	echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
	sleep 5
	echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
	if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
	else
		echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
		docker stop $DockerStart
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
		exit
	fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
	if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
		echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
	else
# check if mergerfs already installed
		if [[ -f "/bin/mergerfs" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
		else
# Build mergerfs binary
			echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
			mkdir -p /mnt/user/appdata/other/rclone/mergerfs
			docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
			mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
			echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
			sleep 5
			if [[ -f "/bin/mergerfs" ]]; then
				echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
			else
				echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
				rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
				exit
			fi
		fi
# Create mergerfs mount
		echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
		if [[  $LocalFilesShare2 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare2=":$LocalFilesShare2"
		else
			LocalFilesShare2=""
		fi
		if [[  $LocalFilesShare3 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare3=":$LocalFilesShare3"
		else
			LocalFilesShare3=""
		fi
		if [[  $LocalFilesShare4 != 'ignore' ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
			LocalFilesShare4=":$LocalFilesShare4"
		else
			LocalFilesShare4=""
		fi
# make sure mergerfs mount point is empty
		mv $MergerFSMountLocation $LocalFilesLocation
		mkdir -p $MergerFSMountLocation
# mergerfs mount command
		mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
		echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
		if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
			echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
		else
			echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
			docker stop $DockerStart
			rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
			exit
		fi
	fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
		echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
	else
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
		echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
		docker start $DockerStart
	fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

  


Edited by FabrizioMaurizio
On 10/7/2022 at 3:56 PM, Roudy said:

 

Can you temporarily make it public and see if you're able to edit the files? I want to make sure it is actually inheriting those permissions.

I tried it just now and it does not seem to make a difference.

 

I can create files and folders, but I can't delete or rename them, and only in my mount_rclone folder.

 

 

thx

 

On 10/9/2022 at 12:04 AM, francrouge said:

I tried it just now and it does not seem to make a difference. I can create files and folders, but I can't delete or rename them, and only in my mount_rclone folder.

 

Did you update to the latest stable? They fixed something with samba again.

On 11/6/2018 at 1:44 PM, DZMM said:

Getting Started

 

Install the rclone plugin and via command line run rclone config and create 2 remotes:

  • gdrive: - a drive remote that connects to your gdrive account. Recommend creating your own client_id
  • gdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:

I am trying to get this set up but am struggling with a few things. You mention that I have to set up 2 remotes, but I don't know how. I found this video from Spaceinvaderone:

but that video is very old. I followed what he did and created 2 remotes. Is that the correct way, or do we need to create them some other way? This is how they look now: [screenshot of the two remotes]

I have made a Google business account and have my own domain. I then upgraded to Enterprise, where it said unlimited storage, but when I look in Google Drive it says there is only 5TB. Do you know why that is?

Edited by workermaster

#1 Does anyone have hints for getting files to play faster?

Trying to read a 1080p 10 Mbit file takes like 1 min to start.

 

# create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 000 \
    --uid 99 \
    --gid 100 \
    --dir-cache-time $RcloneMountDirCacheTime \
    --attr-timeout $RcloneMountDirCacheTime \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --vfs-read-ahead 500m \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &
 

 

#2 Also, do you know if it's possible to direct play through Plex, or is it always converting? 🤔

12 minutes ago, francrouge said:

#1 Does anyone have hints for getting files to play faster? Trying to read a 1080p 10 Mbit file takes like 1 min to start.

[mount command snipped; see the post above]

#2 Also, do you know if it's possible to direct play through Plex, or is it always converting? 🤔
My mount settings are above in one of my posts. I don't know your download speed, but if it isn't that high, you might be downloading chunks that are too big or reading too far ahead. And your dir-cache-time can be 9999h.

 

Regarding your second question, what do you mean by direct play? Within Plex, direct play normally means that your client device (media player) plays the file directly, without any transcoding. So that is definitely possible; it just depends on your media player. Playing a 4K file on a 1080p Chromecast will of course lead to a transcode, and a lot of burned-in subtitles also trigger transcoding for the subtitle part, though not the video part.

 

For your samba issues, I would suggest you reboot without anything mounted and no dockers running, then just put a file in your mount_mergerfs folder and see whether you can open and edit that file.
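If the slow start does turn out to be bandwidth-related, the chunk and read-ahead flags are the usual knobs in the mount command. A hedged example of values to experiment with (illustrative numbers, not recommendations):

```shell
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit 1G \
    --vfs-read-ahead 128M \
```

Smaller initial chunks generally mean playback starts sooner on slower connections; rclone doubles the chunk size up to the limit as the read continues.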

1 hour ago, workermaster said:

I am trying to get this set up but am struggling with a few things. You mention that I have to set up 2 remotes, but I don't know how. [...]

Did you create these through the shell/terminal? It seems you are missing the following for the crypt mount:

filename_encryption = standard
directory_name_encryption = true

Or maybe you don't want those options enabled?
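For comparison, a crypt remote with those options set typically looks something like this in rclone.conf (the section name, underlying remote path, and obscured passwords are placeholders, not this user's actual values):

```ini
[gcrypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
```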

 

The googleworkspace mount seems fine. You could add:

server_side_across_configs = true

 

About the 5TB limit: right now Google Workspace storage is a bit unclear for new accounts. The 5TB is the personal drive limit. It shows that for me as well, but I can just go past it (I use team drives). People with new accounts have also reported that they can't upload more than 5TB; you then need 2 more accounts and have to ask Google for more storage each time with an explanation. You can upload 750GB per day per account (or use service accounts to get 750GB per service account, but that is a bit too complicated for you right now, I think). So you'll just have to test whether you can go past the 5TB.
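To see where an account actually stands against those caps, rclone has built-in reporting commands; a small helper as a sketch (the remote names in the usage line are examples):

```shell
# Report the storage quota the account advertises, and how much data
# a given remote currently holds.
check_quota() {
    rclone about "$1"
    rclone size "$2"
}
# Usage: check_quota gdrive: gcrypt:
```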

21 hours ago, Kaizac said:

Did you create these through the shell/terminal? Seems you are missing the following for the crypt mount: [...]

Thanks for your help. 

 

I thought that encryption was enabled, but I'm not sure.

I have opened the terminal again:


 

I then edited the googleworkspace remote


I select 1 there because I want the remote to have full access. 

I left this one blank:

 

I left the next options at their defaults, but I am not sure what to do with the shared drive option.


 

I left it on No for now

 


 

 

I think that should be fine, but I am not sure if it is configured correctly.

 

Here is the crypt remote:

I just press Enter because I want it to encrypt the googleworkspace remote

 


 


 


 

I am not sure if I need to change anything here. 


How/where would you suggest placing the (local) mounts, going by Trash's guides?

To keep hardlinks working across all the arrs and downloaders, I should not use the default /user/local/gdrive_media_vfs/, correct?

 

 

Trash's hardlink folder guide:

data
├── torrents
│  ├── movies
│  ├── music
│  └── tv
├── usenet
│  ├── movies
│  ├── music
│  └── tv
└── media
    ├── movies
    ├── music
    └── tv
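One way to reconcile this tree with the script's settings: point LocalFilesShare at a single data share laid out like Trash's tree, so torrents/, usenet/ and media/ all sit on one branch of the mergerfs mount and hardlinks keep working. A hedged sketch (the paths are examples, not the guide's defaults):

```shell
LocalFilesShare="/mnt/user/data"              # holds torrents/, usenet/, media/
MergerfsMountShare="/mnt/user/mount_mergerfs"
# Then map the SAME host path into every docker, e.g.
#   /mnt/user/mount_mergerfs/<remote>/ -> /data
# so the arrs and downloaders all see one filesystem and hardlinks survive.
```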

 

On 11/6/2018 at 1:44 PM, DZMM said:

Getting Started

 

Install the rclone plugin and via command line run rclone config and create 2 remotes:

  • gdrive: - a drive remote that connects to your gdrive account. Recommend creating your own client_id
  • gdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:

 

 

 

I am going to create the remotes again, since my last ones were a bit weird. I have just one question (for now): you mention that we need 2 remotes and have given them both a name here, but in the scripts you call the mount "gdrive_vfs". Is this correct? Shouldn't the name of the mount in the script be the name of the crypt remote, if you want all the data going to gdrive to be encrypted?
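For what it's worth, a hedged illustration of how the script setting and the remotes line up, assuming the remote names from the quoted guide:

```shell
RcloneRemoteName="gdrive_media_vfs"  # the crypt remote from the guide, WITHOUT ':'
# Mounting the crypt remote means data written through the mount goes
# through the crypt layer and lands encrypted on gdrive:
```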

