Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


10 hours ago, Kaizac said:

I run 64GB of RAM and have 30% constantly in use. That doesn't mean you need that much, but 16GB is not a lot. Remember that during playback all the chunks are stored in RAM if you are not using the VFS cache, and if Plex is also transcoding in RAM, that consumes memory too. So I would lower the chunk sizes in both the upload and the mount scripts.
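For reference, the chunk and buffer sizes being discussed map onto standard rclone flags along these lines. The values below are purely illustrative, not a recommendation from the thread - smaller values cap per-stream RAM use at the cost of more API requests:

```shell
# Illustrative values only - tune to your RAM and connection.
rclone mount \
  --buffer-size 64M \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 256M \
  gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs
```

Each open file holds up to --buffer-size in RAM, so several simultaneous playbacks multiply accordingly.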

 

This is an issue you should not have if you followed the instructions. You are missing files because you are not using mergerfs. For me, files never disappear, because it doesn't matter whether the files move from local to cloud.

The only thing I changed was the name of the mountcheck file. I changed it because I have several servers that access the remote with the same .config, and I thought renaming it per server would avoid conflicts. I only changed it in the noted places; otherwise everything is as in the standard start script.
Maybe I have missed a connection here:

 

 

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Location for mergerfs mount

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
    LocalFilesLocation="/tmp/$RcloneRemoteName"
    eval mkdir -p $LocalFilesLocation
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Check connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheckserverIT" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheckserverIT file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheckserverIT file for ${RcloneRemoteName} remote."
    touch mountcheckserverIT
    rclone copy mountcheckserverIT $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 022 \
    --uid 99 \
    --gid 100 \
    --dir-cache-time $RcloneMountDirCacheTime \
    --attr-timeout $RcloneMountDirCacheTime \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --vfs-read-ahead 1G \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheckserverIT" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
        docker stop $DockerStart
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheckserverIT" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: Sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# make sure mergerfs mount point is empty
        mv $MergerFSMountLocation $LocalFilesLocation
        mkdir -p $MergerFSMountLocation
# mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheckserverIT" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
            docker stop $DockerStart
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
    if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
        echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    else
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
        docker start $DockerStart
    fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

 

Would it be a problem to have the same mountcheck file created with several servers uploading media? If not I would then just move back to the original filename, if this would be the problem.

 

Thanks in advance.

 

BR Paff

On 11/1/2022 at 8:23 PM, Kaizac said:

I run 64GB of RAM and have 30% constantly in use. That doesn't mean you need that much, but 16GB is not a lot. Remember that during playback all the chunks are stored in RAM if you are not using the VFS cache, and if Plex is also transcoding in RAM, that consumes memory too. So I would lower the chunk sizes in both the upload and the mount scripts.

 

I will try to find some reasonably priced memory and expand. I only have 2 memory slots on my board.

 

Also, can you see where this goes wrong?

I can't write to any of my shares except the blue-colored one. I can see its permissions have changed compared to the others.

They should all use the same scripts.

[screenshot of shares and their permissions]

On 9/3/2022 at 9:26 AM, Bolagnaise said:

 

 

Nope not needed,

 

You need to run the New Permissions option under Tools, after stopping Docker, to update their permissions.

You can also run this script to update permissions for other folders, separately from the New Permissions tool, or to force it:

#!/bin/sh
for dir in "/mnt/user/!!your folder path here!!"
do
    echo "$dir"
    chmod -R ug+rw,ug+X,o-rwx "$dir"
    chown -R nobody:users "$dir"
done

 

The reason it works in 6.9 is that 6.10-rc3 introduced a bug fix: user share file system permissions were not being honoured, and containers with permissions assigned as 99:100 (nobody:users) actually had root access. 6.10 fixed this.

Limetech should really make this New Permissions tool run by default on first boot of 6.10, as a lot of people have had this issue.

 

 

An update.

If I don't schedule the share scripts, but instead run the permission tool on the mount_mergerfs, local and rclone folders and afterwards manually start each of my shares, I can write to them from Windows.

 

So my question about the script above: how do I include all 3 folders in the one script, instead of having to create a script for each library?

 

I have 5 rclone mounts, so I have created 5 scripts for those, but I would rather not have 3 more scripts just for permissions - then I can schedule one script to run on startup, and only have to mount each rclone share manually afterwards.
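The permission script quoted above already loops, so one script can cover all the folders: list them all after "for dir in". A sketch - the three paths below are examples, substitute your own shares:

```shell
#!/bin/sh
# One permissions script for several folders: list every path in the loop.
# The three paths below are examples - replace them with your shares.
for dir in /mnt/user/mount_mergerfs /mnt/user/local /mnt/user/mount_rclone
do
    echo "$dir"
    chmod -R ug+rw,ug+X,o-rwx "$dir"
    chown -R nobody:users "$dir"
done
```

Scheduled once at startup, this covers all the libraries in a single run.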


Hi guys 

 

Quick question: when you choose the "move" feature, is it supposed to keep the folders in the rclone upload share?

 

My script uploads the file together with its folder, but does not delete the now-empty folder locally afterwards.

 

 

 

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G, or 'off'/'0' for unlimited.  The script uses --drive-stop-on-upload-limit, which stops the script if the 750GB/day limit is reached, so you no longer have to 'trickle' your files all day if you don't want to - e.g. you could just run an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="10M"
BWLimit2Time="08:00"
BWLimit2="10M"
BWLimit3Time="16:00"
BWLimit3="10M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="5" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on remote for backup files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
###############################################################################

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
	LocalFilesLocation="$LocalFilesShare"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
	echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
	exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
	CounterNumber=$(find -name 'counter*' | cut -c 11,12)
	CounterCheck="1"
	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		CounterNumber="1"
	fi
	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
	ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	else
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
	fi
else
	RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
	DeleteEmpty="--delete-empty-src-dirs "
else
	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - removing --delete-empty-src-dirs from upload."
	DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
	LocalFilesLocation="$LocalFilesShare"
	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
	BackupRemoteLocation=""
	BackupRemoteDeletedLocation=""
	BackupRetention=""
	BackupDir=""
fi

# process files
	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
	--user-agent="$RcloneUploadRemoteName" \
	-vv \
	--buffer-size 512M \
	--drive-chunk-size 512M \
	--max-transfer 725G \
	--tpslimit 3 \
	--checkers 3 \
	--transfers 3 \
	--order-by modtime,$ModSort \
	--min-age $MinimumAge \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--exclude *fuse_hidden* \
	--exclude *_HIDDEN \
	--exclude .recycle** \
	--exclude .Recycle.Bin/** \
	--exclude *.backup~* \
	--exclude *.partial~* \
	--drive-stop-on-upload-limit \
	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
	--bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
	else
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		CounterNumber=$((CounterNumber+1))
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
	fi
else
	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

On 11/3/2022 at 1:43 PM, animeking said:

Why is mount_mergerfs changing my permissions and locking everything? I can't write to anything. When I change it, it automatically changes back to read-only.

I have the same problem, even after manually using the permission script from Bolagnaise. Any thoughts?


Be aware that there seems to be a bug in the latest version of mergerfs on the GitHub page.
I noticed it after rebooting my Unraid machine; afterwards mergerfs was crashing every time.

The crashes occurred after write events.

 

In my dmesg logging:

[  467.808897] mergerfs[7466]: segfault at 0 ip 0000000000000000 sp 0000147fb0e0e1a8 error 14
[  467.808921] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.

 

The only way to get the mounts working again was to run the unmount and mount scripts, but as soon as there was a write event the issue immediately occurred again (0 kB written files).

 

I've temporarily solved it by editing the mergerfs-static-build image so it doesn't pull the latest version of mergerfs from GitHub.
Instead I'm now using the 'd1762b2bac67fbd076d4cca0ffb2b81f91933f63' version from 7 Aug.

And that seems to be working again after copying the mergerfs binary to /bin :-)

 

The non-working mergerfs version is:

mergerfs version: 2.33.5-22-g629806e

 

Working version is:

mergerfs version: 2.33.5

 

 

 


 

 

On 11/13/2022 at 4:16 PM, robinh said:

Be aware that there seems to be a bug in the latest version of mergerfs on the GitHub page.
I noticed it after rebooting my Unraid machine; afterwards mergerfs was crashing every time.

The crashes occurred after write events.

 

In my dmesg logging:

[  467.808897] mergerfs[7466]: segfault at 0 ip 0000000000000000 sp 0000147fb0e0e1a8 error 14
[  467.808921] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.

 

The only way to get the mounts working again was to run the unmount and mount scripts, but as soon as there was a write event the issue immediately occurred again (0 kB written files).

 

I've temporarily solved it by editing the mergerfs-static-build image so it doesn't pull the latest version of mergerfs from GitHub.
Instead I'm now using the 'd1762b2bac67fbd076d4cca0ffb2b81f91933f63' version from 7 Aug.

And that seems to be working again after copying the mergerfs binary to /bin 🙂

 

The non-working mergerfs version is:

mergerfs version: 2.33.5-22-g629806e

 

Working version is:

mergerfs version: 2.33.5

 

 

 

Where do you do this? Because I seem to have the same problem.


Sorry, I didn't document all my steps, but according to my history file it should be something like this:

 

# Pull the docker image
docker pull trapexit/mergerfs-static-build:latest

# List docker images - note the image ID of mergerfs-static-build
docker images

# Start a shell inside the mergerfs-static-build image
docker run -it <your image ID> /bin/sh

# Edit the build script (vi, or whatever editor the image provides)
vi /tmp/build-mergerfs

# Add a checkout of the known-good version, directly after the 'cd mergerfs' line:
git checkout d1762b2bac67fbd076d4cca0ffb2b81f91933f63

# Save the file, then leave the container with 'exit'

# Commit your changes to a new image
docker commit [CONTAINER_ID] [new_image_name]
# e.g. docker commit 1015996dd4ee test-mergerfsstat

# Build mergerfs using the new image
docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm test-mergerfsstat /tmp/build-mergerfs

# Move the compiled binary to /bin
mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin

 

Hopefully this makes it a bit clearer.
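A non-interactive way to get the same result is to bake the checkout into a derived image with a small Dockerfile. This is an untested sketch: it assumes, as in the steps above, that the image's build script lives at /tmp/build-mergerfs and contains a 'cd mergerfs' line:

```shell
# Derive a pinned image instead of editing the container by hand.
mkdir -p /tmp/mergerfs-pin && cd /tmp/mergerfs-pin
cat > Dockerfile <<'EOF'
FROM trapexit/mergerfs-static-build:latest
RUN sed -i '/cd mergerfs/a git checkout d1762b2bac67fbd076d4cca0ffb2b81f91933f63' /tmp/build-mergerfs
EOF
docker build -t mergerfs-pinned .

# Build and collect the binary exactly as the mount script does
docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm mergerfs-pinned /tmp/build-mergerfs
mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
```

The sed line appends the git checkout right after 'cd mergerfs' in the build script, so rebuilding the image later still yields the pinned commit.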

24 minutes ago, robinh said:

Sorry I didn't document all my steps but according to my history file it should be something like this:

 

# Pull the docker image
docker pull trapexit/mergerfs-static-build:latest

# List docker images - note the image ID of mergerfs-static-build
docker images

# Start a shell inside the mergerfs-static-build image
docker run -it <your image ID> /bin/sh

# Edit the build script (vi, or whatever editor the image provides)
vi /tmp/build-mergerfs

# Add a checkout of the known-good version, directly after the 'cd mergerfs' line:
git checkout d1762b2bac67fbd076d4cca0ffb2b81f91933f63

# Save the file, then leave the container with 'exit'

# Commit your changes to a new image
docker commit [CONTAINER_ID] [new_image_name]
# e.g. docker commit 1015996dd4ee test-mergerfsstat

# Build mergerfs using the new image
docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm test-mergerfsstat /tmp/build-mergerfs

 

Hopefully this makes it a bit clearer.

Thanks for the explanation.

My mergerfs is not in a Docker container; I just followed the guide here, so I'm guessing mergerfs is being built in the script.

So I don't know how to fix this.


Are you sure the mount script is not using Docker to build mergerfs, and then placing the compiled binary in /bin on your host?

 

In the examples posted in this topic you can see the following happening in the mount script.
Maybe you were not aware that it was using Docker to build the mergerfs binary.

 

# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin

 

14 minutes ago, robinh said:

Are you sure the mount script is not using Docker to build mergerfs, and then placing the compiled binary in /bin on your host?

 

In the examples posted in this topic you can see the following happening in the mount script.
Maybe you were not aware that it was using Docker to build the mergerfs binary.

 

# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin

 

It's being built with Docker, yes. You can often see the mergerfs container among your removed containers. I will have to find the correct syntax to get the working mergerfs version if this issue persists.

On 11/13/2022 at 4:16 PM, robinh said:

Be aware that there seems to be a bug in the latest version of mergerfs on the github page.
I've noticed it after I did reboot my unraid machine and afterwards the mergerfs was crashing everytime.

The crashes did occur after write events.

 

In my dmesg logging:

[  467.808897] mergerfs[7466]: segfault at 0 ip 0000000000000000 sp 0000147fb0e0e1a8 error 14
[  467.808921] Code: Unable to access opcode bytes at RIP 0xffffffffffffffd6.

 

The only way to get the mounts working again was using the unmount and mounting script, but as soon there was a write event the issue occured directly again (0 kb written files).

 

I've temporary solved it by edditing the mergerfs-static-build image so it wouldn't pull the latest version of mergerfs from github.
Instead I'm using now the 'd1762b2bac67fbd076d4cca0ffb2b81f91933f63' version from 7 aug. 

And that seems to be working again after copying the mergerfs to /bin 🙂

 

Not working mergerfs version is:

mergerfs version: 2.33.5-22-g629806e

 

Working version is:

mergerfs version: 2.33.5

 

 

 


 

 

Just checked the GitHub page, and it still shows 2.33.5 from April under Releases. So what is the August one you are referring to?

23 minutes ago, robinh said:

The release on GitHub is from April indeed, but the Docker build pulls master, which was recently updated (4 days ago).
 

Just checked with a reboot, and the script is currently pulling 2.33.5. So right now there are no issues with using this script as far as I can tell. Thanks for the heads-up - if the bug persists in the next release, we know to switch back!
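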

1 hour ago, Kaizac said:

Just checked with a reboot, and the script is currently pulling 2.33.5. So right now there are no issues with using this script as far as I can tell. Thanks for the heads-up - if the bug persists in the next release, we know to switch back!

 

In that case they might have solved the issue, since on Sunday the following version was being installed: mergerfs 2.33.5-22-g629806e.


I will reboot my Unraid machine later this week to reinstall 6.11.3. I installed it last weekend, but suspected that release as the root cause of the mergerfs issues, so I reverted to 6.11.2.

 

 

 

On 11/15/2022 at 4:40 PM, robinh said:

 

In that case they might have solved the issue, since on Sunday the following version was being installed: mergerfs 2.33.5-22-g629806e.


I will reboot my Unraid machine later this week to reinstall 6.11.3. I installed it last weekend, but suspected that release as the root cause of the mergerfs issues, so I reverted to 6.11.2.

 

 

 

I hope so - I think I've been a victim of the bug, where my mount would keep disconnecting, sometimes too fast for my script to fix it.

19 hours ago, Viper359 said:

I have attached my mount script. What should I change to stop hundreds of GB of data being downloaded daily via my google drive?

mount.txt (attached)

That's one of the drawbacks of the cache - it caches all reads, e.g. even when Plex, Sonarr etc. are doing scans. You could turn off any background scans your apps are doing. I accept it as a necessary evil in return for the amount of storage I'm getting for £11/pm (I think that's what I pay).
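If the downloads come from the VFS cache re-filling itself, the knobs live in the mount script. A hedged sketch - the values are illustrative, and switching to --vfs-cache-mode writes is an alternative that stops caching reads altogether:

```shell
# Illustrative: keep 'full' mode but cap how much the cache can pull and hold.
# Cutting --vfs-read-ahead (1G in the mount script earlier in the thread)
# tends to have the biggest effect on background-scan traffic.
rclone mount \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 24h \
  --vfs-read-ahead 128M \
  gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs
```

Smaller read-ahead means scans fetch only what they actually touch, at the cost of slower seeks during playback.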

