Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


20 minutes ago, eqjunkie829 said:

 

I've used rclone with mergerfs on my seedbox with no issues; however, this is my first time using rclone on Unraid and it's not working properly. Using one of your most recent scripts with local caching, rclone was downloading everything from my Google Drive, and it's now sitting in my /mnt/user0/mount_rclone/cache/gsuite/vfs/gsuite/Plex Folder/. I'm not sure what to adjust in the script to stop it from doing that. Any guidance is appreciated!

Are you sure it's downloading everything - maybe Plex is analysing files as part of scheduled maintenance?

 

If you want to reduce the size of the cache, lower the RcloneCacheMaxSize="400G" setting.

5 minutes ago, DZMM said:

Are you sure it's downloading everything - maybe Plex is analysing files as part of scheduled maintenance?

 

If you want to reduce the size of the cache, lower the RcloneCacheMaxSize="400G" setting.

Well, the folder on my Unraid box didn't exist a few days ago, as it was created by running the scripts you have on GitHub. I also tracked the data usage on my UniFi router, and it shows about 3TB of data received through the Google API over the weekend. I'm really sure the script caused it to start downloading my whole Google Drive to the local cache, as I only installed rclone a week ago and have only used your script so far.

3 minutes ago, eqjunkie829 said:

Is there a way for me to modify the Mount Script Version 0.96.9.1 to disable caching completely?

Roll back. I can't explain the behaviour you're seeing, as I've been running the latest rclone version for a while without any problems.


For some reason I had a problem with line 241 in the new script where you check if CA Appdata is backing up or restoring. I had to remove the outside square brackets on the conditional in order for it to work... weird:

# Check CA Appdata plugin not backing up or restoring
	if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then

I kept getting this error:

/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error in conditional expression
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error near `]'
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: `   if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]] ; then'
Script Finished Nov 17, 2020  16:41.15

So I removed the outside brackets, and it worked with no error:

# Check CA Appdata plugin not backing up or restoring
	if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]; then

 

7 hours ago, crazyhorse90210 said:

For some reason I had a problem with line 241 in the new script where you check if CA Appdata is backing up or restoring. I had to remove the outside square brackets on the conditional in order for it to work... weird:



Thanks - I've just added it.


  

8 hours ago, crazyhorse90210 said:

For some reason I had a problem with line 241 in the new script where you check if CA Appdata is backing up or restoring. I had to remove the outside square brackets on the conditional in order for it to work... weird:



 

I also noticed this yesterday; it should use either single brackets or double brackets consistently.

# Check CA Appdata plugin not backing up or restoring
    if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ]] || [[ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then
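
For reference, both working forms above can be demonstrated with a throwaway file (the paths below are arbitrary test files, not the real CA Backup ones):

```shell
# Demonstrate the two valid Bash forms; the script's original mixed
# "[[ ... ] || [ ... ]]" is a syntax error because [[ must close with ]].
f1=/tmp/ca_backup_test_a
f2=/tmp/ca_backup_test_b
touch "$f1"

# Form 1: two single-bracket tests joined by the shell's || operator
if [ -f "$f1" ] || [ -f "$f2" ]; then echo "form1 matched"; fi

# Form 2: one double-bracket conditional with || inside [[ ]]
if [[ -f "$f1" || -f "$f2" ]]; then echo "form2 matched"; fi

rm -f "$f1"
```

Both print their "matched" line because the first test file exists; the mixed-bracket version fails to even parse.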

 

@DZMM Hope you didn't chug it all at once ;)

On 11/17/2020 at 9:10 AM, axeman said:

This might be a me thing - since I need the rclone mounts available to my Windows machines, I have it in /mnt/user . 

If I try to copy a file into unioned mount from inside of UnRaid via MC, it works exactly as you'd want... the file goes right to the local share. 

However, if I do the same from a windows machine - it fails. 

Interestingly this doesn't seem to be problem on an Android device (using SolidExplorer). Nor does it happen with the @DZMM mergerfs based scripts. 

That is strange; I'm not sure why it would write to the _vfs upstream if you have your local path listed first in the union. For the record, you can export /mnt/disks/some_dir as a share through SMB. Example:

 

#unassigned_devices_start
#Unassigned devices share includes
   include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end


[some_dir]
   path = /mnt/disks/some_dir
   comment =
   browsable = yes
   # Public
   public = yes
   writeable = yes
   vfs object =

You simply add this to the SMB extras under Settings > SMB

1 hour ago, Bolagnaise said:

Doing some more investigation, as the problem has not disappeared: it looks like mergerfs is maxing my CPU (screenshot attached). This wasn't present in beta 30. Not sure where to go from here.

Can you file a report in the beta35 thread please.  I don't know if it's related, but beta30 and beta35 cause my machine to completely freeze/crash and I have to do a hard reset.

 

If anyone else is successfully using beta35 then please shout out! 

On 11/18/2020 at 10:50 AM, MowMdown said:

That is strange; I'm not sure why it would write to the _vfs upstream if you have your local path listed first in the union. For the record, you can export /mnt/disks/some_dir as a share through SMB. Example:

 



 

Thanks - I will try that. I didn't know it was possible. Incidentally, do you know where /mnt/disks/some_dir is physically located? Like, does it go on the cache drive, or is it in memory/RAM?

On 11/18/2020 at 10:50 AM, MowMdown said:

That is strange; I'm not sure why it would write to the _vfs upstream if you have your local path listed first in the union. For the record, you can export /mnt/disks/some_dir as a share through SMB. Example:

 



So even with this, Windows cannot seem to write to the share via SMB. I googled it, and it seems sharing an rclone VFS mount can work with some modifications. I don't know that I have the know-how to do that.

 

I'm grateful for your time, going through it like this with me. I may have to go back to mergerfs.

On 11/21/2020 at 6:29 PM, DZMM said:

Can you file a report in the beta35 thread please.  I don't know if it's related, but beta30 and beta35 cause my machine to completely freeze/crash and I have to do a hard reset.

 

If anyone else is successfully using beta35 then please shout out! 

Yep, done. Still trying to track down the issue, but CPU usage has dropped significantly after doing a complete power cycle instead of a reboot. I still see occasional 100% spikes that weren't prevalent in beta30, so I'm not sure.

 

Anyway, can you share your recommendations for the 'use cache pool' option for the mount_rclone, mount_unionfs and rclone_upload folders? I have updated the script you gave me to include the vfs cache option and it's running well; I just want to make sure it's fully optimised.

 

Does changing 'RcloneCacheMaxSize' automatically reduce the cache, or do you need to unmount/reboot to reduce the size? I currently have it set to 400G as per your script, but I'm considering buying another dedicated 2TB NVMe drive just for the vfs cache.

3 hours ago, Bolagnaise said:

Anyway, can you share your recommendations for the 'use cache pool' option for the mount_rclone, mount_unionfs and rclone_upload folders? I have updated the script you gave me to include the vfs cache option and it's running well; I just want to make sure it's fully optimised.

- mount_rclone and mount_mergerfs are virtual folders, so it doesn't matter. I've set mine to 'no' though.

- /local - user's choice whether to use a faster cache or pool drive, or the array. I've set mine to 'no' as I don't need fast access, and files don't tend to hang around long before being uploaded. I do use a separate /downloads for my nzbget intermediate files, which are saved on a pool drive, with completed files moved to the array.

 

3 hours ago, Bolagnaise said:

Does changing 'RcloneCacheMaxSize' automatically reduce the cache, or do you need to unmount/reboot to reduce the size? I currently have it set to 400G as per your script, but I'm considering buying another dedicated 2TB NVMe drive just for the vfs cache.

 

You need to remount as the size is set when you do the mount.  
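
A minimal sketch of that remount, assuming the paths and remote name used in the script in this thread (adjust for your setup); you then re-run your normal mount script so rclone starts with the new cache size:

```shell
# Sketch: the cache size is read once at mount time, so apply a new
# RcloneCacheMaxSize by unmounting and re-running the mount script.
# Path and remote name are assumptions taken from the posted script.
MountPoint="/mnt/user/mount_rclone/gdrive_vfs"

# Lazy-unmount the rclone mount; ignore errors if it isn't mounted
fusermount -uz "$MountPoint" 2>/dev/null || true

# Remove the checker file so the mount script doesn't think it's still running
rm -f "/mnt/user/appdata/other/rclone/remotes/gdrive_vfs/mount_running"

# ...then re-run the mount script with the new RcloneCacheMaxSize value
```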

 

I've gone for 400GB as that works well for me with the size of my array: I've got about 7 mounts, so it's 7x400=2.8TB of cached files in total out of my 16TB of storage. My two array drives are spun up pretty much 24x7, and I don't have a parity drive to slow them down, so I don't think I would benefit from an SSD or NVMe for the rclone cache. Remember these files are separate from the Plex metadata files, which are small and numerous, so those do benefit from a fast drive.
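
The arithmetic behind that sizing can be sanity-checked in a couple of lines of shell (numbers taken from the post above):

```shell
# Total rclone VFS cache budget: 7 mounts at 400G each
mounts=7
per_mount_gb=400
total_gb=$(( mounts * per_mount_gb ))
echo "Total cache budget: ${total_gb}G"   # prints "Total cache budget: 2800G", i.e. ~2.8TB
```

Worth keeping in mind when picking a dedicated cache drive: with multiple mounts the per-mount limit multiplies, so a 2TB drive would not actually hold seven full 400G caches.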

 

If I were you, I'd just use a normal HDD outside of your array so your parity setup doesn't slow the drive down.


Up until recently I've been using the OG scripts (no team drive, unionfs, etc.). I've migrated over to the new way of doing things and had some difficulty. I cobbled together something that worked for a small, specific use, but when I start from scratch using the scripts I can't get my whole gdrive mounted.

 

Inside mount_rclone I have /cache/ and /gdrive_media_vfs/. However, inside the gdrive folder I only have a mountcheck and one of the many folders I have in my gdrive. I'm sure I've just made an error somewhere, but I migrated my data from my drive to the team drive, and I'm unable to see those folders and files.

 

Any thoughts?

 

EDIT: I migrated files from "My Drive" to the Team Drive section, and those files aren't showing up when I use rclone lsd gdrive_media_vfs:. I only see an existing folder I had in the team drive and all the data/folders inside it. I migrated using the move folder command in Google Drive's web interface. When I use rclone lsd gdrive: (unencrypted), it shows 4 encrypted folders... anyone know what's happening, or could be happening, to those 3 folders?

 

9 hours ago, DZMM said:

 

- /local - user's choice whether to use a faster cache or pool drive, or the array. I've set mine to 'no' as I don't need fast access, and files don't tend to hang around long before being uploaded. I do use a separate /downloads for my nzbget intermediate files, which are saved on a pool drive, with completed files moved to the array.

If you remember, I moved from the old script, so my local is my rclone_upload folder. Here's my current script for reference.

 

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/rclone_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_unionfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex sonarr sonarr4K radarr radarr4K" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"downloads/completed,downloads/intermediate,downloads/seeds,Movies,TV Shows,4KMovies,4kTVShows"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --dir-cache-time $RcloneMountDirCacheTime \
    --log-level INFO \
    --poll-interval 15s \
    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
        docker stop $DockerStart
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# make sure mergerfs mount point is empty
        mv $MergerFSMountLocation $LocalFilesLocation
        mkdir -p $MergerFSMountLocation
# mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
            docker stop $DockerStart
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
    if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
        echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    else
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
        docker start $DockerStart
    fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

Thanks - I don't see anything there I missed.

 

I think the issue arose when I migrated my data from My Drive to the Team Drive. I used rclone lsd to look through my old mounts and new mounts, and it looks like the problem might stem from moving the data using the gdrive web UI. Is there a "right" way to move data from the original location ("My Drive") to the new location (the Team Drive)?
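
One approach worth trying (a sketch only; the remote names gdrive: and tdrive: are placeholders for your own unencrypted My Drive and Team Drive remotes) is to let rclone perform the move server-side, so nothing is re-downloaded and rclone's own view of the data stays consistent:

```shell
# Sketch: server-side move from a My Drive remote to a Team Drive remote.
# Remote names are assumptions -- substitute your own.
src="gdrive:"
dst="tdrive:"

if command -v rclone >/dev/null 2>&1; then
    # --drive-server-side-across-configs asks Google to move the files
    # without downloading them; --dry-run previews before committing.
    rclone move "$src" "$dst" --drive-server-side-across-configs --dry-run -v || true
fi
```

Once the dry-run output looks right, drop --dry-run to perform the move; moving through the web UI can leave crypt remotes pointing at paths that no longer line up.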

