
Guide: How To Use Rclone To Mount Cloud Drives And Play Files


6 hours ago, KeyBoardDabbler said:

If I delete a file from my mount_mergerfs folder, I see the change in my Google Drive within a few seconds.

Check the folder activity on the right side.

Also you can "rename" a file locally, then see which encrypted name changes in your drive. Then rename it back or delete it, knowing you have identified the correct file/folder.
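If it's a crypt remote, rclone can also translate the names for you - a quick sketch, assuming a crypt remote called crypt: and an illustrative filename:

# print the encrypted name that corresponds to a plain filename (drop --reverse to decode instead)
rclone cryptdecode --reverse crypt: "Some Movie (2020).mkv"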

Thanks for the info.


Is anybody using the built-in rclone union for their GDrive and local mounts with VFS caching? I seem to be struggling with it in a strange way.

 

I have three rclone remotes:

 

1. Gsuite:

2. Crypt: (this wraps Gsuite:)

3. Union: (this wraps crypt:media and /mnt/disks/media/) (this is also what I am caching w/ vfs shown below)

 

My issue seems to be when I go to issue 

rclone move /mnt/user/media/movies crypt:media/movies --delete-empty-src-dirs

that it moves successfully but then Plex cannot play the file. It can see it just fine but I get an input/output error unless I issue a vfs/refresh rclone rc command. Should I be caching crypt before unionizing it?

rclone mount \
       --allow-other \
       --dir-cache-time 720h \
       --poll-interval 15s \
       --buffer-size 256M \
       --cache-dir=/mnt/disk3/system/rclone/cache \
       --vfs-cache-mode writes \
       --vfs-cache-max-size 100G \
       --vfs-cache-max-age 168h \
       --vfs-read-chunk-size 128M \
       --vfs-read-chunk-size-limit off \
       --rc \
       --rc-addr 192.168.1.200:5572 \
       --syslog \
       union: /mnt/disks/media &
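For reference, the refresh I have to issue looks something like this (a sketch, pointing at the --rc address from the mount command above):

rclone rc vfs/refresh recursive=true --url http://192.168.1.200:5572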

 

 

Edited by MowMdown

4 hours ago, MowMdown said:

Is anybody using the built-in rclone union for their GDrive and local mounts with VFS caching? I seem to be struggling with it in a strange way.

Nope - I was having different problems (I can't remember what right now) so I'm sticking with mergerfs.


@DZMM

 

I am having the same issue I think @HonkyKONG22  was having.

 

/local/gdrive_media_vfs
    /gdrive_media_vfs
    /tv

/mount_mergerfs/gdrive_media_vfs
    /gdrive_media_vfs
    /tv

/mount_rclone/gdrive_media_vfs
    /gdrive_media_vfs
    /tv

 

Any help would be appreciated. I can't figure out why I have the extra gdrive_media_vfs folder.

log.txt

rclone_config.txt

mount script.txt

Edited by lzrdking71

7 hours ago, lzrdking71 said:

@DZMM

 

I am having the same issue I think @HonkyKONG22  was having.

 

/local/gdrive_media_vfs
    /gdrive_media_vfs
    /tv

/mount_mergerfs/gdrive_media_vfs
    /gdrive_media_vfs
    /tv

/mount_rclone/gdrive_media_vfs
    /gdrive_media_vfs
    /tv

 

Any help would be appreciated. I can't figure out why I have the extra gdrive_media_vfs folder.

log.txt 2.54 kB

rclone_config.txt 378 B

mount script.txt 10.27 kB

I think the script doesn't like it when you have only one folder specified in:

 

MountFolders=\{"tv"\}

Try adding a second folder, e.g. movies:

 

MountFolders=\{"movies,tv"\}
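I think it's down to bash brace expansion: the script builds the folders with mkdir -p and a brace list, and bash leaves a single-item brace list unexpanded. A quick illustration of the difference:

# a single item inside braces is taken literally, so you end up with a folder named "{tv}"
mkdir -p /mnt/user/local/gdrive_media_vfs/{tv}
# two or more items expand as intended and create "movies" and "tv"
mkdir -p /mnt/user/local/gdrive_media_vfs/{movies,tv}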

 

1 hour ago, DZMM said:

I think the script doesn't like it when you have only one folder specified in:

 


MountFolders=\{"tv"\}

Try adding a second folder, e.g. movies:

 


MountFolders=\{"movies,tv"\}

 

I went through and did:

 

fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs
fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs

looked at the folders and removed the remaining gdrive_media_vfs folder from /local, /mount_mergerfs, and /mount_rclone

made the script modification you suggested and added MountFolders=\{"movies,tv"}

re-ran the script, and now I again have the extra /gdrive_media_vfs in all of the folders I removed it from above, plus the added movies folder

Edited by lzrdking71


The extra folder could be because, after it was created locally, the upload script added it to gdrive.  Delete it and it shouldn't come back if my theory is correct.
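You can also check what's actually sitting on gdrive directly with something like this (substitute your own remote name):

rclone lsd gdrive_media_vfs: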

58 minutes ago, DZMM said:

The extra folder could be because, after it was created locally, the upload script added it to gdrive.  Delete it and it shouldn't come back if my theory is correct.

I have not run the upload script. I deleted the extra gdrive_media_vfs folder from the /local location, and it disappeared from the other two locations and did not re-appear after running the mount script again. Is this how it should be? It doesn't look like /mount_rclone/gdrive_media_vfs has the /tv or /movies folders now.

Edited by lzrdking71

57 minutes ago, lzrdking71 said:

I have not run the upload script. I deleted the extra gdrive_media_vfs folder from the /local location, and it disappeared from the other two locations and did not re-appear after running the mount script again. Is this how it should be? It doesn't look like /mount_rclone/gdrive_media_vfs has the /tv or /movies folders now.

So, everything is ok now?


@DZMM I'm not sure... lol. At this point I'm not exactly sure whether what I am seeing is how it should be or wrong. What should be in the /mount_rclone/ folder, and what should be in the /local and /mount_mergerfs/ folders, if I have /tv and /movies? I appreciate the help - I was banging my head against the wall until very late last night trying to figure this out.

Edited by lzrdking71

1 hour ago, lzrdking71 said:

@DZMM I'm not sure... lol. At this point I'm not exactly sure whether what I am seeing is how it should be or wrong. What should be in the /mount_rclone/ folder, and what should be in the /local and /mount_mergerfs/ folders, if I have /tv and /movies? I appreciate the help - I was banging my head against the wall until very late last night trying to figure this out.

If you've got:

 

MountFolders=\{"movies,tv"}

then you should have this folder structure:

 

/mnt/user/local/gdrive_media_vfs/movies

/mnt/user/local/gdrive_media_vfs/tv

/mnt/user/mount_rclone/gdrive_media_vfs/movies

/mnt/user/mount_rclone/gdrive_media_vfs/tv

/mnt/user/mount_mergerfs/gdrive_media_vfs/movies

/mnt/user/mount_mergerfs/gdrive_media_vfs/tv

 

If you have any other folders at that level then delete them, unmount, and then remount to check that the script isn't creating any folders by mistake.

 

Once sorted, add the mergerfs folders as the source folders for Plex, Sonarr etc - not the local or the mount_rclone equivalents.  If you need any more folders e.g. /4k_movies, add them to /mnt/user/mount_mergerfs/gdrive_media_vfs/ - you can do this manually of course, but the script is there to help, and I think it helps people understand what's really happening.
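Manually it's just a mkdir against the mergerfs path, e.g.:

mkdir -p /mnt/user/mount_mergerfs/gdrive_media_vfs/4k_movies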

44 minutes ago, DZMM said:

If you've got:

 


MountFolders=\{"movies,tv"}

then you should have this folder structure:

 

/mnt/user/local/gdrive_media_vfs/movies

/mnt/user/local/gdrive_media_vfs/tv

/mnt/user/mount_rclone/gdrive_media_vfs/movies

/mnt/user/mount_rclone/gdrive_media_vfs/tv

/mnt/user/mount_mergerfs/gdrive_media_vfs/movies

/mnt/user/mount_mergerfs/gdrive_media_vfs/tv

 

If you have any other folders at that level then delete them, unmount, and then remount to check that the script isn't creating any folders by mistake.

 

Once sorted, add the mergerfs folders as the source folders for Plex, Sonarr etc - not the local or the mount_rclone equivalents.  If you need any more folders e.g. /4k_movies, add them to /mnt/user/mount_mergerfs/gdrive_media_vfs/ - you can do this manually of course, but the script is there to help, and I think it helps people understand what's really happening.

I have managed to get rid of the extra folder but for whatever reason now I have the following:

 

/mnt/user/local/gdrive_media_vfs/movies

/mnt/user/local/gdrive_media_vfs/tv

/mnt/user/mount_rclone/gdrive_media_vfs/ (no /movies or /tv subfolders, just a 0 B mountcheck file)

/mnt/user/mount_mergerfs/gdrive_media_vfs/movies

/mnt/user/mount_mergerfs/gdrive_media_vfs/tv


That's correct - I just remembered you said you haven't done an upload yet, so there won't be any folders in mount_rclone yet.


I believe I solved my issue with the rclone union mount. It seems to be working as expected now. Instead of caching the union: mount (which was how I had it configured), I cached the crypt: mount, mounted it as a volume (crypt_vfs), and then unioned the local dir with the vfs dir.

 

Now when a file is downloaded to the local folder it isn't cached, and when it's then moved to the cloud via rclone move to crypt:, Plex has no problem playing it from the crypt_vfs mount.

 

I also added crypt_vfs to the union as read only, so when Sonarr/Radarr move files from /mnt/user/download to /mnt/disks/media (the union mount) data only gets written to the local upstream, which avoids the caching.
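The upload itself is still just a plain rclone move against crypt: - roughly this, with the source path being wherever the local files live:

rclone move /mnt/user/media/movies crypt:media/movies --delete-empty-src-dirs -v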

 

[gsuite]
type = drive
client_id = 
client_secret = 
scope = drive
token = 
root_folder_id = 

[crypt]
type = crypt
remote = gsuite:media
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

[local]
type = local
nounc = true

[union]
type = union
upstreams = /mnt/disks/media_vfs:ro /mnt/user/media/
action_policy = epall
create_policy = eplfs
search_policy = all
cache_time = 120

----
mkdir -p /mnt/disks/media
mkdir -p /mnt/disks/media_vfs

rclone mount \
		--allow-other \
		--dir-cache-time 720h \
		--poll-interval 15s \
		--buffer-size 256M \
		--cache-dir=/mnt/disk3/system/rclone/cache \
		--vfs-cache-mode full \
		--vfs-cache-max-size 200G \
		--vfs-cache-max-age 168h \
		--vfs-read-chunk-size 128M \
		--vfs-read-chunk-size-limit off \
		--syslog \
		crypt: /mnt/disks/media_vfs &

rclone mount --allow-other union: /mnt/disks/media &

 

Edited by MowMdown


Is the only downside of not setting /user --> /mnt/user that hardlinks don't work, or does it cause other issues with mergerfs? I am struggling with how to work around my current setup. I would still be using mappings like /downloads or /media for my dockers. It would be like the process below, I believe.

  1. Torrent gets downloaded (located in a /mnt/disks/ unassigned device location)
  2. Torrent gets copied to the mergerfs folder (cache-only share?)
  3. Copied torrent gets uploaded whilst the original is seeding
  4. Delete the seed whenever

 

I want to keep select content local and then upload other content to gdrive. It was my understanding that you shouldn't torrent off the array and that setting that up on unassigned devices was preferred, which is why I went that route originally.

Edited by lzrdking71

5 hours ago, lzrdking71 said:

Is the only downside of not setting /user --> /mnt/user that hardlinks don't work, or does it cause other issues with mergerfs? I am struggling with how to work around my current setup. I would still be using mappings like /downloads or /media for my dockers. It would be like the process below, I believe.

  1. Torrent gets downloaded (located in a /mnt/disks/ unassigned device location)
  2. Torrent gets copied to the mergerfs folder (cache-only share?)
  3. Copied torrent gets uploaded whilst the original is seeding
  4. Delete the seed whenever

 

I want to keep select content local and then upload other content to gdrive. It was my understanding that you shouldn't torrent off the array and that setting that up on unassigned devices was preferred, which is why I went that route originally.

Yes - if your dockers are all using sub-folders of different mappings, you will lose I/O benefits, including hardlinks.
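The way round it is to give all the relevant dockers the same single mapping, so the downloader, Sonarr/Radarr and Plex all see one filesystem and moves/hardlinks never cross a mount boundary. Illustrative only - on unRAID you'd normally set this in each docker template rather than on the command line:

# map the whole of /mnt/user into the container as /user, then point the app at sub-paths
# such as /user/downloads and /user/mount_mergerfs/gdrive_media_vfs inside the container
docker run -d --name binhex-radarr -v /mnt/user:/user binhex/arch-radarr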


Is it possible to modify the scripts and create everything on /mnt/disks? I apologize for all the questions, I am just trying to learn and work through this. I have skimmed through about the first 38 pages or so of this topic so forgive me if the info is in there somewhere.

Edited by lzrdking71

15 minutes ago, lzrdking71 said:

Is it possible to modify the scripts and create everything on /mnt/disks? I apologize for all the questions, I am just trying to learn and work through this. I have skimmed through about the first 38 pages or so of this topic so forgive me if the info is in there somewhere.

Yes - you add your mounts to /mnt/disks.  I had problems with /mnt/disks when I was learning how to do all this, so I've steered clear of it since - others have managed it successfully.
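It's just the share locations at the top of the mount script (and the matching ones in the upload script) that need pointing at /mnt/disks - something like this, with the disk name being whatever your unassigned device is actually called:

RcloneMountShare="/mnt/disks/yourdisk/mount_rclone"
LocalFilesShare="/mnt/disks/yourdisk/local"
MergerfsMountShare="/mnt/disks/yourdisk/mount_mergerfs"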


I'm having issues whenever I try to reboot my unRAID server: on shutdown, the array gets hung on "Retry unmounting user share(s)...".  Below are my current mount, unmount, and upload scripts.  Do you see any issues?  Anything else I can check to see why it's hanging?

 

rclone_mount (runs at startup)


#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.7 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="binhex-deluge binhex-nzbget binhex-nzbhydra2 binhex-radarr binhex-sonarr Plex-Media-Server" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"movies,tvshows"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName #for script files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation
mkdir -p $MergerFSMountLocation

#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
        docker stop $DockerStart
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# make sure mergerfs mount point is empty
        mv $MergerFSMountLocation $LocalFilesLocation
        mkdir -p $MergerFSMountLocation
# mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
            docker stop $DockerStart
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start $DockerStart
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

rclone_unmount (runs at startup)


#!/bin/bash

#######################
### Cleanup Script ####
#######################
#### Version 0.9.2 ####
#######################

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_cleanup script ***"

####### Cleanup Tracking Files #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Removing Tracking Files ***"

find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete
echo "$(date "+%d.%m.%Y %T") INFO: ***Finished Cleanup! ***"

exit

rclone_upload (runs every 10 minutes)


#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
###############################################################################

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
    LocalFilesLocation="$LocalFilesShare"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
    cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
    CounterNumber=$(find -name 'counter*' | cut -c 11,12)
    CounterCheck="1"
    if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
        echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        CounterNumber="1"
    fi
    ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
    echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
    ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
        echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    else
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
        ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    fi
else
    RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
    DeleteEmpty="--delete-empty-src-dirs "
else
    echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
    DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
    LocalFilesLocation="$LocalFilesShare"
    BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
    BackupRemoteLocation=""
    BackupRemoteDeletedLocation=""
    BackupRetention=""
    BackupDir=""
fi

# process files
    rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    --user-agent="$RcloneUploadRemoteName" \
    -vv \
    --buffer-size 512M \
    --drive-chunk-size 512M \
    --tpslimit 8 \
    --checkers 8 \
    --transfers 4 \
    --order-by modtime,$ModSort \
    --min-age $MinimumAge \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --exclude *fuse_hidden* \
    --exclude *_HIDDEN \
    --exclude .recycle** \
    --exclude .Recycle.Bin/** \
    --exclude *.backup~* \
    --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    --bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
    rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
    if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
    else
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        CounterNumber=$((CounterNumber+1))
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
        echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
    fi
else
    echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

Appreciate any help!


Wanna start by saying thanks to all the contributors to this thread!

I've tried searching this topic and going through a bunch of pages, but I'm starting to get sloppy when reading since it's 82 pages atm.
If I wanna use the rclone scripts just for syncing my entire onedrive/gdrive to my share on my unRAID server, which of the settings in the mount script should I use?
I want to have a local copy of everything on my drive, and have changes done locally to update my drive, and updates done in my drive from another source to update my local copy.

I tried using only "RcloneMountShare" from the mount script provided by "BinsonBuzz" https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount, but it doesn't seem to keep it updated, nor does it seem to actually download a copy - it just creates a browsable drive locally, and as soon as I access a file it's downloaded into cache. Or it might just be that I'm too impatient; I can see that I'm using a lot of bandwidth on my unRAID server, but I haven't figured out how to check what process is using the bandwidth :)

Thanks in advance! 

 


I don't know where to start. I have had little to no issues since I used the guide to start using rclone and mount_mergerfs, but today things just went to crap. I am getting issues where everything is showing as unavailable within Plex. At first I tried to simply run the mount script again, which said things were fine. Things seemed to be fine, but within minutes there was nothing available again. I went ahead and shut down my server, used the cleanup script, and restarted everything from scratch. This time it took roughly an hour before I started seeing media unavailable again. I have tried stopping the upload script from running at all and just using the cleanup script and mount script. No errors on either, but nothing is mounting or showing within Plex.

 

Anyway, I know logs are needed - I just can't for the life of me figure out where those are.

Here is the most recent log info when I run the mount_script
 

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Mount script/log.txt

2020/09/23 20:13:30 INFO : vfs cache: cleaned: objects 6 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:14:30 INFO : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item 4kmovies/The Greatest Showman (2017)/emd-thegreatestshowman.2160p.mkv was removed, freed 0 bytes
2020/09/23 20:14:30 INFO : vfs cache: cleaned: objects 5 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:15:30 INFO : vfs cache: cleaned: objects 6 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:16:30 INFO : vfs cache: cleaned: objects 6 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:17:30 INFO : vfs cache: cleaned: objects 6 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:18:30 INFO : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item movies/My Hero Academia Heroes Rising (2019)/My Hero Academia Heroes Rising (2019) Bluray-1080p.mkv was removed, freed 0 bytes
2020/09/23 20:18:30 INFO : vfs cache: cleaned: objects 5 (was 6) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:19:30 INFO : vfs cache: cleaned: objects 5 (was 5) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:20:30 INFO : vfs cache: cleaned: objects 5 (was 5) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:21:30 INFO : vfs cache: cleaned: objects 5 (was 5) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:22:30 INFO : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item movies/Guardians of the Galaxy (2014)/Guardians of the Galaxy (2014) Bluray-1080p.mp4 was removed, freed 0 bytes
2020/09/23 20:22:30 INFO : vfs cache: cleaned: objects 4 (was 5) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2020/09/23 20:23:30 INFO : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item movies/Yoga Hosers (2016)/Yoga Hosers (2016) Bluray-1080p.mp4 was removed, freed 0 bytes
2020/09/23 20:23:30 INFO : vfs cache: cleaned: objects 4 (was 5) in use 0, to upload 0, uploading 0, total size 0 (was 0)
Script Starting Sep 23, 2020 20:23.51

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Mount script/log.txt

23.09.2020 20:23:51 INFO: Creating local folders.
23.09.2020 20:23:51 INFO: *** Starting mount of remote gdrive_vfs
23.09.2020 20:23:51 INFO: Checking if this script is already running.
23.09.2020 20:23:51 INFO: Script not running - proceeding.
23.09.2020 20:23:51 INFO: *** Checking if online
23.09.2020 20:23:52 PASSED: *** Internet online
23.09.2020 20:23:52 INFO: Success gdrive_vfs remote is already mounted.
23.09.2020 20:23:52 INFO: Check successful, gdrive_vfs mergerfs mount in place.
23.09.2020 20:23:52 INFO: dockers already started.
23.09.2020 20:23:52 INFO: Script complete
Script Finished Sep 23, 2020 20:23.52

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Mount script/log.txt

2020/09/23 20:24:30 INFO : vfs cache: cleaned: objects 4 (was 4) in use 0, to upload 0, uploading 0, total size 0 (was 0)

Anyway, currently titles are showing up, but eventually they show as unavailable. Can someone help me make sense of this?

 

Thank you

14 hours ago, BigMal said:

I'm having issues whenever I try to reboot my unRAID server: on shutdown, the array gets hung on "Retry unmounting user share(s)...".  Below are my current mount, unmount, and upload scripts.  Do you see any issues?  Anything else I can check to see why it's hanging?

I rarely have problems now.  I'm not sure what the solution was - possibly upgrading unRAID or being on a newer version of rclone.
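One thing worth checking: the script you posted as rclone_unmount only deletes the tracking files, so if the rclone/mergerfs mounts themselves are still up when the array tries to stop, the user shares can't unmount. A manual unmount run at array stop would be something like this (remote name and paths taken from your mount script - adjust if yours differ):

fusermount -uz /mnt/user/mount_mergerfs/gdrive_vfs
fusermount -uz /mnt/user/mount_rclone/gdrive_vfs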

9 hours ago, martikainen said:

If I wanna use the rclone scripts just for syncing my entire onedrive/gdrive to my share on my unRAID server, which of the settings in the mount script should I use?

 

If you just want to sync, then there's no point mounting as you've already got a local copy.  I would just use the upload script but set it to sync not move:

 

RcloneCommand="sync" # choose your rclone command e.g. move, copy, sync

 

4 hours ago, Hypner said:

Anyway, currently titles are showing up, but eventually they show as unavailable. Can someone help me make sense of this?

Have you tried looking at the folders to see if the files are actually there?  Maybe your mount is dropping.
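A quick way to tell whether the rclone mount is still up is to check for the mountcheck file the script drops in it, and that the mergerfs mount still lists content - paths here assume the gdrive_vfs name from your log:

ls -l /mnt/user/mount_rclone/gdrive_vfs/mountcheck
ls /mnt/user/mount_mergerfs/gdrive_vfs | head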


So I did try that last night and the folders are there locally but not the files. It seems like the mount is dropping. Any solution or idea why? It did it repeatedly last night and after I wrote my post it was fine and is currently fine. Go figure. 

