Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


@francrouge,
Short answer: it supports hard links, it's actively developed and maintained, and it's easier to clean up the products of merged directories (fewer scripts for the end user). It's generally agreed to be the better choice for our typical rclone use case.
 
Longer answer, from trapexit's GitHub: UnionFS is more like aufs than mergerfs in that it offers overlay / CoW features. If you're just looking to create a union of drives and want flexibility in file/directory placement then mergerfs offers that, whereas unionfs is more for overlaying RW filesystems over RO ones.
Thanks, I'll try it then.

Sent from my Pixel 2 XL using Tapatalk

Link to comment
13 hours ago, watchmeexplode5 said:

@DZMM @Bjur I often unpack and write to the local mount because of a minor performance hit on a FUSE filesystem, but that hit is very small. It's easiest to follow DZMM's advice and do most of your work in the mergerfs mount.

 

@DZMM @watchmeexplode5 Thanks for the answer. Since I have two separate mounts (like I did locally), should I create my download folder in the root of /mnt/mergerfs/? I don't think it makes sense to download into movies and then move the completed files to the other drive if they're not movies.

Does that make sense?

Link to comment

I love the idea of this plugin and the simplicity of getting it all set up and running smoothly. I greatly appreciate the work that has gone into it!

 

One quick question: I have the upload script copying (vs. moving) my libraries to a team share now. If I want to give the remote drive playback a trial, could I just delete a sample file from the /mnt/user/local/gcrypt folder? This should leave the file in the mergerfs mount, and then playback would occur via the team drive, right?
 

I am thinking of writing a simple age-off script where files older than 30 days are removed locally (via /mnt/user/local), so they live only in the team share. Thanks again!

Edited by pgbtech
Typo
Link to comment
53 minutes ago, pgbtech said:

If I want to give the remote drive playback a trial, could I just delete a sample file from the /mnt/user/local/gcrypt folder?

Yes. mergerfs looks at the local branch first, so if you want to play the cloud copy, you need to delete the local copy.
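To illustrate (hypothetical paths - adjust to your setup): the local branch is listed first in the mergerfs mount, so deleting the local file just exposes the cloud copy at the same merged path.

# Hypothetical paths for illustration only
ls /mnt/user/mount_mergerfs/gcrypt/movies/sample.mkv   # currently served from the local branch
rm /mnt/user/local/gcrypt/movies/sample.mkv            # remove the local copy (it must already exist on the team drive)
ls /mnt/user/mount_mergerfs/gcrypt/movies/sample.mkv   # still visible - now read through the rclone mount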

 

53 minutes ago, pgbtech said:

I am thinking of writing a simple age-off script where files older than 30 days are removed locally (via /mnt/user/local), so they live only in the team share. Thanks again!

The script already does this - just set the upload script to 'move' and then the MinimumAge to 30d.
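Under the hood that boils down to something like this (a rough sketch only - I'm assuming a remote named gcrypt to match your /mnt/user/local/gcrypt folder, and the script's real command has more flags):

# Sketch of what 'move' + MinimumAge 30d amounts to - not the script's exact command
rclone move /mnt/user/local/gcrypt gcrypt: --min-age 30d
# Files younger than 30 days stay local; older files are moved to the remote and
# remain visible at the same path through the mergerfs mount.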

Edited by DZMM
Link to comment

Hi everyone.

 

I'm looking for some assistance. Plex scans are very slow when run over SMB from another computer; Plex running in Docker on my Unraid machine is acceptable.

 

I have quite a few users with slow connections, so I have a second i7 machine running Windows 10 and Plex that I send them to, leaving the Unraid server some headroom to do everything else it does.

 

The Windows PC has a read-only mapped network drive pointing to the gdrive folder in the mount_mergerfs share on Unraid. Browsing this share can be slow, though sometimes it's fairly fast. Copying from the share can be fast but is very intermittent; most of the time I get the full 200 meg copy speed, so this is acceptable.

 

When running the scan, network activity on the Windows PC is as expected, fairly low. However, network activity on Unraid and my router goes nuts. What seems to be happening is that Plex on the Windows PC is scanning the directory, asking for just a bit of each file, and rclone/Unraid is attempting to serve much more of the file, meaning each file takes a long time to scan.

 

I have tested the Windows PC with a RaiDrive mount, and the scans through there are VERY fast, using only 1-3 meg of my line.

 

I think Windows and Unraid are not playing well together in this configuration.

 

Can anyone offer some settings or advice? My mount settings are stock.

 

 

Link to comment

Using:

### Upload Script ####
######################
### Version 0.95.5 ###

 

The script starts fine but doesn't complete. I get this in the readout:

/usr/sbin/rclone: line 3: 18008 Killed rcloneorig --config $config "$@"
21.05.2020 15:53:14 INFO: Created counter_20 for next upload run.
21.05.2020 15:53:14 INFO: Script complete
Script Finished May 21, 2020 15:53.14

Link to comment

@markrudling

When you tested the Windows + RaiDrive Plex scan, did you use the same settings, i.e. the same media analysis and chapter/preview thumbnail settings? Those can generate lots of traffic. Did you ever test Windows with an rclone mount?

 

Also, I've never used RaiDrive, but rclone likely functions differently with regard to caching and file access. With the default mount settings, rclone serves portions of the requested file; if you/Plex requests more, it grabs more of the file. It could be that you're seeing expected behavior on an initial scan, which will settle down afterwards.

 

I don't use Plex-on-Windows + SMB-to-Unraid; I keep Plex on Unraid itself, so I can't be 100% sure of my statements. Maybe somebody with an Unraid + Windows Plex combo can chime in with more info.

Link to comment
9 hours ago, watchmeexplode5 said:

@JohnJay829

That looks like an error with the rclone plugin and not the scripts. 

 

What version of the rclone plugin are you running? Try updating and/or running the beta rclone plugin.

 

Author: Waseh
Repository: Waseh's Repository
Categories: Backup, Cloud, Tools: Utilities, Plugins
Added to CA: August 25, 2018
Date Updated: November 1, 2019
Current Version: 2019.11.01

 

Is there a different one to use?

Link to comment
6 hours ago, JohnJay829 said:

Current Version: 2019.11.01

Is there a different one to use?

Run the command "rclone version", as the plugin might not have installed the latest version of rclone. If it's older than v1.51, uninstall and reinstall the plugin to get the latest.
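For example (the output below is just the general shape of the result, not from your system):

rclone version
# rclone v1.51.0
# - os/arch: linux/amd64
# - go version: go1.13.7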

 

 

Link to comment

I've been at this for a couple of hours now. I initially followed SpaceInvader One's video on using rclone, which was great, but I want my other Dockers to use my encrypted Google Drive, and that led me to this post. BTW, these scripts look amazing and it seems a lot of people are benefiting from them, which is why I want to get this working.

 

So where I'm at: I've put each script (mount, unmount, and upload) in User Scripts.

I have created two remotes in rclone: gdrive, and gdrive_media_vfs, which is encrypted.

 

 

rclone.conf

 

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = REMOVED
password2 = REMOVED

[gdrive]
type = drive
client_id = REMOVED
client_secret = REMOVED
scope = drive
token = REMOVED

I ended up creating the remotes with the same names used in the scripts so that I don't hit any other snags along the way.

 

After running the mount script in the background, I get this error in my logs:

Script Starting May 23, 2020 10:52.42

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt

23.05.2020 10:52:42 INFO: Creating local folders.
23.05.2020 10:52:42 INFO: *** Starting mount of remote gdrive_vfs
23.05.2020 10:52:42 INFO: Checking if this script is already running.
23.05.2020 10:52:42 INFO: Script not running - proceeding.
23.05.2020 10:52:42 INFO: Mount not running. Will now mount gdrive_vfs remote.
23.05.2020 10:52:42 INFO: Recreating mountcheck file for gdrive_vfs remote.
2020/05/23 10:52:42 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "gdrive_vfs:" "-vv" "--no-traverse"]
2020/05/23 10:52:42 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/05/23 10:52:42 Failed to create file system for "gdrive_vfs:": didn't find section in config file
23.05.2020 10:52:42 INFO: *** Creating mount for remote gdrive_vfs
23.05.2020 10:52:42 INFO: sleeping for 5 seconds
2020/05/23 10:52:42 Failed to create file system for "gdrive_vfs:": didn't find section in config file
23.05.2020 10:52:47 INFO: continuing...
23.05.2020 10:52:47 CRITICAL: gdrive_vfs mount failed - please check for problems.
Script Finished May 23, 2020 10:52.47

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt

I'm hoping someone here can help me with this. 

Link to comment

@watchmeexplode5

 

Thanks for taking the time to read my ramble. 

 

Looks like two things are at play. 1. My Ethernet card is broken: very intermittent behaviour, slow then fast then slow. Really frustrating to deal with because it shows no real signs of fault and no errors, just really bad behaviour. Using a USB 3 Gigabit adaptor, things are a LOT better.

 

2. Windows, Plex and remote shares do not seem to play well together. I think Plex asks for the first few MB of the file, but Windows tries to be smart and asks for more, perhaps because rclone is slower so Plex/Windows requests more. Not sure.

 

Anyway, I reinstalled Ubuntu on the second machine, and with the new Ethernet adaptor things are running well. The scan is not fast, but it's acceptable: 3k movies take about 6 hours.

 

Looking at network activity on Unraid, I see many small bursts during the scan; I assume it's fetching a few MB of each file to scan.

 

What I'm confused about is the mount settings and rclone. Does the buffer or chunk size determine how much of the file to fetch? I.e., if Plex asks for the first 10 MB but the mount is configured for 256 MB, will all 256 MB come through before rclone delivers the 10 MB to Plex? When Plex scans local files there is very little network activity, so I'm assuming it only pulls a small portion of each file?

 

The reason I ask is to try to optimise the scanning process. I have 1 Gb internet, so 256 MB comes through pretty fast, but 32 MB or so might be a lot better. Changing any of the values in the mount script doesn't really make any difference to scan speeds.

 

Anyway, thanks for reading.

Link to comment
2 hours ago, markrudling said:

@watchmeexplode5 Thanks for taking the time to read my ramble. [...] What I'm confused about is the mount settings and rclone. Does the buffer or chunk size determine how much of the file to fetch? I.e., if Plex asks for the first 10 MB but the mount is configured for 256 MB, will all 256 MB come through before rclone delivers the 10 MB to Plex? [...] The reason I ask is to try to optimise the scanning process.

I don't know how much of a file Plex has to scan to profile it.  If you want to experiment, I think reducing --vfs-read-chunk-size might help.  This controls how big the first chunk rclone requests is - 128M in my settings.  Try 64M and 256M and share how you get on.  I chose 128M because in my testing it was the best chunk size on my setup at the time for the fastest playback launch times, i.e. I wasn't trying to optimise scans.

 

Once you've done the first scan it gets a lot faster - almost normal speeds.

 

--buffer-size is just how much of each stream is held in memory; it shouldn't affect scanning.  E.g. 8 streams would be 8 x 256M max with this script's settings.
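For reference, here's roughly how those flags fit together in a mount command (illustrative values, not a recommendation):

# --vfs-read-chunk-size        size of the first ranged request when a file is opened
# --vfs-read-chunk-size-limit  'off' lets subsequent chunks keep doubling with no cap
# --buffer-size                per-open-file read-ahead held in memory
rclone mount \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 256M \
  gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs &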

Link to comment

I'm still working on getting this set up. I just realized my service_accounts folder doesn't have an sa_gdrive.json file. Do I just put the path in rclone? I'm also confused about team_drive: I thought service accounts meant we don't need a team drive?

 

sorry for the newb questions. 

Link to comment
4 hours ago, axeman said:

I'm still working on getting this set up. I just realized my service_accounts folder doesn't have an sa_gdrive.json file. Do I just put the path in rclone? I'm also confused about team_drive: I thought service accounts meant we don't need a team drive?

 

sorry for the newb questions. 

You don't have to use teamdrives.  But it's recommended that you take the extra steps to get them up and running, because after a while most users come across one or more of the following problems:

 

- want to upload more than 750GB/day by adding more users

- want to share the remote itself, not just Plex access, or use it on another PC

- want to fix performance issues once a lot of content has been loaded, by splitting it across multiple teamdrives that are merged locally

 

Read the GitHub post, which best describes how to use the SA files.

Edited by DZMM
Link to comment
7 hours ago, DZMM said:

You don't have to use teamdrives. But it's recommended that you take the extra steps to get them up and running [...] Read the GitHub post, which best describes how to use the SA files.

Thanks. I keep starting this and stopping it. I'm going to go back and re-read everything to make sure I'm not missing anything. 

 

Meanwhile... I see a "backup" option. What's the difference between that and copy/sync mode? My goal is to end up with a mirrored copy of some of my Unraid shares (not all of them) on Gdrive, and then use the first-found option you mentioned to serve files primarily from the cloud drive.

Link to comment
4 hours ago, axeman said:

 

Meanwhile... I see a "backup" option. What's the difference between that and copy/sync mode?

Backup moves files deleted from the local folder to another folder on gdrive and keeps them for a chosen number of days, so if you accidentally delete something you can restore it from gdrive.
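In plain rclone terms it works roughly like this (hypothetical folder names and retention - the script's actual commands and paths differ):

# Rough illustration only - not the script's exact commands or folder names
# With sync + --backup-dir, files you delete locally are removed from the main remote
# path but diverted into a separate folder instead of being lost:
rclone sync /mnt/user/local/gdrive_media_vfs/backup gdrive_media_vfs:backup --backup-dir gdrive_media_vfs:deleted
# Later, purge diverted files older than your chosen retention:
rclone delete gdrive_media_vfs:deleted --min-age 30d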

Link to comment
1 hour ago, DZMM said:

Backup moves files deleted from the local folder to another folder on gdrive and keeps them for a chosen number of days, so if you accidentally delete something you can restore it from gdrive.

Thanks!

 

I'm still missing something. I went through everything twice. When I try to configure my remote as a team drive, I get an error:

"No team drives found in your account"

 

What could I be missing? When I log in to drive.google.com, I have a shared drive that I've created. It's shared with the group that was created during step 2 of the service account setup. I ran "python3 add_to_team_drive.py -d XXXXXX" and added the shared drive ID there.

Link to comment
1 hour ago, axeman said:

Thanks!

 

I'm still missing something. I went through everything twice. When I try to configure my remote as a team drive, I get an error:

"No team drives found in your account"

 

What could I be missing? When I log in to drive.google.com, I have a shared drive that I've created. It's shared with the group that was created during step 2 of the service account setup. I ran "python3 add_to_team_drive.py -d XXXXXX" and added the shared drive ID there.

I've never come across that before.  Maybe try editing the config file manually to look something like this:

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/cache/appdata/other/rclone/service_accounts/sa_tdrive.json
team_drive = xxxxxxxxxxxxxxxxxx
server_side_across_configs = true

(for the team_drive value, use the ID from the shared drive's URL in Google Drive)
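Once that's in, a quick sanity check from the terminal (assuming you keep the tdrive name from the example above):

rclone lsd tdrive:
# Should list the top-level folders of the shared drive. If the team_drive ID is wrong
# or the service account hasn't been added to the drive, this errors instead.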

 

Link to comment

Thanks ... moving right along ...

 

I had a failure because Docker wasn't enabled. I fixed that, and now I've got an error:

 

FUSE library version: 2.9.7-mergerfs_2.29.0
using FUSE kernel interface version 7.31
'build/mergerfs' -> '/build/mergerfs'
24.05.2020 23:39:55 INFO: *sleeping for 5 seconds
24.05.2020 23:40:00 INFO: Mergerfs installed successfully, proceeding to create mergerfs mount.
24.05.2020 23:40:00 INFO: Creating gdrive_media_vfs mergerfs mount.
fuse: bad mount point `movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs': No such file or directory
24.05.2020 23:40:00 INFO: Checking if gdrive_media_vfs mergerfs mount created.
24.05.2020 23:40:00 CRITICAL: gdrive_media_vfs mergerfs mount failed.

 

This is from the rclone_mount script.

Link to comment
3 hours ago, axeman said:

I had a failure because Docker wasn't enabled. I fixed that, and now I've got an error:

fuse: bad mount point `movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs': No such file or directory
24.05.2020 23:40:00 CRITICAL: gdrive_media_vfs mergerfs mount failed.

This is from the rclone_mount script.

You've entered your paths wrong.  Post your rclone mount settings.

 

Edit: I just reread your post about wanting to mirror but have the cloud copy found first, so I think you messed up the script when you switched the mergerfs paths.  Post your whole mount script please.

Edited by DZMM
Link to comment
On 5/20/2020 at 2:52 PM, Bjur said:

@DZMM @watchmeexplode5 Thanks for the answer. Since I have two separate mounts (like I did locally), should I create my download folder in the root of /mnt/mergerfs/? I don't think it makes sense to download into movies and then move the completed files to the other drive if they're not movies.

Does that make sense?

So will it be fine to create the download folder in the root of /mnt/mergerfs/ and move completed files into each of the crypt folders?

Link to comment
4 hours ago, DZMM said:

You've entered your paths wrong.  Post your rclone mount settings.

 

Edit: I just reread your post about wanting to mirror but have the cloud copy found first, so I think you messed up the script when you switched the mergerfs paths.  Post your whole mount script please.

Thanks, that's what I thought too, but I double-checked and it looked OK. Note that I'm only trying one subfolder in my share; perhaps that's the issue? I have a Videos share with folders like 3D Movies, Animation, HD Movies, and TV Series. I started out with the smallest of those.

 

rclone_mount:

 

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/Videos/3D movies" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Location for mergerfs mount

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName #for script files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation
mkdir -p $MergerFSMountLocation

#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# mergerfs mount command
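# Branch order matters: $LocalFilesLocation is listed first, so existing files are served
# from the local path before falling back to the rclone mount.
# category.create=ff ("first found") writes new files to the first branch listed (the local path),
# and func.getattr=newest returns attributes from the most recently modified copy.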
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed."
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start $DockerStart
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

Link to comment
