Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


@markrudling

When you tested the Windows + RaiDrive Plex scan, did you use the same settings? I.e. the same media analysis and chapter/preview thumbnail settings - those can generate lots of traffic. Did you ever test Windows with an rclone mount?

 

Also, I've never used RaiDrive, but rclone likely handles caching and file access differently. With the default mount settings, rclone serves portions of the requested file; if you/Plex request more, it grabs more of the file. It could be that you are seeing expected behavior for an initial scan, which will settle down afterwards. 

 

I don't use (Plex on Windows) + (SMB to unraid). I keep Plex on unraid itself, so I can't be 100% sure about my statements. Maybe somebody with an unraid + Windows-Plex combo can chime in with more info. 

9 hours ago, watchmeexplode5 said:

@JohnJay829

That looks like an error with the rclone plugin and not the scripts. 

 

What version of rclone plugin are you running? Try updating and/or running the beta rclone plugin. 

 

Author: Waseh
Repository: Waseh's Repository
Categories: Backup, Cloud, Tools: Utilities, Plugins
Added to CA: August 25, 2018
Date Updated: November 1, 2019
Current Version: 2019.11.01

 

Is there a different one to use?

6 hours ago, JohnJay829 said:

Current Version: 2019.11.01

 

Is there a different one to use?

Run the command "rclone version", as the plugin might not have installed the latest version of rclone. If it's less than v1.51, uninstall and reinstall the plugin to get the latest.
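If you want to script that check, something like this works (a sketch; the v1.51 threshold comes from the post above):

```shell
#!/bin/bash
# Sketch: check whether an installed rclone version is older than the
# v1.51 minimum mentioned above. `sort -V` orders version strings.
minimum="1.51.0"

needs_update() {
    # success (exit 0) if version $1 sorts strictly before $minimum
    [ "$(printf '%s\n%s\n' "$1" "$minimum" | sort -V | head -n1)" != "$minimum" ]
}

needs_update "1.48.0" && echo "reinstall the plugin to update"
needs_update "1.53.1" || echo "rclone is new enough"
```

In practice you'd feed it the number reported by `rclone version` (the first line looks like "rclone v1.51.0"; strip the leading "v" first).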

 

 


So I have been at this for a couple of hours now. Initially I followed SpaceInvader's video on using rclone, which was great, but I want my other dockers to use my encrypted Google Drive, and that led me to this post. BTW, these scripts look amazing and a lot of people seem to be benefiting from them, which is why I want to get this working. 

 

So where I am at: I put each script in user scripts (mount, unmount, and upload).

I have created 2 remotes in rclone: gdrive, and gdrive_media_vfs, which is encrypted. 

 

 

rclone.conf

 

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = REMOVED
password2 = REMOVED

[gdrive]
type = drive
client_id = REMOVED
client_secret = REMOVED
scope = drive
token = REMOVED

I ended up creating the remotes with the same names used in the scripts so that I don't hit any other snags along the way. 

 

So after running the mount script in the background, I get this error in my logs:

Script Starting May 23, 2020 10:52.42

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt

23.05.2020 10:52:42 INFO: Creating local folders.
23.05.2020 10:52:42 INFO: *** Starting mount of remote gdrive_vfs
23.05.2020 10:52:42 INFO: Checking if this script is already running.
23.05.2020 10:52:42 INFO: Script not running - proceeding.
23.05.2020 10:52:42 INFO: Mount not running. Will now mount gdrive_vfs remote.
23.05.2020 10:52:42 INFO: Recreating mountcheck file for gdrive_vfs remote.
2020/05/23 10:52:42 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "gdrive_vfs:" "-vv" "--no-traverse"]
2020/05/23 10:52:42 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/05/23 10:52:42 Failed to create file system for "gdrive_vfs:": didn't find section in config file
23.05.2020 10:52:42 INFO: *** Creating mount for remote gdrive_vfs
23.05.2020 10:52:42 INFO: sleeping for 5 seconds
2020/05/23 10:52:42 Failed to create file system for "gdrive_vfs:": didn't find section in config file
23.05.2020 10:52:47 INFO: continuing...
23.05.2020 10:52:47 CRITICAL: gdrive_vfs mount failed - please check for problems.
Script Finished May 23, 2020 10:52.47

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount_plugin/log.txt

I'm hoping someone here can help me with this. 
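For anyone comparing the log with the config above: the script is trying to mount gdrive_vfs:, but the config only defines [gdrive_media_vfs] and [gdrive], which would explain the "didn't find section in config file" error - the RcloneRemoteName in the script has to match a config section exactly. A self-contained sketch of that check (using a throwaway config file, not the real one):

```shell
#!/bin/bash
# Sketch reproducing the error above: rclone reports "didn't find section
# in config file" when the remote name passed to it has no matching
# [section] header in rclone.conf. Throwaway config for illustration only.
conf=$(mktemp)
printf '[gdrive_media_vfs]\ntype = crypt\n\n[gdrive]\ntype = drive\n' > "$conf"

remote_defined() {
    grep -q "^\[$1\]" "$conf"   # section headers look like [name]
}

remote_defined "gdrive_vfs"       || echo "gdrive_vfs: not in config"
remote_defined "gdrive_media_vfs" && echo "gdrive_media_vfs: found"
```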


@watchmeexplode5

 

Thanks for taking the time to read my ramble. 

 

Looks like two things are at play. 1. My Ethernet card is broken: very intermittent behaviour, slow then fast then slow. Really frustrating to deal with because it shows no real signs of fault, no errors, just really bad behaviour. Using a USB3 Gigabit adaptor, things are a LOT better.

 

2. Windows, Plex, and remote shares do not seem to play well together. I think Plex asks for the first few MB of the file, but Windows tries to be smart and asks for more. Perhaps because rclone is slower, so Plex/Windows asks for more. Not sure.

 

Anyway, I reinstalled Ubuntu on the second machine, and with the new Ethernet adaptor things are running well. The scan is not fast, but it's acceptable: 3k movies take about 6 hours.

 

When looking at network activity on Unraid, I see many small bursts during the scan. I'm assuming it's fetching a few MB of each file to scan.

 

What I'm confused about is the mount settings in rclone. Does the buffer or chunk size determine how much of the file to fetch? I.e., if Plex asks for the first 10 MB but it's configured for 256 MB, will all 256 MB come through before rclone delivers the 10 MB to Plex? When a Plex scan runs locally there is very little network activity, so I'm assuming it only fetches a smaller portion of the file?

 

The reason I ask is to try to optimise the scanning process. I have 1 Gb internet, so 256 MB comes through pretty fast, but 32 MB or so may be a lot better. Changing any of the values in the mount script doesn't really make any difference to scan speeds.

 

Anyway, thanks for reading.

2 hours ago, markrudling said:

@watchmeexplode5

[...] Does the buffer or chunk size determine how much of the file to bring? IE, if plex asks for the first 10mb but its configured for 256mb, will all 256mb come through before Rclone delivers the 10mb to Plex? [...] I have 1gb internet, so 256mb comes through pretty fast, but 32mb or so may be a lot better.

I don't know how much of a file Plex has to scan to profile it.  If you want to experiment, I think reducing --drive-chunk-size might help.  This controls how big the first chunk requested is - 128M in my settings.  Try 64M and 256M and share how you get on.  I chose 128M because, in my testing, it was the best chunk size on my setup at the time for the fastest playback launch times, i.e. I wasn't trying to optimise scans.

 

Once you've done the first scan it gets a lot faster - almost normal speeds.

 

--buffer-size is just how much of each stream is held in memory; it shouldn't affect scanning.  E.g. 8 streams would be 8 x 256M max for this script.
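(Per the rclone docs, --vfs-read-chunk-size is what sets how much rclone requests per read, doubling each chunk up to --vfs-read-chunk-size-limit, while --drive-chunk-size applies to uploads.) The worst-case memory claim above can be sketched as:

```shell
#!/bin/bash
# Sketch of the worst-case figure above: --buffer-size is held per open
# stream, so peak buffer RAM is roughly streams * buffer-size.
buffer_mib=256   # matches --buffer-size 256M in the mount script
streams=8        # the example number of simultaneous streams

peak=$(( streams * buffer_mib ))
echo "peak buffer RAM: ${peak} MiB"   # prints: peak buffer RAM: 2048 MiB
```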


I'm still working on getting this set up... just realized my service_accounts folder doesn't have a sa_gdrive.json file... do I just put the path in rclone? Also, I'm confused about team_drive... I thought service accounts mean we don't need a team drive?

 

sorry for the newb questions. 

4 hours ago, axeman said:

I'm still working on getting this setup.. just realized my service_accounts folder doesn't have a sa_gdrive.json file... do i just put the path in rclone? also, confused about team_drive .. I thought the service accounts mean we don't need a team drive?

 

sorry for the newb questions. 

You don't have to use teamdrives, but it's recommended you take the extra steps to get them up and running, because after a while most users come across one or more of the following problems:

 

- wanting to upload more than 750GB/day by adding more users

- wanting to share the remote beyond Plex access, or use it on another PC

- performance issues once a lot of content has been loaded, fixed by splitting into multiple teamdrives that are merged locally

 

Read the GitHub post, which best describes how to use SA files.

Edited by DZMM
7 hours ago, DZMM said:

You don't have to use teamdrives.  But, it's recommended you take the extra steps to get them up and running because after a while most users come across one or more of the following problems: [...]

Thanks. I keep starting this and stopping it. I'm going to go back and re-read everything to make sure I'm not missing anything. 

 

Meanwhile... I see a "backup" option. What's the difference between that and Copy/Sync mode? My goal is to end up with a mirrored copy of some of my unraid shares (not all of them) on GDrive, and then use that first-found option you'd mentioned to primarily serve files from the cloud drive. 

4 hours ago, axeman said:

 

Meanwhile... I see a "backup" option what's the difference between that and Copy/Sync

Backup moves files deleted from the local folder to another folder on gdrive for a chosen number of days, so if you accidentally delete something you can restore it from gdrive.
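In rclone terms that's the --backup-dir flag. A sketch of the idea (the paths and retention window here are examples, not the script's actual settings):

```shell
# Move local files up, diverting anything deleted/overwritten on the remote
# into a backup folder instead of losing it (example paths):
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
    --backup-dir gdrive_media_vfs:backup

# Later, prune backups older than the retention window, e.g. 30 days:
rclone delete gdrive_media_vfs:backup --min-age 30d
```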

1 hour ago, DZMM said:

Backup moves files deleted from the local folder to another folder on gdrive for a chosen number of days, so if you accidentally delete you can restore from gdrive

Thanks!

 

I'm still missing something. I went through everything twice. When I tried to configure my remote as a team drive, I get an error:

"No team drives found in your account"

 

What could I be missing? When I log in to drive.google.com, I have a shared drive that I've created. It's shared with the group that was created during step 2 of the service account setup. I ran "python3 add_to_team_drive.py -d XXXXXX" with the shared drive ID. 

1 hour ago, axeman said:

When I tried to configure my remote as a team drive, I get an error: "No team drives found in your account" [...]

Never come across that before.  Maybe try editing the config file manually so it looks something like this:

[tdrive]
type = drive
scope = drive
service_account_file = /mnt/cache/appdata/other/rclone/service_accounts/sa_tdrive.json
team_drive = xxxxxxxxxxxxxxxxxx # look at the url in gdrive for the ID
server_side_across_configs = true

 


Thanks ... moving right along ...

 

I had a failure because Docker wasn't enabled. I enabled it, and now I've got an error:

 

FUSE library version: 2.9.7-mergerfs_2.29.0
using FUSE kernel interface version 7.31
'build/mergerfs' -> '/build/mergerfs'
24.05.2020 23:39:55 INFO: *sleeping for 5 seconds
24.05.2020 23:40:00 INFO: Mergerfs installed successfully, proceeding to create mergerfs mount.
24.05.2020 23:40:00 INFO: Creating gdrive_media_vfs mergerfs mount.
fuse: bad mount point `movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs': No such file or directory
24.05.2020 23:40:00 INFO: Checking if gdrive_media_vfs mergerfs mount created.
24.05.2020 23:40:00 CRITICAL: gdrive_media_vfs mergerfs mount failed.

 

This is from the rclone_mount script. 

3 hours ago, axeman said:

[...] fuse: bad mount point `movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs': No such file or directory [...]

You've entered your paths wrong.  Post your rclone mount settings.

 

Edit: I just reread your post about wanting to mirror but have the cloud copy found first, so I think you messed up the script when you switched the mergerfs paths.  Post your whole mount script please.

Edited by DZMM
On 5/20/2020 at 2:52 PM, Bjur said:

@DZMM @watchmeexplode5 Thanks for the answer. Should I create my download folder in the root of /mnt/mergerfs/, since I have 2 separate mounts like I did locally? I don't think it would make sense to create it in movies and afterwards move the completed files to the other drive if it's not a movie.

Can you follow?

So will it be fine to create the download folder in the root of /mnt/mergerfs/ and move completed files into each of the crypt folders?

4 hours ago, DZMM said:

You've entered your paths wrong.  Post your rclone mount settings.

 

Edit: I just reread your post about wanting to mirror but have the cloud copy found first, so I think you messed up the script when you switched the mergerfs paths.  Post your whole mount script please.

Thanks, that's what I thought too, but I double-checked and it looked OK. Note, I'm only trying one subfolder of my share - perhaps that's the issue? I have a Videos share with folders like 3D Movies, Animation, HD Movies, and TV Series. I started with the smallest of those. 

 

rclone_mount:

 

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/Videos/3D movies" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName #for script files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation
mkdir -p $MergerFSMountLocation

#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 720h \
    --drive-chunk-size 512M \
    --log-level INFO \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    --vfs-cache-mode writes \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed."
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start $DockerStart
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

29 minutes ago, axeman said:

LocalFilesShare="/mnt/user/Videos/3D movies"

 

It doesn't like the space in "3D movies", hence the error:

 

5 hours ago, DZMM said:

fuse: bad mount point `movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs': No such file or directory

 

My script and I aren't smart enough to account for this - I try to avoid paths with spaces for exactly this reason and always use underscores etc.

 

Change the path and you'll be fine.
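The underlying cause is ordinary shell word splitting: the script expands the path unquoted while building the mergerfs command, so the space splits one argument into two. A minimal, self-contained illustration (not the script itself):

```shell
#!/bin/bash
# Unquoted expansion splits "/mnt/user/Videos/3D movies" into two
# arguments - which is why mergerfs saw a mount point that appeared
# to start at "movies/".
path="/mnt/user/Videos/3D movies"

count_args() { echo $#; }

count_args $path      # unquoted: 2 words
count_args "$path"    # quoted:   1 word
```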

1 hour ago, DZMM said:

It doesn't like the space in "3D movies"  hence the error:

 

My script and me aren't smart enough to account for this - I try not to have paths with spaces to avoid these problems and always use underscores etc.

 

Change the path and you'll be fine

Thanks - I was afraid of that... but figured, hey, it's in quotes so it should be fine. But just so I understand: the individual movie folders underneath are fine to have spaces, right? 

 

Also, assuming I go full tilt a year from now and want to do the full "Videos" share... will that end up creating duplicates of the 3D Movies folder? 

26 minutes ago, DZMM said:

@axeman paths not referenced in the script will be fine.

 

Don't really understand the 2nd part.  If you move the files you won't get duplicates.

So my UnRaid Array looks like this:

 

\\server\Videos\3D_Movies

\\server\Videos\Animation

\\server\Videos\TV_Shows

 

 

For now, I just started with /videos/3D_Movies to see how this all works. Say a month from now I decide this is all great and I want to do my whole array - can I just change the path in rclone_mount to /videos? 

 

So I ran the script through the UnRaid GUI... I see it reporting 14.874 MBytes/s... I'm guessing that's mbits? My upload speed is only 40 mbit max, which translates to about 5 MBytes/sec. 

 

 

Edited by axeman
1 hour ago, axeman said:

Can I just change the path on rclone_mount to the /videos ? 

 

If you want to, and if it won't mess up, say, your Plex scans.  Or you can add the other paths to the mergerfs mount as LocalFilesShare2 (I think it is).  I honestly think you're approaching this wrong.  What I would do is just add /mnt/user/Videos to your mergerfs mount and then exclude the paths you don't want uploading yet, e.g. /mnt/user/Videos/Animation, and remove the exclusion if you change your mind.  Otherwise you'll probably end up wasting a lot of time and creating hassle with rescanning paths.
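With rclone's filter flags, that kind of exclusion could look something like this (illustrative paths; the upload script's own exclusion settings may differ):

```shell
# Upload everything under Videos except folders you aren't ready to move;
# drop an --exclude line later to let that folder start uploading.
rclone move "/mnt/user/Videos" gdrive_media_vfs: \
    --exclude "Animation/**" \
    --exclude "TV_Shows/**"
```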

 

1 hour ago, axeman said:

So I ran the script through UnRaid gui... i see it reporting 14.874 MBytes/s

 
 
 

It can't exceed your physical upload speed.  Maybe it's buffering the first minute or so of files to RAM, but after a few minutes, if you haven't set a bwlimit, it will drop to less than 5MB/s.
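The limit DZMM mentions is rclone's --bwlimit flag, which takes a single value or a timetable (the values below are examples):

```shell
# Hold uploads to 4M at all times:
rclone move /mnt/user/local gdrive_media_vfs: --bwlimit 4M

# Or on a schedule: 2M during the day, unlimited overnight:
rclone move /mnt/user/local gdrive_media_vfs: --bwlimit "08:00,2M 23:00,off"
```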

Edited by DZMM
2 minutes ago, DZMM said:

What I would do is just add /mnt/user/Videos to your mergerfs mount and then exclude the paths you don't want uploading yet [...]

It can't exceed your physical upload speed.

Thank you for your continued patience with this... I agree - the approach was wrong. I'm going to restart. What's the best way to stop the upload that's currently running? Should I run the cleanup script, or is there another way?

49 minutes ago, axeman said:

What's the best way to stop the upload that's currently running?

That's one I don't know how to stop - when I've messed stuff up in the past I've rebooted.  Or, you could temporarily change the name of the path to force the upload to finish - messy, I know.

29 minutes ago, DZMM said:

That's one I don't know how to stop - when I've messed stuff up in the past I've rebooted.  Or, you could temporarily change the name of the path to force the upload to finish - messy, I know.

heh, okay - I was afraid of rebooting. Will try that.

 

Can I get stupid(er) for a minute? This could be a killer setup for a camera DVR if we could set the upload schedule separately per share. For a DVR setup, push to the cloud as quickly as possible, so in case of a robbery the data has already been pushed offsite. 

 

Also wondering if it's OK to point the mount at a cache drive, so that when we're streaming a movie it doesn't write to an array share? 

Edited by axeman