Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

1 hour ago, crazyhorse90210 said:

DZMM, am I wrong in thinking the CommandX lines are not used anywhere? I can't find those variables anywhere else in the script. What is the supposed function of adding anything into the CommandX lines?


# Add extra commands or filters
Command1="--rc"

 

Hmm, they used to work - I must have deleted them by accident. I'm going to do some work on the script soon, so I'll add this to the list of things to fix.
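For reference, the intended mechanism (visible in the upload script quoted further down the thread) is that each CommandX variable holds an extra rclone flag or filter and is expanded, unquoted, into the final rclone call. A minimal sketch, with placeholder paths and remote name:

#!/bin/bash

# Add extra commands or filters
Command1="--rc"        # e.g. enable rclone's remote control API
Command2="--dry-run"   # e.g. trial a run without actually moving anything
Command3=""            # unused slots stay empty and expand to nothing

# later, the script splices the variables straight into the rclone call:
rclone move /mnt/user/local/gdrive gdrive: \
    $Command1 $Command2 $Command3 \
    -vv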


I am trying to use the rclone_mount, rclone_unmount and rclone_upload scripts, but I am having difficulty seeing which values I have to adjust for my personal setup.

 

My setup is as following:

 

- remote gdrive1 with encrypted remote secure1 mounted under /mnt/disks/gdrive1 & /mnt/disks/secure1 for movies

- remote gdrive2 with encrypted remote secure2 mounted under /mnt/disks/gdrive2 & /mnt/disks/secure2 for tvshows

- Completed movie downloads are located in /mnt/disks/192.168.178.38_Downloads/complete-downloads/movies

- Completed tvshows downloads are located in /mnt/disks/192.168.178.38_Downloads/complete-downloads/tv

 

So far I have changed the below settings.

 

# REQUIRED SETTINGS
RcloneRemoteName="secure1"
RcloneMountShare="/mnt/disks/secure1"
LocalFilesShare="/mnt/disks/192.168.178.38_Downloads/complete-downloads/"
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="Plex-Media-Server sonarr radarr sabnzbd" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"??"\} # comma separated list of folders to create within the mount

 

Can anyone point me in the right direction? And is there anything additional I need to change?
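A sketch only, not a definitive answer (every path below is taken from the post above or assumed): the usual pattern is to run the mount script once per remote, changing RcloneRemoteName between runs. Note that the scripts append the remote name to the share paths, which matters for where you point them:

# Run 1 - movies
RcloneRemoteName="secure1"
RcloneMountShare="/mnt/disks"   # the script mounts at $RcloneMountShare/$RcloneRemoteName,
                                # i.e. /mnt/disks/secure1
LocalFilesShare="/mnt/disks/192.168.178.38_Downloads/complete-downloads"

# Run 2 - tvshows: identical apart from the remote name
RcloneRemoteName="secure2"

# the upload script likewise moves from "$LocalFilesShare/$RcloneRemoteName",
# so completed movies would be expected under .../complete-downloads/secure1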

On 10/31/2020 at 8:43 AM, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built-in rclone union backend/mount is quite good and much less complicated than the mergerFS setup.

I found a way that lets you use rclone's VFS caching with the cloud mount and the local storage in tandem. Movies play instantly and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

I would like to try your setup, please.


@DZMM I'm trying to get the upload script to work, but I get an error. I am trying to get my team drive to upload. I have my gdrive (plex_vfs) as the main mergerfs and my team drive (shedbox_vfs) set to "ignore" on mergerfs. So my upload setup is as follows: 

Quote

#!/bin/bash

RcloneCommand="move"
RcloneRemoteName="shedbox_vfs"
RcloneUploadRemoteName="plex_vfs"
LocalFilesShare="/mnt/user/local/plex_vfs"
RcloneMountShare="/mnt/user/mount_rclone/plex_vfs"
MinimumAge="15m"
ModSort="ascending"
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# Use Service Accounts.
UseServiceAccountUpload="N"
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service/"
ServiceAccountFile="sa_gdrive_upload.json"
CountServiceAccounts="15"

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
    LocalFilesLocation="$LocalFilesShare"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
    cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
    CounterNumber=$(find -name 'counter*' | cut -c 11,12)
    CounterCheck="1"
    if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
        echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        CounterNumber="1"
    fi
    ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
    echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
    ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
        echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    else
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
        ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    fi
else
    RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
    DeleteEmpty="--delete-empty-src-dirs "
else
    echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will not add --delete-empty-src-dirs to upload."
    DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
    LocalFilesLocation="$LocalFilesShare"
    BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
    BackupRemoteLocation=""
    BackupRemoteDeletedLocation=""
    BackupRetention=""
    BackupDir=""
fi

# process files
    rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    --user-agent="$RcloneUploadRemoteName" \
    -vv \
    --buffer-size 512M \
    --drive-chunk-size 512M \
    --tpslimit 8 \
    --checkers 8 \
    --transfers 4 \
    --order-by modtime,$ModSort \
    --min-age $MinimumAge \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --exclude *fuse_hidden* \
    --exclude *_HIDDEN \
    --exclude .recycle** \
    --exclude .Recycle.Bin/** \
    --exclude *.backup~* \
    --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    --bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
    rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
    if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
    else
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        CounterNumber=$((CounterNumber+1))
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
        echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
    fi
else
    echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit
 

The error I get is: 

Quote


Script Starting Nov 01, 2020 15:58.34

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

01.11.2020 15:58:34 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs/plex_vfs for shedbox_vfs ***
01.11.2020 15:58:34 INFO: *** Starting rclone_upload script for shedbox_vfs ***
01.11.2020 15:58:34 INFO: Script not running - proceeding.
01.11.2020 15:58:34 INFO: Checking if rclone installed successfully.
01.11.2020 15:58:34 INFO: rclone not installed - will try again later.
Script Finished Nov 01, 2020 15:58.34

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

Script Starting Nov 01, 2020 16:00.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

01.11.2020 16:00:01 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs/plex_vfs for shedbox_vfs ***
01.11.2020 16:00:01 INFO: *** Starting rclone_upload script for shedbox_vfs ***
01.11.2020 16:00:01 INFO: Script not running - proceeding.
01.11.2020 16:00:01 INFO: Checking if rclone installed successfully.
01.11.2020 16:00:01 INFO: rclone not installed - will try again later.
Script Finished Nov 01, 2020 16:00.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

Script Starting Nov 01, 2020 16:01.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

01.11.2020 16:01:03 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs/shedbox_vfs for plex_vfs ***
01.11.2020 16:01:03 INFO: *** Starting rclone_upload script for plex_vfs ***
01.11.2020 16:01:03 INFO: Script not running - proceeding.
01.11.2020 16:01:03 INFO: Checking if rclone installed successfully.
01.11.2020 16:01:03 INFO: rclone not installed - will try again later.
Script Finished Nov 01, 2020 16:01.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
 

 

Edited by animeking

I'm very time-limited this week, but here is a crude setup of what you need to do.

 

Using rclone union instead of the mergerFS setup (this setup assumes you are already familiar with rclone mounts and are using the latest version, 1.53.2):

 

My mount script is pretty straightforward. I create the two directories needed: the first is used to mount the cloud remote, which will utilize rclone's VFS caching, and the second mount will unionize the VFS mount with the local media.

 

#!/bin/bash

mkdir -p /mnt/disks/media_vfs
mkdir -p /mnt/disks/media

rclone mount --allow-other \
--dir-cache-time 100h \
--fast-list \
--poll-interval 15s \
--cache-dir=/mnt/user/system/rclone \
--vfs-cache-mode full \
--vfs-cache-max-size 500G \
--vfs-cache-max-age 168h \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit off \
--vfs-read-ahead 128M \
crypt: /mnt/disks/media_vfs &

rclone mount --allow-other union: /mnt/disks/media &

In the first rclone mount command I'm using my "crypt:" rclone remote; you will need to replace --> crypt: <-- with your own.

You must edit the "--cache-dir=" variable to where you want rclone to cache your media on your local unraid machine, as well as "--vfs-cache-max-size" to the largest size you are willing to cache on your disk. All the other VFS flags should remain the same.
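For example (the path and size below are assumptions - use whatever disk you can spare), those two lines in the mount command above might become:

--cache-dir=/mnt/cache/rclone \     # assumed: an SSD-backed cache pool
--vfs-cache-max-size 250G \         # keep comfortably below the pool's free space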

 

Now the next step is using rclone config to create the "union" remote needed to union the VFS mount with the local media directory. 

 

Enter rclone config and select "n" for a new remote and name it union, then select the union option.

 

It's going to ask for the "upstreams": first type the local path to your media, then a space, then the path to the mount location we just made, /mnt/disks/media_vfs. I personally add the :nc modifier to avoid accidentally creating files on the cloud mount. 

 

Next rclone will ask for the action_policy; enter ff

Next will be the create_policy; enter ff

Next will be the search_policy; enter all

Last will be the cache_time; leave the 120 default

 

Once it's done it should look something like this:

[union]
type = union
upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:nc
action_policy = ff
create_policy = ff
search_policy = all
cache_time = 120

Remember to replace /plexmedia/ with the root of your media location. (Your remote should follow the same directory structure, or this may cause issues.)

 

Once you actually mount the mounts after the union is created, you should be able to browse "/mnt/disks/media" (the non-_vfs media) and see a complete list of all your media, whether it lives in the cloud or locally.

 

One last thing: you will need to change your dockers' paths from /mnt/user/etc/etc/etc/ to /mnt/disks/media/ so they read from this mount. You will also need to change them from Read/Write to RW/Slave.

 

To unmount at array shutdown:

 

#!/bin/bash

fusermount -uz /mnt/disks/media
fusermount -uz /mnt/disks/media_vfs

 

Edited by MowMdown
On 11/2/2020 at 5:15 PM, MowMdown said:

 

Thanks for typing this up @MowMdown, with your limited time. I got it started, but have a couple of questions (for you or anyone else here). 

 

Quote

mkdir -p /mnt/disks/media_vfs
mkdir -p /mnt/disks/media

I don't have any unassigned disks, so I put mine in /mnt/user/media_vfs and /mnt/user/media. I can't seem to see it as a "share" in unraid. Is that why? If I go into MC, I see everything like I'd want to see it, and MC even recognizes the cloud files (they're in green). 

 

Quote

I personally add the :nc modifier to avoid accidentally creating files on the cloud mount. 

When I add that, I don't see the cloud files. I too want to avoid accidentally creating files there, because I'm guessing Emby/Plex would write unencrypted metadata, defeating the whole point of encrypting. Can you clarify a bit?

 

Quote

You will also need to change them from Read/Write to RW/Slave.

 

I'm not sure I understand where to do this. 

 

Thanks again! 

Quote

I don't have any unassigned disks, so I put mine in /mnt/user/media_vfs and /mnt/user/media. I can't seem to see it as a "share" in unraid. Is that why? If I go into MC, I see everything like I'd want to see it, and MC even recognizes the cloud files (they're in green).

 

You don't need any physical unassigned devices - I don't have any. It's just where I mounted my cloud mounts.

 

1 hour ago, axeman said:

When I add that, I don't see the cloud files. I too want to avoid accidentally creating files there, because I'm guessing Emby/Plex would write unencrypted metadata, defeating the whole point of encrypting. Can you clarify a bit?

Post your "Union" rclone config like I did above.

 

1 hour ago, axeman said:
Quote

You will also need to change them from Read/Write to RW/Slave.

 

I'm not sure I understand where to do this. 

When using /mnt/disks/ as your path in the docker configs, unraid will throw a warning that the path is not using the Slave option. If you edit the docker container config and go to edit one of the path variables, you will see "Access Mode"; that needs to be changed from Read/Write to RW/Slave. Super easy to change. 

 


Edited by MowMdown
29 minutes ago, MowMdown said: […]

Thanks!

 

Here's my union:

[union]
type = union
upstreams = /mnt/user/Videos /mnt/user/media_vfs
action_policy = ff
create_policy = ff
search_policy = all
cache_time = 120

 

That one works OK. If it says /mnt/user/media_vfs:nc I don't see the cloud files. 

 

Okay - I am not using it with Dockers; my media import and presentation tools currently run on a VM. 

 

Thanks for your time!


Might be related to the mount command you're using. 

 

the ":nc" suffix is simply "No Create" and shouldn't really affect the reading of files so I assume the mount you're using to mount to "media_vfs" directory is possibly the culprit.

 

Edit: No, Plex/Emby would not be able to write unencrypted data to the mount, since rclone encrypts anything that gets written to it. I simply want to avoid writing NEW files to it to avoid corruption, because writing to a mount is not best practice. You can also use the :ro suffix to essentially mount it "read only"; however, that's not what I want, because with :nc I am able to upgrade media using sonarr/radarr, which requires those programs to be able to delete files - you can't do that when it's read only. (I'm not actually sure :nc or :ro is necessary, since we are using the "ff" policy, which essentially only deals with the first listed upstream, i.e. our local array drives.)
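To make the difference concrete, here is a sketch of the two variants in the union remote's config (paths reuse the example above):

[union]
type = union
# :nc - existing cloud files can still be read, modified and deleted,
# but NEW files are never created on the cloud upstream
upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:nc

# :ro alternative - the cloud upstream becomes fully read-only, so
# sonarr/radarr could not delete old files during an upgrade
# upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:ro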

 

When those programs do upgrade media, they actually delete the old file off the cloud mount and then write the new file to the local array drives, where my upload script will eventually write it back to the cloud. It's actually kinda clever the way I set it up.

Edited by MowMdown
4 hours ago, MowMdown said: […]

Thanks - this does seem cleaner than the whole mergerFS route, and your setup does sound pretty clever. I'm partially set up now. I can't be 100% cloud, as my internet is awful, so for now I'm just putting the 4K stuff up there and keeping lower quality locally. 

 

I created Unraid Shares, and then used the mount script you have, and it works. 

 

Do you have a different/modified upload script, or use the one from this thread?

6 minutes ago, axeman said:

Do you have a different/modified upload script, or use the one from this thread?

 

I just run this nightly at 3am using user scripts, super simple. (Obviously I don't have a folder named "files", but you can use your imagination.)

rclone move /mnt/user/media/files crypt:files -v --delete-empty-src-dirs --fast-list --drive-stop-on-upload-limit --order-by size,desc

I have a single 500GB drive that I fill up with whatever I want to be moved to the cloud and that small script does it.
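(For the schedule itself, the User Scripts plugin accepts a standard cron expression when set to a custom schedule - a sketch:)

# run every night at 03:00
0 3 * * *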

Edited by MowMdown
11 minutes ago, MowMdown said: […]

Whoa - that's nice indeed. I like the service account rotation, but it's maybe not needed given my upload bandwidth is only 40Mbit anyway. 


@DZMM I finally managed to get the mount script and upload script working. Great work on the scripts! :)

 

One question I have: will the upload script use all available upload speed, or do I have to adjust some settings for my personal internet connection?

I only have 35Mbit/s upload speed. So if my calculation is correct, in theory I could reach 4.375 MB/s, but I am seeing upload speeds with the upload script below 3 MB/s.

 

Should I make any changes to drive-chunk-size or buffer-size? And do the settings in the scripts override the ones in rclone.conf?

 

Thanks for your help!

36 minutes ago, Ericsson said: […]

Good work!  In the scripts you can set BWLimits and schedules to fit your connection/usage.  If you've only got 4.375MB/s, I would recommend scheduling that full speed for overnight only.

 

You can try playing around with drive-chunk-size etc. to see if that helps if you're really trying to squeeze out a few more MB/s.
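As a sketch for a 35Mbit line, using the BWLimit variables from the upload script quoted earlier in the thread (the times and daytime value are assumptions - tune them to your own usage):

BWLimit1Time="01:00"
BWLimit1="off"        # 01:00-08:00: overnight, use the full 4.375MB/s
BWLimit2Time="08:00"
BWLimit2="2M"         # daytime: ~2MB/s so streaming keeps some headroom
BWLimit3Time="23:00"
BWLimit3="off"        # late evening onwards: unlimited again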

 

 

On 11/4/2020 at 1:40 PM, francrouge said:

Hi all

Can someone explain how I can link my downloaded files so they get uploaded to the drive while I'm still able to seed them?

I'm a bit lost with all the configs.

For now my upload script seems to work, and my mounting script also.

Thx

Sent from my Pixel 2 XL using Tapatalk
 

 

If you use the mergerfs versions of the scripts, which support hardlinks, this is all taken care of, i.e. the files stay local until removed from your torrent client.
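A sketch of why this works (the paths are illustrative, not taken from the scripts): the download folder and the library folder are branches of the same mergerfs mount, so Radarr/Sonarr import by hardlink instead of copy - the torrent client keeps seeding its path while no extra disk space is used:

# illustration: an import as a hardlink (same inode, not a second copy)
ln "/mnt/user/mount_mergerfs/gdrive/downloads/complete/Movie.mkv" \
   "/mnt/user/mount_mergerfs/gdrive/movies/Movie (2020)/Movie.mkv"
# per the note above, the data stays local and seedable until the
# torrent client removes its copy and the upload script moves it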

On 11/1/2020 at 10:07 PM, animeking said:

01.11.2020 15:58:34 INFO: rclone not installed - will try again later.

 

The upload script checks for the presence of the mountcheck file, which is created in the right place by the mount script.  That check is failing - check that you've mounted correctly and/or that your remote names match.
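A quick manual check (a sketch - these paths follow the settings quoted above, so adjust to yours): the upload script looks for mountcheck under $RcloneMountShare/$RcloneRemoteName, which with the settings above resolves to:

ls -l /mnt/user/mount_rclone/plex_vfs/shedbox_vfs/mountcheck
# if this errors, the mount isn't up at that path, or RcloneMountShare /
# RcloneRemoteName don't line up between the mount and upload scripts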

1 hour ago, DZMM said: […]

So if I would like to adjust the drive-chunk-size, I do this in the rclone_mount and rclone_upload scripts?

This will then override what I have in the rclone config from when I created the gdrive remotes?

1 hour ago, Ericsson said: […]

 
 

Just the upload script, if it's upload speed you're adjusting.  Whatever you pass in a command, i.e. in the scripts, overrides the settings in the rclone config file.
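For instance (remote name and values are just examples): if rclone.conf sets chunk_size = 128M for the remote, a flag on the command line wins for that run:

# --drive-chunk-size overrides chunk_size from rclone.conf for this run only
rclone move /mnt/user/local/gdrive gdrive: --drive-chunk-size 512M -vv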

On 10/14/2020 at 2:07 AM, DZMM said:

We've never been able to pinpoint the problem as it seems intermittent.  I haven't had problems for a few months now.

I've been having this issue now. It goes away if I manually kill the rclone script, the rcloneorig process, and the mergerFS process. Not really sure why, or how safe it is to just kill those processes.

Edited by M1kep_

@DZMM Just wanted to say thanks for those scripts! They've been working perfectly for 2 months on my cloud server with Radarr, Sonarr etc. I run the upload script every 2 minutes so I don't need to wait when a movie is being imported! I've filled my gdrive with 20TB (~1,300 movies) worth of movies thanks to you, all automated with Traktarr.

Edited by Lucka
On 11/4/2020 at 3:11 PM, MowMdown said:

 

I just run this nightly at 3am using user scripts, super simple. (Obviously I don't have a folder named "files", but you can use your imagination.)


rclone move /mnt/user/media/files crypt:files -v --delete-empty-src-dirs --fast-list --drive-stop-on-upload-limit --order-by size,desc

I have a single 500GB drive that I fill up with whatever I want to be moved to the cloud and that small script does it.

Sorry if this is a dumb question (maybe my imagination isn't working)... but isn't that /media the mount where your union is? How would rclone know where the "local" version is? Or does it basically traverse the entire folder to see if there are differences between crypt:files and /mnt/user/media/files? 
