Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

Okay, I have this all set up. At first it was not working, as I was trying to bypass unraid's shfs and point everything directly at /mnt/cache; the rclone mount works, but the mergerfs mount does not.

 

Regardless, I do have one dumb question: the mergerfs mount only stays up as long as I do not close the script window, so most people must be running this in the background, or launching it via cron or CA User Scripts on a schedule. Is this expected behaviour? I'm unfamiliar with mergerfs, but it looks like the script exits fully and the output seems to indicate it's all done and the window could be closed. The rclone mount is persistent, but the mergerfs mount is not, at least when run in the foreground in the CA User Scripts GUI.

Link to comment
5 hours ago, crazyhorse90210 said:

Okay, I have this all set up. At first it was not working, as I was trying to bypass unraid's shfs and point everything directly at /mnt/cache; the rclone mount works, but the mergerfs mount does not.

 

Regardless, I do have one dumb question: the mergerfs mount only stays up as long as I do not close the script window, so most people must be running this in the background, or launching it via cron or CA User Scripts on a schedule. Is this expected behaviour? I'm unfamiliar with mergerfs, but it looks like the script exits fully and the output seems to indicate it's all done and the window could be closed. The rclone mount is persistent, but the mergerfs mount is not, at least when run in the foreground in the CA User Scripts GUI.

Are you running it using the "Run Script in Background" option?

 

If not, do it that way. 
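For anyone running it from a plain terminal instead, the same effect comes from detaching the mount command from the shell. A minimal sketch of the detach pattern, with sleep standing in for the real mergerfs/rclone mount command:

```shell
#!/bin/bash
# Detach pattern: a mount launched in the foreground dies with its terminal,
# so background it and disown it. "sleep 30" is a stand-in for the real
# mergerfs/rclone mount command.
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
disown
if kill -0 "$pid" 2>/dev/null; then
    echo "mount process still running after detach"
fi
```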

Link to comment

OK, I keep getting this error even though my rclone remote is mounted and working:

Quote

29.10.2020 23:10:49 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs for superplex_vfs ***
29.10.2020 23:10:49 INFO: *** Starting rclone_upload script for superplex_vfs ***
29.10.2020 23:10:49 INFO: Script not running - proceeding.
29.10.2020 23:10:49 INFO: Checking if rclone installed successfully.
29.10.2020 23:10:49 INFO: rclone not installed - will try again later.
Script Finished Oct 29, 2020 23:10.49

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
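For context on the message itself: as the upload script shown later in the thread does, the "installed" check is really just a test for a mountcheck file inside the rclone mount path, so a path mismatch triggers it even when rclone is fine. A minimal sketch, with an illustrative path:

```shell
#!/bin/bash
# Sketch of the check behind the "rclone not installed" message: the upload
# script only looks for a "mountcheck" file inside the rclone mount location,
# so a wrong path prints this even when rclone itself is fine.
RcloneMountLocation="/tmp/demo_rclone_mount"   # illustrative path
mkdir -p "$RcloneMountLocation"
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    result="rclone installed successfully - proceeding with upload."
else
    result="rclone not installed - will try again later."
fi
echo "$result"
```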

 

Link to comment
On 12/15/2018 at 7:36 AM, DZMM said:

I just made a very useful change to my scripts that has solved my problem with the limit of only being able to upload 750GB/day, which was creating bottlenecks on my local server as I couldn't upload fast enough to keep up with new pending content. 

 

I've added a Teamdrive remote to my setup that allows me to upload another 750GB/day in addition to the 750GB/day to my existing remote.  This is because the 750GB/day limit is per account - by sharing the teamdrive created by my google apps account with another google account, I can upload more.  Theoretically I could repeat for n extra accounts (each one would need a separate token for the team drive), but 1 is enough for me.

 

Steps:

  1. create new team drive with main google apps account
  2. share with 2nd google account
  3. create new team drive remotes (see first post) - remember to get the token from the account in step 2, not the account in step 1, otherwise you won't get the 2nd upload quota
  4. amend mount script (see first post) to mount new tdrive and change unionfs mount from 2-way union to 3-way including tdrive
  5. new upload script to upload to tdrive - my first upload script moves files from the array, and the 2nd from the cache.  Another way to 'load-balance' the uploads could be to run one script against disks 1-3 and the other against 4-x
  6. add tdrive line to cleanup script
  7. add tdrive line to unmount script
  8. Optional repeat if need more upload capacity e.g. change 3-way union to 4-way
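The 2-way to 3-way change in step 4 amounts to adding one branch to the union's colon-separated branch list; a sketch with illustrative paths (any RW/RO modifiers in your existing mount command stay as they are):

```shell
#!/bin/bash
# Going from a 2-way to a 3-way union: append the tdrive branch to the
# colon-separated list; the mount command itself is otherwise unchanged.
LOCAL="/mnt/user/local/gdrive_media_vfs"
GDRIVE="/mnt/user/mount_rclone/gdrive_media_vfs"
TDRIVE="/mnt/user/mount_rclone/tdrive_media_vfs"
BRANCHES_2WAY="$LOCAL:$GDRIVE"
BRANCHES_3WAY="$BRANCHES_2WAY:$TDRIVE"
echo "$BRANCHES_3WAY"
```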

I'm trying this method, but I'm not understanding how to change the 2-way union to 3-way.

Link to comment

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built in rclone union backend/mount is quite good and much less complicated than using the mergerFS setup.

I found a way that allows you to use the rclone VFS caching w/ the cloud mount and using the local storage in tandem. Movies play instantly and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Link to comment
20 minutes ago, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built in rclone union backend/mount is quite good and much less complicated than using the mergerFS setup.

I found a way that allows you to use the rclone VFS caching w/ the cloud mount and using the local storage in tandem. Movies play instantly and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Yes please.  This thread was set up so we could improve the setup together.  Fingers crossed we can implement a one-provider solution using rclone union.

Link to comment
30 minutes ago, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built in rclone union backend/mount is quite good and much less complicated than using the mergerFS setup.

I found a way that allows you to use the rclone VFS caching w/ the cloud mount and using the local storage in tandem. Movies play instantly and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Yes Please! I had too many other things going on and am finally getting back into this thread... Almost feel like holding off again until this gets done, so that I don't have to go back and re-do it. 

Link to comment
3 hours ago, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built in rclone union backend/mount is quite good and much less complicated than using the mergerFS setup.

I found a way that allows you to use the rclone VFS caching w/ the cloud mount and using the local storage in tandem. Movies play instantly and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

Another interested person here. I don't actually have a need to upload, and my local mounts are 100% separate, so I don't actually need mergerfs; but I'm not sure if it offers better performance on top of raw rclone w/ VFS caching, so I would like to see all the options!

 

One more general question: is the consensus that rclone's built-in VFS caching is better than using a separate rclone cache mount? Is that cache mount functionality outdated now?

Link to comment
1 hour ago, crazyhorse90210 said:

DZMM, am I wrong in thinking the CommandX lines are not used anywhere? I can't find those variables anywhere else in the script. What is the supposed function of adding anything into the CommandX lines?


# Add extra commands or filters
Command1="--rc"

 

Hmm, they used to work; I must have deleted them by accident.  I'm going to do some work on the script soon, so I'll add this to the list of things to fix.

Link to comment

I am trying to use the rclone_mount, rclone_unmount and rclone_upload scripts, but I am having difficulty seeing which values I have to adjust for my personal setup.

 

My setup is as following:

 

- remote gdrive1 with encrypted remote secure1 mounted under /mnt/disks/gdrive1 & /mnt/disks/secure1 for movies

- remote gdrive2 with encrypted remote secure2 mounted under /mnt/disks/gdrive2 & /mnt/disks/secure2 for tvshows

- Completed movie downloads are located in /mnt/disks/192.168.178.38_Downloads/complete-downloads/movies

- Completed tvshows downloads are located in /mnt/disks/192.168.178.38_Downloads/complete-downloads/tv

 

So far I have changed the below settings.

 

# REQUIRED SETTINGS
RcloneRemoteName="secure1"
RcloneMountShare="/mnt/disks/secure1"
LocalFilesShare="/mnt/disks/192.168.178.38_Downloads/complete-downloads/"
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="Plex-Media-Server sonarr radarr sabnzbd" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"??"\} # comma separated list of folders to create within the mount

 

Can anyone point me in the right direction? And is there anything additional I need to change?

Link to comment
On 10/31/2020 at 8:43 AM, MowMdown said:

Just giving an update to one of my comments from a month or so ago:

 

I just wanted to say that the built in rclone union backend/mount is quite good and much less complicated than using the mergerFS setup.

I found a way that allows you to use the rclone VFS caching w/ the cloud mount and using the local storage in tandem. Movies play instantly and it's wonderful.

 

If anybody is interested let me know and I'll share my setup.

i would like to try your setup please

Link to comment

@DZMM I'm trying to get the upload to work, but I get an error. I am trying to get my team drive to upload. I have my gdrive (plex_vfs) as the main mergerfs mount and my team drive (shedbox_vfs) set to "ignore" for mergerfs. So my upload setup is as follows:

Quote

#!/bin/bash

RcloneCommand="move"
RcloneRemoteName="shedbox_vfs"
RcloneUploadRemoteName="plex_vfs"
LocalFilesShare="/mnt/user/local/plex_vfs"
RcloneMountShare="/mnt/user/mount_rclone/plex_vfs"
MinimumAge="15m"
ModSort="ascending"
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# Use Service Accounts.
UseServiceAccountUpload="N"
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service/"
ServiceAccountFile="sa_gdrive_upload.json"
CountServiceAccounts="15"

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
    LocalFilesLocation="$LocalFilesShare"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
    LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
    echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
    exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
    cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
    CounterNumber=$(find -name 'counter*' | cut -c 11,12)
    CounterCheck="1"
    if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
        echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        CounterNumber="1"
    fi
    ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
    echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
    ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
    if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
        echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
    else
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
        ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
    fi
else
    RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
    DeleteEmpty="--delete-empty-src-dirs "
else
    echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs from upload."
    DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
    LocalFilesLocation="$LocalFilesShare"
    BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
    BackupRemoteLocation=""
    BackupRemoteDeletedLocation=""
    BackupRetention=""
    BackupDir=""
fi

# process files
    rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    --user-agent="$RcloneUploadRemoteName" \
    -vv \
    --buffer-size 512M \
    --drive-chunk-size 512M \
    --tpslimit 8 \
    --checkers 8 \
    --transfers 4 \
    --order-by modtime,$ModSort \
    --min-age $MinimumAge \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --exclude *fuse_hidden* \
    --exclude *_HIDDEN \
    --exclude .recycle** \
    --exclude .Recycle.Bin/** \
    --exclude *.backup~* \
    --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    --bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
    rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
    if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
        echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
    else
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
        CounterNumber=$((CounterNumber+1))
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
        echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
    fi
else
    echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit
 

The error I get is:

Quote


Script Starting Nov 01, 2020 15:58.34

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

01.11.2020 15:58:34 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs/plex_vfs for shedbox_vfs ***
01.11.2020 15:58:34 INFO: *** Starting rclone_upload script for shedbox_vfs ***
01.11.2020 15:58:34 INFO: Script not running - proceeding.
01.11.2020 15:58:34 INFO: Checking if rclone installed successfully.
01.11.2020 15:58:34 INFO: rclone not installed - will try again later.
Script Finished Nov 01, 2020 15:58.34

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

Script Starting Nov 01, 2020 16:00.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

01.11.2020 16:00:01 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs/plex_vfs for shedbox_vfs ***
01.11.2020 16:00:01 INFO: *** Starting rclone_upload script for shedbox_vfs ***
01.11.2020 16:00:01 INFO: Script not running - proceeding.
01.11.2020 16:00:01 INFO: Checking if rclone installed successfully.
01.11.2020 16:00:01 INFO: rclone not installed - will try again later.
Script Finished Nov 01, 2020 16:00.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

Script Starting Nov 01, 2020 16:01.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt

01.11.2020 16:01:03 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/plex_vfs/shedbox_vfs for plex_vfs ***
01.11.2020 16:01:03 INFO: *** Starting rclone_upload script for plex_vfs ***
01.11.2020 16:01:03 INFO: Script not running - proceeding.
01.11.2020 16:01:03 INFO: Checking if rclone installed successfully.
01.11.2020 16:01:03 INFO: rclone not installed - will try again later.
Script Finished Nov 01, 2020 16:01.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_upload/log.txt
 

 

Edited by animeking
Link to comment

I'm very time-limited this week, but here is a crude setup of what you need to do.

 

Using rclone union instead of the MergerFS setup (this setup assumes you are already familiar with rclone mounts and are using the latest version, 1.53.2).

 

My mount script is pretty straightforward. I create the two directories needed: the first is used to mount the cloud remote, which will utilize rclone's VFS caching, and the second will unionize the VFS mount with the local media.

 

#!/bin/bash

mkdir -p /mnt/disks/media_vfs
mkdir -p /mnt/disks/media

rclone mount --allow-other \
--dir-cache-time 100h \
--fast-list \
--poll-interval 15s \
--cache-dir=/mnt/user/system/rclone \
--vfs-cache-mode full \
--vfs-cache-max-size 500G \
--vfs-cache-max-age 168h \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit off \
--vfs-read-ahead 128M \
crypt: /mnt/disks/media_vfs &

rclone mount --allow-other union: /mnt/disks/media &

In the first rclone mount command I'm using my "crypt:" remote; you will need to replace --> crypt: <-- with your remote.

You must edit the "--cache-dir=" variable to point to where you want rclone to cache your media on your local unraid machine, as well as "--vfs-cache-max-size" to the largest size you are willing to cache on your disk. All the other VFS flags should remain the same.

 

Now the next step is using rclone config to create the "union" remote needed to union the VFS mount with the local media directory.

 

Enter rclone config, select "n" for a new remote, name it "union", then select the union option.

 

It's going to ask for the "upstreams": first type the local path to your media, put in a space, and then the path to the mount location we just made, /mnt/disks/media_vfs. I personally add the :nc modifier to avoid accidentally creating files on the cloud mount.

 

Next rclone will ask for the action_policy, enter ff

Next will be the create_policy, enter ff

Next will be the search_policy, enter all

Last will be the cache_time, leave the 120 default

 

Once it's done it should look something like this:

[union]
type = union
upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:nc
action_policy = ff
create_policy = ff
search_policy = all
cache_time = 120

Remember to replace /plexmedia/ with the root of your media location. (Your remote should follow the same directory structure, or this may cause issues.)

 

Once you actually mount the mounts after the union is created, you should be able to browse "/mnt/disks/media" (the non-_vfs mount) and see a complete list of all your media, whether it's in the cloud or local.

 

One last thing: you will need to change your docker paths from /mnt/user/etc/etc/etc/ to /mnt/disks/media/ so they can read from this mount. You will also need to change them from Read/Write to RW Slave.

 

To unmount at array shutdown:

 

#!/bin/bash

fusermount -uz /mnt/disks/media
fusermount -uz /mnt/disks/media_vfs

 

Edited by MowMdown
Link to comment
On 11/2/2020 at 5:15 PM, MowMdown said:

 

Thanks for typing this up  @MowMdown with your limited time. I got it started, but have a couple of questions (for you or anyone else here). 

 

Quote

mkdir -p /mnt/disks/media_vfs
mkdir -p /mnt/disks/media

I don't have any unassigned disks, so I put mine in /mnt/user/media_vfs and /mnt/user/media. I can't seem to see it as a "share" in unraid. Is that why? If I go into MC, I see everything like I'd want to see it. And MC even recognizes the cloud files (they're in green).

 

Quote

I personally add the :nc modifier to avoid accidentally creating files to the cloud mount. 

When I add that, I don't see the cloud files. I too want to avoid accidentally creating files there, because I'm guessing Emby/Plex would write unencrypted metadata, defeating the whole point of encrypting. Can you clarify a bit?

 

Quote

you will also need to change them from Read/Write to the R/W Slave.

 

I'm not sure I understand where to do this. 

 

Thanks again! 

Link to comment
Quote

I don't have any unassigned disks, so I put mine in /mnt/user/media_vfs and /mnt/user/media. I can't seem to see it as a "share" in unraid. Is that why? If I go into MC, I see everything like I'd want to see it. And MC even recognizes the cloud files (they're in green).

 

You don't need any physical unassigned devices, I don't have any. It's just where I mounted my cloud mounts.

 

1 hour ago, axeman said:

When I add that, I don't see the cloud files. I too want to avoid accidentally creating files there, because I'm guessing Emby/Plex would write unencrypted metadata, defeating the whole point of encrypting. Can you clarify a bit?

Post your "Union" rclone config like I did above.

 

1 hour ago, axeman said:
Quote

you will also need to change them from Read/Write to the R/W Slave.

 

I'm not sure I understand where to do this. 

When using /mnt/disks/ as a path in the docker config for each docker, unraid will throw a warning that the path is not using the Slave option. If you edit the docker container config and go to edit one of the path variables, you will see "Access Mode", which will need to be changed from Read/Write to RW Slave. Super easy to change.

 

[screenshot: docker path "Access Mode" setting]

Edited by MowMdown
Link to comment
29 minutes ago, MowMdown said:

 

You don't need any physical unassigned devices, I don't have any. It's just where I mounted my cloud mounts.

 

Post your "Union" rclone config like I did above.

 

When using /mnt/disks/ as a path in the docker config for each docker, unraid will throw a warning that the path is not using the Slave option. If you edit the docker container config and go to edit one of the path variables, you will see "Access Mode", which will need to be changed from Read/Write to RW Slave. Super easy to change.

 

 

Thanks!

 

Here's my union:

[union]
type = union
upstreams = /mnt/user/Videos /mnt/user/media_vfs
action_policy = ff
create_policy = ff
search_policy = all
cache_time = 120

 

That one works OK. If it says /mnt/user/media_vfs:nc, I don't see the cloud files.

 

Okay - I am not using it with dockers; my media import and presentation tools currently run on a VM.

 

Thanks for your time!

Link to comment

Might be related to the mount command you're using.

 

The ":nc" suffix simply means "No Create" and shouldn't really affect reading files, so I assume the mount command you're using to mount the "media_vfs" directory is possibly the culprit.

 

Edit: No, Plex/Emby would not be able to write unencrypted data to the mount, since rclone is the one encrypting anything that gets written to it. I simply want to avoid writing NEW files to it to avoid corruption, because writing to a mount is not best practice. You can also use the :ro suffix to essentially mount it read-only; however, that's also not what I want, because with :nc I am able to upgrade media using sonarr/radarr, which requires those programs to be able to delete files - and they can't do that when it's read-only. (I'm not actually sure :nc or :ro is necessary, since we are using the "ff" policy, which essentially only deals with the first listed upstream - our local array drives.)

 

When those programs do upgrade media, they actually delete the old files off the cloud mount and then write the new file to the local array drives, where my upload script will essentially write it back to the cloud. It's actually kinda clever the way I set it up.
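For reference, the two modifiers discussed would sit in the union remote's upstreams line like this (paths illustrative): :nc still allows reads and deletes but blocks creating new files, while :ro blocks all writes and deletes.

```
upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:nc
upstreams = /mnt/user/plexmedia/ /mnt/disks/media_vfs:ro
```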

Edited by MowMdown
Link to comment
4 hours ago, MowMdown said:

Might be related to the mount command you're using.

 

The ":nc" suffix simply means "No Create" and shouldn't really affect reading files, so I assume the mount command you're using to mount the "media_vfs" directory is possibly the culprit.

 

Edit: No, Plex/Emby would not be able to write unencrypted data to the mount, since rclone is the one encrypting anything that gets written to it. I simply want to avoid writing NEW files to it to avoid corruption, because writing to a mount is not best practice. You can also use the :ro suffix to essentially mount it read-only; however, that's also not what I want, because with :nc I am able to upgrade media using sonarr/radarr, which requires those programs to be able to delete files - and they can't do that when it's read-only. (I'm not actually sure :nc or :ro is necessary, since we are using the "ff" policy, which essentially only deals with the first listed upstream - our local array drives.)

 

When those programs do upgrade media, they actually delete the old files off the cloud mount and then write the new file to the local array drives, where my upload script will essentially write it back to the cloud. It's actually kinda clever the way I set it up.

Thanks - this does seem cleaner than the whole mergerFS route; I'm partially set up now, and your setup does sound pretty clever. I can't be 100% cloud, as my internet is awful, so for now I'm just putting the 4K stuff up there and keeping lower quality locally.

 

I created Unraid Shares, and then used the mount script you have, and it works. 

 

Do you have a different/modified upload script, or use the one from this thread?

Link to comment
6 minutes ago, axeman said:

Do you have a different/modified upload script, or use the one from this thread?

 

I just run this nightly at 3am using user scripts, super simple. (obviously I don't have a folder named "files" but you can use your imagination)

rclone move /mnt/user/media/files crypt:files -v --delete-empty-src-dirs --fast-list --drive-stop-on-upload-limit --order-by size,desc

I have a single 500GB drive that I fill up with whatever I want to be moved to the cloud and that small script does it.

Edited by MowMdown
Link to comment
11 minutes ago, MowMdown said:

 

I just run this nightly at 3am using user scripts, super simple. (obviously I don't have a folder named "files" but you can use your imagination)


rclone move /mnt/user/media/files crypt:files -v --delete-empty-src-dirs --fast-list --drive-stop-on-upload-limit --order-by size,desc

I have a single 500GB drive that I fill up with whatever I want to be moved to the cloud and that small script does it.

Whoa - that's nice indeed. I like the service account rotation, but it's maybe not needed given my upload bandwidth is only 40Mbit anyway.

Link to comment

@DZMM I finally managed to get the mount script and upload script working. Great work on the scripts! :)

 

One question I have: will the upload script use all available upload speed, or do I have to adjust some settings for my personal Internet connection?

I only have 35Mbit/s upload speed, so if my calculation is correct, in theory I could reach 4.375 MB/s. But I am seeing upload speeds with the upload script below 3 MB/s.
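That conversion is just bits to bytes - divide the line speed in Mbit/s by 8:

```shell
#!/bin/bash
# 35 Mbit/s upload line divided by 8 bits per byte = theoretical MB/s
mbits=35
mbps=$(awk -v m="$mbits" 'BEGIN { printf "%.3f", m / 8 }')
echo "${mbps} MB/s theoretical maximum"
```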

 

Should I make any changes to drive-chunk-size or buffer-size? And do the settings in the scripts override the ones in rclone.conf?

 

Thanks for your help!

Link to comment
36 minutes ago, Ericsson said:

@DZMM I finally managed to get the mount script and upload script working. Great work on the scripts! :)

 

One question I have: will the upload script use all available upload speed, or do I have to adjust some settings for my personal Internet connection?

I only have 35Mbit/s upload speed, so if my calculation is correct, in theory I could reach 4.375 MB/s. But I am seeing upload speeds with the upload script below 3 MB/s.

 

Should I make any changes to drive-chunk-size or buffer-size? And do the settings in the scripts override the ones in rclone.conf?

 

Thanks for your help!

Good work!  In the scripts you can set BWLimits and schedules to fit your connection/usage.  If you have 4.375MB/s, I would recommend only scheduling full speed overnight.

 

You can try playing around with drive-chunk-size etc. to see if that helps, if you're really trying to squeeze out a few more MB/s.
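As a sketch, a schedule sized for that connection could look like this; the times and limits are illustrative, in the same BWLimit variable format the upload script uses:

```shell
#!/bin/bash
# Illustrative --bwlimit schedule: unlimited overnight, throttled during the
# day to leave headroom for streaming; same variable format as the upload script.
BWLimit1Time="01:00"; BWLimit1="off"   # overnight: no limit
BWLimit2Time="08:00"; BWLimit2="2M"    # daytime: leave headroom
BWLimit3Time="23:00"; BWLimit3="4M"    # evening: near full line speed
BWLimitFlag="--bwlimit ${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}"
echo "$BWLimitFlag"
```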

 

 

Link to comment
