Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

1 hour ago, JimmyFigus said:

1) When you say it doesn't poll the local folder and remotes for changes, does it still pick up files added to the merged folder instantly? My current setup means I only ever add to the mount_mergerfs folder, so could I use union for this case?

Yes, but if you upload a file from the local folder the mount won't register the change.

 

1 hour ago, JimmyFigus said:

2) What are the advantages of using union over mergerfs? I notice you say quicker load times, but is that all? My current times don't seem too bad.

 

Quicker load times, and it's a one-stop solution.

Link to comment
7 minutes ago, DZMM said:

Yes, but if you upload a file from the local folder the mount won't register the change.

 

Quicker load times, and it's a one-stop solution.

I'll stick to mergerfs then for now due to my hard linking issue. I'll stick around to see the developments.

 

Cheers for your scripts by the way - they were an awesome base for me to set up my server and build on top of.

Link to comment

Trying to get a setup to combine cache and array usage for torrents/radarr/sonarr/plex. Anyone have this setup? I know this is complex, but:

 

Ideally, Radarr will send some temporarily seeded torrents to the cache directory to be seeded for 2 weeks and the renamed hardlinks moved to gdrive at some point. Similarly, Radarr will upgrade existing permanently seeded torrents on the array. This becomes problematic because radarr can't hardlink from the cache to the array (separate filesystems or something?).

 

My solution is to simply run two instances of mergerfs. One instance is currently working perfectly in my /mnt/user/downloads/ share.

Can I run a second instance of mergerfs on a cache-only share? This would result in hardlinks being created within the cache only for that share, and a separate rclone mover script would move the cache hardlinks to gdrive, in theory. I would just tell Radarr which directory to use, array share or cache share. I just don't know if the scripts will conflict.

 

EDIT: Just wanted to clarify that a big reason I want to do this is to avoid repeatedly writing to the last bits of unfilled sections on my drives with files that are only temporarily there. My understanding is that it's not good to repeatedly write to the same sections like this? Using the cache drive for these frequent writes would be much better.

 

Edited by oldsweatyman
Link to comment
5 hours ago, oldsweatyman said:

Trying to get a setup to combine cache and array usage for torrents/radarr/sonarr/plex. Anyone have this setup? I know this is complex, but:

 

Ideally, Radarr will send some temporarily seeded torrents to the cache directory to be seeded for 2 weeks and the renamed hardlinks moved to gdrive at some point. Similarly, Radarr will upgrade existing permanently seeded torrents on the array. This becomes problematic because radarr can't hardlink from the cache to the array (separate filesystems or something?).

 

My solution is to simply run two instances of mergerfs. One instance is currently working perfectly in my /mnt/user/downloads/ share.

Can I run a second instance of mergerfs on a cache-only share? This would result in hardlinks being created within the cache only for that share, and a separate rclone mover script would move the cache hardlinks to gdrive, in theory. I would just tell Radarr which directory to use, array share or cache share. I just don't know if the scripts will conflict.

 

EDIT: Just wanted to clarify that a big reason I want to do this is to avoid repeatedly writing to the last bits of unfilled sections on my drives with files that are only temporarily there. My understanding is that it's not good to repeatedly write to the same sections like this? Using the cache drive for these frequent writes would be much better.

 

I may be misunderstanding what you are trying to do, but wouldn't it be simpler to just use the mover or the custom mover script? i.e. set your download/mergerfs local share to prefer cache and then the mover will move files to the array when the cache fills up?

 

If you need torrents to move off your cache faster than your mover settings allow, or on a different schedule, you could do something like what I do. I use diskmv to move certain shares and folders off my cache when the cache gets to a certain capacity - that way I can keep, say, work files on the cache longer (almost forever) and 'archive' files where I don't need the fast access.

 

Below you'll see I move .../downloads/complete (torrents that have completed but haven't been imported, i.e. seeding) and .../downloads/seeds (seeds that have been imported) to the array when my cache gets to a certain utilisation.

 

########################################
#######  Move Cache to Array  ##########
########################################

# check if mover running
if [ -f /var/run/mover.pid ]; then
	if ps h `cat /var/run/mover.pid` | grep mover ; then
		echo "$(date "+%d.%m.%Y %T") INFO: mover already running. Not moving files."
	else
# move files
		ReqSpace=150000000 # minimum free space to keep on the cache, in 1K blocks (~150GB)
		AvailSpace=$(df /mnt/cache | awk 'NR==2 { print $4 }') # current free space on the cache, in 1K blocks

		if [[ "$AvailSpace" -ge "$ReqSpace" ]];then
			echo "$(date "+%d.%m.%Y %T") INFO: Space ok - exiting"
		else
			echo "$(date "+%d.%m.%Y %T") INFO: Cache space low.  Moving Files."

#	/usr/local/sbin/mdcmd set md_write_method 1
#	echo "Turbo write mode now enabled"
			echo "$(date "+%d.%m.%Y %T") INFO: moving backup."
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/backup" cache disk2
			echo "$(date "+%d.%m.%Y %T") INFO: moving local."
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/complete" cache disk2
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/seeds" cache disk2
			echo "$(date "+%d.%m.%Y %T") INFO: moving media."
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/books" cache disk2
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/calibre" cache disk2
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/magazines" cache disk2
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/photos" cache disk2
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/videos" cache disk2
			echo "$(date "+%d.%m.%Y %T") INFO: moving software."
			bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/software" cache disk2
#	/usr/local/sbin/mdcmd set md_write_method 0
#	echo "Turbo write mode now disabled"
		fi
	fi
fi

Edit: I've disabled the turbo write lines as I don't have a parity drive anymore.

Edited by DZMM
Link to comment

I might not be understanding some pathing here.

 

Last time I set my downloads share to use the cache, Radarr wouldn't hardlink properly because the cache and array are separate filesystems, obviously. Instead, it would copy the renamed file to the array.

 

So, using your setup, would this be the flow?

 

1. Radarr tells torrent client to save movie to /mnt/cache/downloads/local_storage/gdrive/seed

2. Radarr renames & imports completed movie to /mnt/cache/downloads/local_storage/gdrive/movies

3. Mover transfers imported movie to /mnt/user/downloads/local_storage/gdrive/movies

4. Rclone upload script moves the file from local_storage/gdrive/movies to the rclone remote, with the path preserved in the mount_mergerfs view

In this case, I'd point Plex to /mnt/user/downloads/mount_mergerfs/gdrive/movies?

 

Does the mover affect how plex sees the file?

 

I think there might be some extra confusion because my three mounts (mount_mergerfs, local_storage, and mount_rclone) are inside of a single share called "downloads" instead of three separate shares.

 

Edited by oldsweatyman
Link to comment
1 hour ago, oldsweatyman said:

I might not be understanding some pathing here.

 

Last time I set my downloads share to use the cache, Radarr wouldn't hardlink properly because the cache and array are separate filesystems, obviously. Instead, it would copy the renamed file to the array.

 

So, using your setup, would this be the flow?

 


1. Radarr tells torrent client to save movie to /mnt/cache/downloads/local_storage/gdrive/seed

2. Radarr renames & imports completed movie to /mnt/cache/downloads/local_storage/gdrive/movies

3. Mover transfers imported movie to /mnt/user/downloads/local_storage/gdrive/movies

4. Rclone upload script moves the file from local_storage/gdrive/movies to the rclone remote, with the path preserved in the mount_mergerfs view

In this case, I'd point Plex to /mnt/user/downloads/mount_mergerfs/gdrive/movies?

 

Does the mover affect how plex sees the file?

 

I think there might be some extra confusion because my three mounts (mount_mergerfs, local_storage, and mount_rclone) are inside of a single share called "downloads" instead of three separate shares.

 

If you want full hardlink support, map all docker paths to /user --> /mnt/user, then within the docker set all locations to a sub-path of /user/mount_mergerfs. Then behind the scenes unRAID and rclone will behave as normal and manage where the files really reside.
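
As a rough illustration only (the container and paths here are examples, not the exact template settings), it ends up looking something like this:

# one volume mapping on the container (example using the linuxserver Radarr image)
docker run -d --name radarr -v /mnt/user:/user linuxserver/radarr

# then inside Radarr everything points at the merged path, e.g.
#   root folder:            /user/mount_mergerfs/movies
#   download client folder: /user/mount_mergerfs/downloads
# because both paths sit on the same mergerfs mount, imports can hardlink instead of copying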

Link to comment
1 hour ago, DZMM said:

If you want full hardlink support, map all docker paths to /user --> /mnt/user, then within the docker set all locations to a sub-path of /user/mount_mergerfs. Then behind the scenes unRAID and rclone will behave as normal and manage where the files really reside.

Yeah, that's my current setup with one instance of mergerfs. I just can't use the cache with this because you can't hardlink from cache to array - it duplicates the file instead :/ I thought I'd just make another mergerfs instance on a separate share that's cache-only to solve that...

Never mind, I think I'm complicating it too much lol. But just so I know, do you happen to know if your scripts will conflict with two separate instances running?

EDIT: Dumb question I guess. There's clearly some conflicting stuff. I'll work on it some more and see.

Edited by oldsweatyman
Link to comment
56 minutes ago, oldsweatyman said:

Yeah, that's my current setup with one instance of mergerfs. I just can't use the cache with this because you can't hardlink from cache to array - it duplicates the file instead :/ I thought I'd just make another mergerfs instance on a separate share that's cache-only to solve that...

Never mind, I think I'm complicating it too much lol. But just so I know, do you happen to know if your scripts will conflict with two separate instances running?

EDIT: Dumb question I guess. There's clearly some conflicting stuff. I'll work on it some more and see.

You're messing up your mappings somewhere, as hardlinks do work.

Link to comment
18 hours ago, DZMM said:

If want full hardlink support map all docker paths to /user --> /mnt/user then within the docker set all locations to a sub path of /user/mount-mergerfs.   Then behind the scenes unraid and rclone will behave as normal and manage where the files really reside

First @DZMM, thanks for sharing this, and thanks to everyone who has contributed to making it better. I've been using it for a few months now with mostly no issues (I had that odd orphaned-image issue a while back that a few of us had with mergerfs). That was really the only hiccup. I'm really considering moving everything offsite...

 

As for hardlinks, this is timely, as I've finally decided to get around to making hardlinks and getting seeding to work properly. When I originally set things up I had:

/mnt/user/media (with subdirectories for tv/movies/music/audiobooks)

/mnt/user/downloads (with subdirectories for complete/incomplete)

 

Your script came along and I then added:

/mnt/user/gdrive_local

/mnt/user/gdrive_mergerfs

/mnt/user/gdrive_rclone

 

I know just mapping all containers to /mnt/user would solve this...but I'm a little apprehensive about giving all the necessary applications read/write access to the entire array. I don't have any of this data going to cache...so is there anything stopping me, or a good reason not to, from stuffing everything into /mnt/user/media and then mapping everything to that?

 

Link to comment
1 hour ago, Spatial Disorder said:

I know just mapping all containers to /mnt/user would solve this...but I'm a little apprehensive about giving all the necessary applications read/write to the entire array.

 
 

If you want some partitioning, you could do /mergerfs --> /mnt/user/gdrive_mergerfs and then within your dockers use the following paths:

 

/mergerfs/downloads for /mnt/user/gdrive_mergerfs/downloads/  
/mergerfs/media/tv for /mnt/user/gdrive_mergerfs/tv/  

 

 

The trick is that your torrent, radarr, sonarr etc. dockers have to be moving files around within the mergerfs mount, i.e. /mergerfs.

 

If you map:

/mergerfs --> /mnt/user/gdrive_mergerfs
/downloads --> /mnt/user/gdrive_mergerfs/downloads
/downloads_local (adding for another example) --> /mnt/user/gdrive_local/downloads

 

when you ask the docker to hardlink a file from /downloads or /downloads_local to /mergerfs it won't work.  It has to be from /mergerfs/downloads to /mergerfs/media/tv - within /mergerfs.

 

To be clear, when I say I do /user --> /mnt/user it's because it just makes my life easier when I'm setting up all dockers to talk to each other (I'm lazy) - within my media dockers I still only use paths within /user/mount_mergerfs e.g.  /user/mount_mergerfs/downloads and /user/mount_mergerfs/tv_shows
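
If in doubt, a quick sanity check from inside one of the containers (paths here are just for illustration - use your own layout) is to hardlink a file within the merged mount and confirm both names point at the same inode:

ln /mergerfs/downloads/complete/episode.mkv /mergerfs/media/tv/episode.mkv
stat -c '%i %h %n' /mergerfs/downloads/complete/episode.mkv /mergerfs/media/tv/episode.mkv
# the same inode number and a link count of 2 on both names means the hardlink worked (no second copy was made)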

Link to comment
1 hour ago, Spatial Disorder said:

I'm really considering moving everything offsite...

It's the logical next step.  I've ditched my parity drive (I back up to gdrive using Duplicati) and sold all but 2 of my HDDs, which store seeds, pending uploads and my work/personal documents.  I don't really use the mass-storage functionality anymore other than pooling the 2 HDDs - it'd be kinda impossible and mega expensive to store 0.5PB+ of content locally...

 

My unRAID server's main purpose is to power VMs (3x W10 VMs for me and the kids + a pfSense VM) and dockers (Plex server with remote storage, Home Assistant, UniFi, a Minecraft server, Nextcloud, Radarr etc.).

Edited by DZMM
Link to comment
4 hours ago, DZMM said:

If you want some partitioning, you could do /mergerfs --> /mnt/user/gdrive_mergerfs and then within your dockers use the following paths:

 


/mergerfs/downloads for /mnt/user/gdrive_mergerfs/downloads/  
/mergerfs/media/tv for /mnt/user/gdrive_mergerfs/tv/  

 

 

The trick is that your torrent, radarr, sonarr etc. dockers have to be moving files around within the mergerfs mount, i.e. /mergerfs.

 

If you map:


/mergerfs --> /mnt/user/gdrive_mergerfs
/downloads --> /mnt/user/gdrive_mergerfs/downloads
/downloads_local (adding for another example) --> /mnt/user/gdrive_local/downloads

 

when you ask the docker to hardlink a file from /downloads or /downloads_local to /mergerfs it won't work.  It has to be from /mergerfs/downloads to /mergerfs/media/tv - within /mergerfs.

 

To be clear, when I say I do /user --> /mnt/user it's because it just makes my life easier when I'm setting up all dockers to talk to each other (I'm lazy) - within my media dockers I still only use paths within /user/mount_mergerfs e.g.  /user/mount_mergerfs/downloads and /user/mount_mergerfs/tv_shows

Thanks, that makes sense. The more I think about it, the more I'm leaning toward going all in on this in order to simplify everything. Right now I have a mix of data locally and in gdrive. I'm with you on being lazy...I work in IT and the older I get, the less I want to mess with certain aspects of it...I just want reliability. I really only have one share that would even be a concern...and now that I think about it, it should probably live in an encrypted vault...

Link to comment
5 hours ago, DZMM said:

It's the logical next step.  I've ditched my parity drive (I back up to gdrive using Duplicati) and sold all but 2 of my HDDs, which store seeds, pending uploads and my work/personal documents.  I don't really use the mass-storage functionality anymore other than pooling the 2 HDDs - it'd be kinda impossible and mega expensive to store 0.5PB+ of content locally...

 

My unRAID server's main purpose is to power VMs (3x W10 VMs for me and the kids + a pfSense VM) and dockers (Plex server with remote storage, Home Assistant, UniFi, a Minecraft server, Nextcloud, Radarr etc.).

Wow...0.5PB...that's pretty impressive. Any concerns with monthly bandwidth utilization? No issues from your ISP?

 

I've also been using Duplicati for the last few years and have been very happy with it overall. Do you do anything different with your offsite mount to ensure you could recover in the event of...say, an accidental deletion?

 

 

Link to comment
30 minutes ago, Spatial Disorder said:

Wow...0.5PB...that's pretty impressive. Any concerns with monthly bandwidth utilization? No issues from your ISP?

My current 360/180 ISP sent me an email saying my upload had been running at 100% for a few months and they wondered if I'd been hacked.  I thanked them for their concern and said I was OK and knew what the traffic was.  I use bwlimits, so my upload now runs at about 80Mbps on average over the course of a day, as my big upload days are over.  My previous 1000/1000 ISP didn't say anything despite my upload running at about 60-70% for over a year.

30 minutes ago, Spatial Disorder said:

Do you do anything different with your offsite mount to ensure you could recover in the event of...say an accidental deletion?

 

 

I keep a copy of the most recent VM and appdata backups locally. If there was... an accidental deletion, I'd probably just write off all the content, as 1) I can't rebuild a 0.5PB array and 2) it'd probably be easier to... replace the content I want than spend weeks/months downloading it from gdrive.

 

I did look into backing up my tdrives on another user's setup (he currently backs his up to mine), but I stopped as the actual process of downloading off his tdrive would face the same problems.

Edited by DZMM
Link to comment
15 minutes ago, Bolagnaise said:

@DZMM a script enhancement might be to add this - it will stop the upload if the 750GB limit is reached.

 

--drive-stop-on-upload-limit

 

 

https://forum.rclone.org/t/new-flag-for-google-drive-drive-stop-on-upload-limit-to-stop-at-750-gb-limit/13800

 

43 minutes ago, Bolagnaise said:

For what it's worth, I'm still on unionfs. It's been rock-solid stable for me for over 12 months now, and since getting gigabit internet, load times have been instantaneous.

 

The new mergerfs-based scripts do this and much more...
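
For reference, the flag just gets appended to the rclone move command in the upload script - something along these lines (illustrative only, not the exact line from the script):

rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
    --min-age 15m \
    --bwlimit "01:00,off 08:00,20M 16:00,20M" \
    --drive-stop-on-upload-limit # abort the job once the 750GB/day Google Drive quota is hit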

Link to comment

I need a little help here ... maybe a lot of help. I have everything set up to the point where, when I put a file and/or directory into the "local" share and run the "rclone_upload" script, everything is uploaded to Google Drive and deleted from the local share, as expected.

 

My Plex library is located on an Unassigned Devices disk at /mnt/disks/plex. I would like to transfer that whole library to Google Drive, but I can't figure out how to mount /mnt/disks/plex via the rclone_mount script so that rclone has access to it and uploads it to Google Drive when I run the rclone_upload script.

Link to comment
26 minutes ago, Ultra-Humanite said:

I need a little help here ... maybe a lot of help. I have everything set up to the point where, when I put a file and/or directory into the "local" share and run the "rclone_upload" script, everything is uploaded to Google Drive and deleted from the local share, as expected.

 

My Plex library is located on an Unassigned Devices disk at /mnt/disks/plex. I would like to transfer that whole library to Google Drive, but I can't figure out how to mount /mnt/disks/plex via the rclone_mount script so that rclone has access to it and uploads it to Google Drive when I run the rclone_upload script.

Set /mnt/disks/Plex as your local location in the mount and upload scripts.

Link to comment

Thanks for the fast response. In my initial question I misspoke when I said local - by local I meant that anything I placed in "/mnt/user/local/gdrive_media_vfs" was uploaded without any issue.

 

If I set LocalFilesShare="/mnt/disks/Plex" in the mount and upload scripts, I get an empty gdrive_media_vfs folder in the Plex folder and this output when I run the upload script:

 

2020/06/20 14:22:11 INFO : Starting bandwidth limiter at 20MBytes/s
2020/06/20 14:22:11 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/06/20 14:22:12 DEBUG : mountcheck: Excluded
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2020/06/20 14:22:12 DEBUG : {}: Removing directory
2020/06/20 14:22:12 DEBUG : Local file system at /mnt/disks/Plex/gdrive_media_vfs: deleted 1 directories
2020/06/20 14:22:12 INFO : There was nothing to transfer

 

Here is my config:

[gdrive]
type = drive
client_id =
client_secret =
scope = drive
token =
server_side_across_configs = true
root_folder_id =

 

[gdrive_media_vfs]
type = crypt
remote = gdrive:gdrive_media_vfs
filename_encryption = standard
directory_name_encryption = true
password =
password2 =

 

Here is the mount script:

 

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.7 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
LocalFilesShare="/mnt/disks/Plex" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{""\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

 

 

Here is the upload script:

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/disks/Plex" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="20M"
BWLimit3Time="16:00"
BWLimit3="20M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

 

Link to comment
12 minutes ago, Ultra-Humanite said:

Thanks for the fast response. In my initial question I misspoke when I said local - by local I meant that anything I placed in "/mnt/user/local/gdrive_media_vfs" was uploaded without any issue.

 

If I set LocalFilesShare="/mnt/disks/Plex" in the mount and upload scripts, I get an empty gdrive_media_vfs folder in the Plex folder and this output when I run the upload script:

 

2020/06/20 14:22:11 INFO : Starting bandwidth limiter at 20MBytes/s
2020/06/20 14:22:11 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/06/20 14:22:12 DEBUG : mountcheck: Excluded
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for checks to finish
2020/06/20 14:22:12 DEBUG : Encrypted drive 'gdrive_media_vfs:': Waiting for transfers to finish
2020/06/20 14:22:12 DEBUG : {}: Removing directory
2020/06/20 14:22:12 DEBUG : Local file system at /mnt/disks/Plex/gdrive_media_vfs: deleted 1 directories
2020/06/20 14:22:12 INFO : There was nothing to transfer

 


What folders are in /mnt/disks/Plex?  I've got mergerfs mounts that include UD disks and they work fine.

Link to comment
13 minutes ago, Ultra-Humanite said:

If I move anything to /mnt/disks/Plex/gdrive_media_vfs - for example the tv folder from /mnt/disks/Plex/tv - it gets uploaded to Google Drive no problem. I guess that's a solution.

that's what's supposed to happen!
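
The upload script only moves what's inside LocalFilesShare/RcloneRemoteName, so for an existing library it's just a case of moving the folders in, e.g. something like (paths illustrative):

mkdir -p /mnt/disks/Plex/gdrive_media_vfs
mv /mnt/disks/Plex/tv /mnt/disks/Plex/gdrive_media_vfs/tv
# the upload script will then pick up /mnt/disks/Plex/gdrive_media_vfs/tv and move it to the gdrive_media_vfs: remote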

 

Link to comment
