Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

6 hours ago, sannitig said:

Hi Folks! I've read the guide - https://rclone.org/onedrive/ - and watched this video to try and deduce, in simple but granular steps, what needs to be done to get my "Pictures" folder in UNRAID to use OneDrive as a backup solution: https://www.youtube.com/watch?v=-b9Ow2iX2DQ&feature=emb_logo&ab_channel=SpaceinvaderOne

 

The goal is to set up the pictures folder so that every little change made in "UNRAID Pictures" (add/delete/etc.) syncs to the OneDrive folder. I've made a OneDrive account specifically for this task and have a full TB to use, and I can get rclone to recognize the OneDrive account using 'rclone lsd XXXX'.

 

What I can't get is the mounting part. When following the video at 8:20, I do not have a mount script - there's simply nothing there. I guess what I want to do is mount my UNRAID Pictures folder as my OneDrive folder?? There seems to be nothing in my /mnt folder though...

 

I do not want two-way syncing, only syncing from UNRAID to the OneDrive account - so this should be fairly easy, right?

 

 

EDIT - I just realized I posted earlier this week, sorry for the double post, but at least this one has more explanation. Wow... sorry about that, guys.

As @BRiT pointed out, everything you need can be found at the start of this thread - although I think that solution is overkill if you are just backing up or syncing a few photos, as the setup in this thread is designed to optimise Plex playback from Google Drive.

 

It probably can be re-used for OneDrive, but if you want to learn how to back up a photos folder using rclone, I'd read the rclone sync page https://rclone.org/commands/rclone_sync/ as I don't see why you even need to mount. If you need help, please create a new thread.
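A minimal sketch of that kind of job, assuming a remote called 'onedrive' has already been created with rclone config (the remote name and paths here are examples, not from the guide):

#!/bin/bash
# One-way backup: make onedrive:Pictures match /mnt/user/Pictures.
# rclone sync only transfers changes and deletes remote files that no longer
# exist locally - it never modifies the local side.
rclone sync /mnt/user/Pictures onedrive:Pictures \
	--backup-dir onedrive:Pictures_deleted \
	--transfers 4 \
	-v

# Trial run first: --dry-run reports what would be copied/deleted without doing it.
# rclone sync /mnt/user/Pictures onedrive:Pictures --dry-run -v

Schedule it with the User Scripts plugin and you have one-way backups without mounting anything.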

3 hours ago, axeman said:

Interestingly - it looks like mergerfs files aren't going to the "local" share but are being uploaded directly...

 

Seeing lots of these from a Sonarr refresh. 

Are you sure this is 'live'? I think the log is showing what happened when your upload script kicked in.

2 hours ago, DZMM said:

Are you sure this is 'live'? I think the log is showing what happened when your upload script kicked in.

Yeah, it's live. I just tried watching a show, and Emby must've updated the .nfo file, because the log shows similar entries (the upload script hasn't been run since a reboot).

 

Maybe smaller files that are already in the rclone cache get uploaded directly?

 

When I copy a large (new) file to the mergerfs mount, it does exactly as expected. The file goes to the corresponding folder on the "local" share.
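If I've understood the guide's mergerfs setup right, that would explain both behaviours: the create policy only applies to new files, so when an app edits something that already lives on the cloud branch (like Emby rewriting an .nfo), the write happens in place through the rclone mount and gets uploaded directly. A simplified sketch of the relevant mount, with paths as examples rather than the guide's exact values:

# mergerfs overlays the local share and the rclone mount; branch order matters.
# category.create=ff ("first found") sends NEW files to the first branch listed
# (the local share); edits to files already on the cloud branch stay there.
mergerfs /mnt/user/local/gdrive:/mnt/user/mount_rclone/gdrive /mnt/user/mount_mergerfs/gdrive \
	-o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff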

 

I'll test some more in a few hours. 


So it seems I've been hit with a ban - "Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded" - when trying to access my files, although I can't figure out why exactly.

 

Looking at the quota stats on https://console.developers.google.com/apis/api/drive.googleapis.com/quotas I don't see myself getting anywhere close to the quota. I've also tried creating a new client ID/secret to bypass this, but I'm still getting the same error back.

 


I also have a completely different Team Drive using different credentials, and that seems to have been hit with a ban as well.

 

Any ideas?

Edited by teh0wner
teh0wner said:

So it seems I've been hit with a ban - "Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded" - when trying to access my files, although I can't figure out why exactly. [...]
You get 750GB per day of upload - I don't know if that includes downloads as well.

Sent from my Pixel 2 XL using Tapatalk


First off, I want to say the setup for this is super easy and I appreciate your hard work, DZMM. I started from scratch on a smaller server this weekend and I was able to have this up and running in no time. 

 

I'm seeing that my cache drive is being eaten up by the "mount_rclone" share, even after a reset. My cache size is set to 400GB, which I copied from the scripts. Is this normal/expected? Is there a need for it to be so large? What exactly is the cache doing here?


1 hour ago, drogg said:

First off, I want to say the setup for this is super easy and I appreciate your hard work, DZMM. I started from scratch on a smaller server this weekend and I was able to have this up and running in no time. 

 

I'm seeing that my cache drive is being eaten up by the "mount_rclone" share, even after a reset. My cache size is set to 400GB, which I copied from the scripts. Is this normal/expected? Is there a need for it to be so large? What exactly is the cache doing here?


Glad you got it all up and running (with no help!) easily.

 

I'm keeping an eye on how quickly the cache fills up on my server, manually browsing it every now and then to see what's in there. My cache is getting populated mainly by Plex's overnight scheduled jobs, i.e. analysing files that haven't been accessed by users.

 

I'm trying to track how long something I've actually watched stays in the cache - if it's getting flushed within a day (or even hours), I'm probably going to turn the cache off. E.g. I've just checked, and some of the stuff I watched only last night isn't in the cache 17 hours later...

 

I'm hesitant to increase the cache size to improve the hit rate, as that's a lot of data to hold (I have 7 teamdrives, so I'm already caching over 2TB) just to get a slightly faster launch time and better seeking every now and then... My server is doing a lot of scheduled work as I've decided to turn thumbnails back on, so maybe it'll settle down a bit in a month or two.
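For anyone who wants to experiment, these are the mount flags that drive this behaviour - a trimmed-down sketch of a mount command, with the remote name and values as examples to tune rather than recommendations:

# VFS cache: --vfs-cache-max-size evicts the least-recently-used data once the
# cache passes the limit; --vfs-cache-max-age evicts anything unused for that long.
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
	--allow-other \
	--dir-cache-time 720m \
	--vfs-cache-mode full \
	--vfs-cache-max-size 400G \
	--vfs-cache-max-age 336h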

 

Edited by DZMM
4 minutes ago, DZMM said:

Probably the same weird ? problem

Hmm, the only thing I did before it happened was move some files around. My local share had a /videos/TV_Classics and a /videos/cloud_Videos/TV_Classics. I moved the entire folder (using a Windows machine) from /local/Videos/ to /local/cloud_Videos.

 

That was basically the last thing I did (re-parent the directory). 

 

A quick reboot fixed it - just curious if the re-parenting did it.
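If it crops up again, a full reboot might not be needed - lazily unmounting the mounts and re-running the mount script should achieve the same thing (paths here are examples from a typical setup, not necessarily yours):

fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs	# lazy-unmount the mergerfs overlay
fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs	# then the rclone mount beneath it
# then re-run the mount script from the first post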

12 hours ago, privateer said:

post a copy of your script

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files you want rclone to upload, without trailing slash
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # only upload files older than this. Suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for backed-up files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
###############################################################################

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
	LocalFilesLocation="$LocalFilesShare"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
	echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
	exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
	CounterNumber=$(find -name 'counter*' | cut -c 11,12) # extract the number from the ./counter_N filename
	CounterCheck="1"
	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		CounterNumber="1"
	fi
	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
	ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	else
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
	fi
else
	RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
	DeleteEmpty="--delete-empty-src-dirs "
else
	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
	DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
	LocalFilesLocation="$LocalFilesShare"
	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
	BackupRemoteLocation=""
	BackupRemoteDeletedLocation=""
	BackupRetention=""
	BackupDir=""
fi

# process files
	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
	--user-agent="$RcloneUploadRemoteName" \
	-vv \
	--buffer-size 512M \
	--drive-chunk-size 512M \
	--tpslimit 8 \
	--checkers 8 \
	--transfers 4 \
	--order-by modtime,$ModSort \
	--min-age $MinimumAge \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--exclude "*fuse_hidden*" \
	--exclude "*_HIDDEN" \
	--exclude ".recycle**" \
	--exclude ".Recycle.Bin/**" \
	--exclude "*.backup~*" \
	--exclude "*.partial~*" \
	--drive-stop-on-upload-limit \
	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
	--bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
	else
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		CounterNumber=$((CounterNumber+1))
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
	fi
else
	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

thx

7 hours ago, francrouge said:

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

 

This script throttles your upload to 15MB/s from 08:00 to 16:00 and 12MB/s from 16:00 to 01:00. Other than that, bandwidth is unlimited.

 

I don't see anything in your script to cut it off at 750GB

 

EDIT: There's a line I missed that stops it at 750GB already in there. Your script is good, no need to change the BWlimits.

Edited by privateer
5 hours ago, francrouge said:

For the moment yes

Thx

Sent from my Pixel 2 XL using Tapatalk
 

One way to do this is to change your BWlimits from unlimited, 15M, and 12M to 9500K for all three. This will keep your upload running all day but will cap the total bandwidth at 9500K. You can also adjust the timing with higher and lower speeds, but remember that if the script is still running when it crosses from one BWlimit to another, it doesn't recheck until the script runs again. This means that shifting across time slots like this can push you past 750GB, even with different limits, if you have a lot of data to upload.
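For reference, a rough back-of-envelope check on those numbers (assuming --bwlimit's default unit of KBytes/s):

# 86,400 seconds in a day; rclone's --bwlimit is in KBytes/s by default.
echo $(( 9500 * 86400 / 1000000 ))	# ≈ 820 -> GB/day at a flat 9500K, just over quota
echo $(( 750000000 / 86400 ))	# ≈ 8680 -> KB/s that spreads 750GB evenly over a day

So a flat limit nearer 8600K is the safer all-day value, though --drive-stop-on-upload-limit backstops an overshoot either way.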

 

EDIT: There's a line I missed that stops it at 750GB already in there. Your script is good, no need to change the BWlimits.

Edited by privateer