Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

On 3/1/2023 at 9:00 AM, Playerz said:

 

 

I was wondering if there is a way to stop it from doing this? Each time, I have to stop Unraid. I am using the unmount script.

 

[screenshot]

Looking at those logs, I'm pretty sure you're not supposed to unmount /mnt/user ... you might have to check your folder structure to work out what to unmount properly. Also look at the pull request currently open on GitHub. It includes some new changes for the unmount script that haven't been merged into the main branch yet but are vital for things to be properly unmounted. I had to incorporate some of those changes to get everything working properly.
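
For reference, a minimal unmount sketch that targets the rclone mount point itself rather than /mnt/user (the path is an example; match it to your RcloneMountShare/RcloneRemoteName values):

#!/bin/bash
# Example path - adjust to your actual rclone mount location.
MountPoint="/mnt/user/mount_rclone/gdrive_media_vfs"

if mountpoint -q "$MountPoint"; then
	# -u unmount, -z lazy detach in case files are still open
	fusermount -uz "$MountPoint" && echo "Unmounted $MountPoint"
else
	echo "$MountPoint is not mounted - nothing to do"
fi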

Link to comment
On 2/20/2023 at 4:38 AM, Nono@Server said:

Hello,

I use the script with the flags --uid 99 --gid 100, but unfortunately the directories never get the right permissions:

# create rclone mount
	rclone mount \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--allow-other \
	--umask 002 \
	--uid 99 \
	--gid 100 \
	--dir-cache-time $RcloneMountDirCacheTime \
	--attr-timeout $RcloneMountDirCacheTime \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size $RcloneCacheMaxSize \
	--vfs-cache-max-age $RcloneCacheMaxAge \
	--vfs-read-ahead 1G \
	--bind=$RCloneMountIP \
	$RcloneRemoteName: $RcloneMountLocation &

I also used this script, which works, but it is not the right solution because the upload script deletes the directories, and when they are recreated they are owned by root:

 

#!/bin/sh
# Reset ownership and permissions on the local share
for dir in "/mnt/user/Local"
do
	echo "$dir"
	chmod -R ug+rw,ug+X,o-rwx "$dir"
	chown -R nobody:users "$dir"
done

Does anyone know why the rclone mount doesn't get the correct permissions?

I would check the permissions you have on your rclone mount as well as on the gdrive-vfs side to see if they match. Add a random file to the rclone mount via the gdrive-vfs remote and see if the file is created with the right ownership and permissions.
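
A minimal way to run that test from the Unraid terminal (a sketch; the mount path is an example, so substitute your own):

# Example path - adjust to your actual rclone mount location.
MountDir="/mnt/user/mount_rclone/gdrive_media_vfs"

# Create a test file inside the mount...
touch "$MountDir/perm_test.txt"

# ...then check ownership: expect uid 99 (nobody) and gid 100 (users)
# if the --uid/--gid flags took effect.
ls -ln "$MountDir/perm_test.txt"

# Clean up
rm "$MountDir/perm_test.txt"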

Link to comment

My mount was working fine, but since yesterday it keeps stopping. OneDrive files are very slow to play and it's crashing all the time. Anyone else?

 

My mount script:

 

rclone mount --daemon --tpslimit 5 --tpslimit-burst 5 --onedrive-list-chunk=500 --onedrive-chunk-size=250M --drive-chunk-size=250M --fast-list --ignore-existing --ignore-checksum --ignore-size --checkers 5 --transfers 5 --allow-other --default-permissions --umask 000 --uid 99 --gid 100 --log-level INFO --poll-interval 10s --drive-pacer-burst 1000 --vfs-cache-mode off --vfs-read-ahead 1M OneDrive_Sharepoint_Secure: /mnt/user/OneDrive

Link to comment

I have a very weird issue

 

I've started using the upload script (previously I was mostly just using the mount one). What's happening is that when I get Lidarr to download a music file, Plex picks it up and sees it. Then, when the uploader script runs (after the 15min age timeout), it uploads everything and the files show up on gdrive (when I check on the actual website). After that, Plex sees them as gone and trashes them.

 

The thing is, the files are still there. I can still access them via the web page, and also via my other rclone mount on a Windows machine. It's only the Unraid server itself that can't see them; it's like it skips them entirely after the upload.

 

Can anyone help with this or has experienced this issue?

Link to comment
13 hours ago, fzligerzronz said:

I have a very weird issue

 

I've started using the upload script (previously I was mostly just using the mount one). What's happening is that when I get Lidarr to download a music file, Plex picks it up and sees it. Then, when the uploader script runs (after the 15min age timeout), it uploads everything and the files show up on gdrive (when I check on the actual website). After that, Plex sees them as gone and trashes them.

 

The thing is, the files are still there. I can still access them via the web page, and also via my other rclone mount on a Windows machine. It's only the Unraid server itself that can't see them; it's like it skips them entirely after the upload.

 

Can anyone help with this or has experienced this issue?

You're pointing your dockers to your local media folder instead of the merger folder (local + rclone mount).

Link to comment
7 hours ago, Kaizac said:

You're pointing your dockers to your local media folder instead of the merger folder (local + rclone mount).

Nope. I followed the instructions as they're meant to be (well, hopefully all of them). Here are the screenshots:

 

Mount Script showing the folders: [screenshot]

Upload Script showing the folders: [screenshot]

Plex Docker settings: [screenshot]

Plex Folder Settings for Music: [screenshot]

Lidarr Docker Settings: [screenshot]

Lidarr Folder Settings: [screenshot]

 

What I've noticed is that the owner changed from Nobody to Root; the folder permissions are still the same, but everything is wonky after it does the upload. It's like Plex sees it before the upload, then after the upload it doesn't at all. Yet the items are all still there: I can access them just by browsing through the file manager on Unraid, I can hop onto drive.google.com and they show there, and I can also stream them from another Windows machine running rclone. Heck, my old Plex server on that Windows machine sees them and automatically adds them!

 

I'm at a loss for what is happening here.

Link to comment
1 hour ago, fzligerzronz said:

Nope. I followed the instructions as they're meant to be (well, hopefully all of them). Here are the screenshots:

 

Mount Script showing the folders: [screenshot]

Upload Script showing the folders: [screenshot]

Plex Docker settings: [screenshot]

Plex Folder Settings for Music: [screenshot]

Lidarr Docker Settings: [screenshot]

Lidarr Folder Settings: [screenshot]

 

What I've noticed is that the owner changed from Nobody to Root; the folder permissions are still the same, but everything is wonky after it does the upload. It's like Plex sees it before the upload, then after the upload it doesn't at all. Yet the items are all still there: I can access them just by browsing through the file manager on Unraid, I can hop onto drive.google.com and they show there, and I can also stream them from another Windows machine running rclone. Heck, my old Plex server on that Windows machine sees them and automatically adds them!

 

I'm at a loss for what is happening here.

You're mixing data paths, so the host system doesn't see it as one drive anymore, which makes it seem like files have been moved. The ownership changing could be a Lidarr issue, if you changed its settings to chown the files a certain way.

 

Fix the file paths first. You're using /data and /user mixed. You should use only /user in all your dockers, pointing at the merger folder.

It's a common issue with mixing binhex and linuxserver dockers, for example. You need to get into the habit of using the same paths when dockers need to communicate with each other. Right now Lidarr is telling Plex that the files are located at /user, but Plex only knows /data.

I would suggest getting Plex on the /user mapping and then doing a rescan.
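
As an illustration, a sketch of consistent mappings (the host path and image names are examples; the point is that every docker sharing media maps the same host folder to the same container path):

# Example only: both containers map the merger folder to the same /user path.
docker run -d --name=plex \
	-v /mnt/user/mount_mergerfs/gdrive_media_vfs:/user \
	plexinc/pms-docker

docker run -d --name=lidarr \
	-v /mnt/user/mount_mergerfs/gdrive_media_vfs:/user \
	linuxserver/lidarr

# Inside both containers the music library is then e.g. /user/music,
# so the paths Lidarr hands to Plex resolve to the same files.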

Link to comment
2 hours ago, Kaizac said:

You're mixing data paths, so the host system doesn't see it as one drive anymore, which makes it seem like files have been moved. The ownership changing could be a Lidarr issue, if you changed its settings to chown the files a certain way.

 

Fix the file paths first. You're using /data and /user mixed. You should use only /user in all your dockers, pointing at the merger folder.

It's a common issue with mixing binhex and linuxserver dockers, for example. You need to get into the habit of using the same paths when dockers need to communicate with each other. Right now Lidarr is telling Plex that the files are located at /user, but Plex only knows /data.

I would suggest getting Plex on the /user mapping and then doing a rescan.

So I should add the paths for the container like this from now on?

[screenshot]

 

Link to comment
42 minutes ago, Kaizac said:

Yep, and then inside a docker like Plex you use /user/gdrive/audio, for example.

Well eff me. It freaking works! I'll get to changing all the paths to match this. Thanks a lot, man!

 

I'll leave my episodes for now, as those took a week just to scan in the original folders (nearly 200,000 files), and do the smaller ones first. Luckily I haven't added the other 4 big folders!

Edited by fzligerzronz
Link to comment
6 hours ago, fzligerzronz said:

Well eff me. It freaking works! I'll get to changing all the paths to match this. Thanks a lot, man!

 

I'll leave my episodes for now, as those took a week just to scan in the original folders (nearly 200,000 files), and do the smaller ones first. Luckily I haven't added the other 4 big folders!

Glad it worked! And yes, rebuilding those libraries is annoying. Also, don't be surprised if you get API banned after such big scans. It should still be able to scan the library fine, but playback won't work.

You could always start a new library for the 200k and delete the old one once it's finished.

Link to comment
On 3/13/2023 at 6:09 PM, Kaizac said:

Glad it worked! And yes, rebuilding those libraries is annoying. Also, don't be surprised if you get API banned after such big scans. It should still be able to scan the library fine, but playback won't work.

You could always start a new library for the 200k and delete the old one once it's finished.

I've done quite a bit of research and trial and error with rclone settings, and I haven't been API banned in over a year now, even with constant scanning and uploads :)

 

Rebuilding is definitely annoying. I did try transferring my metadata over from Windows to Unraid, but there aren't many helpful tutorials, and the official Plex one isn't even clear. I'm no expert, so I was confused as hell.

Link to comment
  • 2 weeks later...

Some of my files are not getting uploaded, yet none of the filenames appear to match the exclude flags, so I'm at a bit of a loss as to what is happening.

 

I also made sure the files were old enough.

 

DEBUG : rclone: Version "v1.62.1" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/RCLONE LOCAL FILES" "REMOTE:" "--user-agent=AGENT" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "5m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,80M 08:00,80M 16:00,80M" "--bind=" "--delete-empty-src-dirs"]
DEBUG : tv/Your Honor (US)/Season 1/Your Honor (US) - S01E05 - Part Five Bluray-1080p.mkv: Excluded
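
One way to see exactly which rule is excluding a file is rclone's filter debugging (a sketch based on the parameters in the log above; --dry-run makes no changes):

# -vv plus --dump filters prints the compiled include/exclude rules,
# so you can see which pattern (or the --min-age check) a path matches.
rclone move "/mnt/user/RCLONE LOCAL FILES" REMOTE: \
	--dry-run -vv --dump filters \
	--min-age 5m \
	--exclude "downloads/**" \
	--exclude "*fuse_hidden*" \
	--exclude "*_HIDDEN" \
	--exclude ".recycle**" \
	--exclude ".Recycle.Bin/**" \
	--exclude "*.backup~*" \
	--exclude "*.partial~*"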

 

Link to comment
9 minutes ago, Arragon said:

I'm new to this topic and would like to know where to get unlimited storage for $10/month, as it says in the first post. Google Workspace doesn't seem to offer it at this price any longer.

It's now become Workspace Enterprise Standard, which is around 17 euros per month. But from what I've found online, Google stopped offering unlimited storage with 1 account. You'll need 5 now, I think, and even then it's only 5TB per user; maybe they will give you more if you have a good business case. OneDrive and Dropbox are the only alternatives now, I think.

Link to comment
Just now, Arragon said:

So maybe Dropbox Advanced is better, but it still requires 3 users at €18/mo: https://www.dropbox.com/business/plans-comparison

Correct, but with Dropbox you are also required to ask for additional storage. So if you are just starting out, you could ask for a good amount of storage and then request more whenever you need it. It really depends on whether the storage is worth the price and hassle to you.

Link to comment

I have another question.

My server went down last night due to maintenance. Upon restarting, it took nearly 2 hours to mount the drive. Checking the logs, it had to remove every single folder in the vfs-cache, I presume?

 

2023/03/30 23:18:40 DEBUG : 4 go routines active
30.03.2023 23:18:40 INFO: *** Creating mount for remote gdrive
30.03.2023 23:18:40 INFO: sleeping for 5 seconds
2023/03/30 23:18:40 Failed to start remote control: failed to init server: listen tcp 127.0.0.1:5572: bind: address already in use
30.03.2023 23:18:45 INFO: continuing...
30.03.2023 23:18:45 CRITICAL: gdrive mount failed - please check for problems.  Stopping dockers
"docker stop" requires at least 1 argument.
See 'docker stop --help'.

Usage:  docker stop [OPTIONS] CONTAINER [CONTAINER...]

Stop one or more running containers
Script Finished Mar 30, 2023  23:18.45

Full logs for this script are available at /tmp/user.scripts/tmpScripts/gdrive/log.txt

2023/03/30 23:20:14 INFO  : Plex/Wrestling/WWE/Daily/WWE SMACKDOWN/Season 25: Removing directory
2023/03/30 23:20:14 INFO  : Plex/Wrestling/WWE/Daily/WWE SMACKDOWN: Removing directory
2023/03/30 23:20:14 INFO  : Plex/Wrestling/WWE/Daily/WWE RAW/Season 31: Removing directory
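
Separately, the "address already in use" line suggests a previous rclone process was still holding the remote-control port when the mount script re-ran; a quick check from the terminal (a sketch, using the rc port 5572 from the log):

# Check whether something is still listening on rclone's rc port (5572)
ss -tlnp | grep 5572

# If so, identify the stale rclone process before deciding how to stop it
ps aux | grep "[r]clone"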

 

How do I browse to /tmp to see the log?
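
For reference, the log path printed by the script can be read straight from the Unraid terminal, e.g.:

# Print the whole log once
cat /tmp/user.scripts/tmpScripts/gdrive/log.txt

# Or follow it live while the script runs
tail -f /tmp/user.scripts/tmpScripts/gdrive/log.txt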

Link to comment

I'm installing a clean copy of Sonarr and I can't link up the cloud folder that was linked to the prior version. When I attempt to add the folder I get this error:

 

[screenshot]

 

Inside Sonarr I see this:

 

2023-04-02 16:51:21.0|Trace|DiskProviderBase|Directory '/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/' isn't writable. Access to the path '/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/sonarr_write_test.txt' is denied.

 

Has anyone come across this or know of any solutions? I'm stuck and can't re-add my cloud mount...

 

The mounted folder I'm having issues with appears to be mounted with permissions 755. Not sure if that could be causing problems (e.g. whether it needs to be 777).
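
One way to test that theory (a sketch: the host path is hypothetical and should be adjusted to wherever the container's /gdrive/mount_unionfs is mapped, and it assumes Sonarr runs as uid 99 / nobody):

# Hypothetical host path - adjust to your own mapping for /gdrive/mount_unionfs
TestDir="/mnt/user/mount_unionfs/gdrive_media_vfs/tv_shows"

# Check the ownership and mode the container user actually sees
ls -ldn "$TestDir"

# Repeat Sonarr's write test as the unprivileged user (uid 99 = nobody)
su -s /bin/sh nobody -c "touch '$TestDir/sonarr_write_test.txt'" \
	&& echo "writable" || echo "not writable"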

Edited by privateer
Added folder permissions
Link to comment
24 minutes ago, privateer said:

I'm installing a clean copy of Sonarr and I can't link up the cloud folder that was linked to the prior version. When I attempt to add the folder I get this error:

 

[screenshot]

 

Inside Sonarr I see this:

 

2023-04-02 16:51:21.0|Trace|DiskProviderBase|Directory '/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/' isn't writable. Access to the path '/gdrive/mount_unionfs/gdrive_media_vfs/tv_shows/sonarr_write_test.txt' is denied.

 

Has anyone come across this or know of any solutions? I'm stuck and can't re-add my cloud mount...

 

The mounted folder I'm having issues with appears to be mounted with permissions 755. Not sure if that could be causing problems (e.g. whether it needs to be 777).

What are your settings for your docker?

Link to comment

Hi guys, quick question: I'm not able to see live what my upload script is uploading.

 

Is this normal? It was working before. Can someone tell me if my script is OK?

 

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="10M"
BWLimit2Time="08:00"
BWLimit2="10M"
BWLimit3Time="16:00"
BWLimit3="10M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="5" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

###############################################################################
#####    DO NOT EDIT BELOW THIS LINE UNLESS YOU KNOW WHAT YOU ARE DOING   #####
###############################################################################

####### Preparing mount location variables #######
if [[  $BackupJob == 'Y' ]]; then
	LocalFilesLocation="$LocalFilesShare"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Backup selected.  Files will be copied or synced from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
else
	LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
	echo "$(date "+%d.%m.%Y %T") INFO: *** Rclone move selected.  Files will be moved from ${LocalFilesLocation} for ${RcloneUploadRemoteName} ***"
fi

RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location of rclone mount

####### create directory for script files #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName #for script files

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script for ${RcloneUploadRemoteName} ***"
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
fi

#######  check if rclone installed  ##########
echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
	echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
	rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
	exit
fi

####### Rotating serviceaccount.json file if using Service Accounts #######
if [[ $UseServiceAccountUpload == 'Y' ]]; then
	cd /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/
	CounterNumber=$(find -name 'counter*' | cut -c 11,12)
	CounterCheck="1"
	if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
		echo "$(date "+%d.%m.%Y %T") INFO: Counter file found for ${RcloneUploadRemoteName}."
	else
		echo "$(date "+%d.%m.%Y %T") INFO: No counter file found for ${RcloneUploadRemoteName}. Creating counter_1."
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		CounterNumber="1"
	fi
	ServiceAccount="--drive-service-account-file=$ServiceAccountDirectory/$ServiceAccountFile$CounterNumber.json"
	echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote ${RcloneUploadRemoteName} to ${ServiceAccountFile}${CounterNumber}.json based on counter ${CounterNumber}."
else
	echo "$(date "+%d.%m.%Y %T") INFO: Uploading using upload remote ${RcloneUploadRemoteName}"
	ServiceAccount=""
fi

#######  Upload files  ##########

# Check bind option
if [[  $CreateBindMount == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
	if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
		echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for upload to remote ${RcloneUploadRemoteName}"
	else
		echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for upload to remote ${RcloneUploadRemoteName}"
		ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
	fi
else
	RCloneMountIP=""
fi

#  Remove --delete-empty-src-dirs if rclone sync or copy
if [[  $RcloneCommand == 'move' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload."
	DeleteEmpty="--delete-empty-src-dirs "
else
	echo "$(date "+%d.%m.%Y %T") INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload."
	DeleteEmpty=""
fi

#  Check --backup-directory
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Will backup to ${BackupRemoteLocation} and use  ${BackupRemoteDeletedLocation} as --backup-directory with ${BackupRetention} retention for ${RcloneUploadRemoteName}."
	LocalFilesLocation="$LocalFilesShare"
	BackupDir="--backup-dir $RcloneUploadRemoteName:$BackupRemoteDeletedLocation"
else
	BackupRemoteLocation=""
	BackupRemoteDeletedLocation=""
	BackupRetention=""
	BackupDir=""
fi

# process files
	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
	--user-agent="$RcloneUploadRemoteName" \
	--buffer-size 512M \
	--drive-chunk-size 512M \
	--max-transfer 725G \
	--tpslimit 3 \
	--checkers 3 \
	--transfers 3 \
	--copy-links \
	--order-by modtime,$ModSort \
	--min-age $MinimumAge \
	$Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
	--exclude *fuse_hidden* \
	--exclude *_HIDDEN \
	--exclude .recycle** \
	--exclude .Recycle.Bin/** \
	--exclude *.backup~* \
	--exclude *.partial~* \
	--bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
	--bind=$RCloneMountIP $DeleteEmpty

# Delete old files from mount
if [[  $BackupJob == 'Y' ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: *** Removing files older than ${BackupRetention} from $BackupRemoteLocation for ${RcloneUploadRemoteName}."
	rclone delete --min-age $BackupRetention $RcloneUploadRemoteName:$BackupRemoteDeletedLocation
fi

#######  Remove Control Files  ##########

# update counter and remove other control files
if [[  $UseServiceAccountUpload == 'Y' ]]; then
	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_1
		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
	else
		rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_*
		CounterNumber=$((CounterNumber+1))
		touch /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/counter_$CounterNumber
		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
	fi
else
	echo "$(date "+%d.%m.%Y %T") INFO: Not utilising service accounts."
fi

# remove dummy file
rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit

 

Link to comment
On 4/4/2023 at 1:35 PM, francrouge said:

Hi guys quick question i'm not able to see in live what my upload script is uploading.

 

Is it normal before it was working can someone tell me if my script is ok ?

 

 

 

You need -vv in your rclone command.
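
For example, a sketch of just the top of the upload command with -vv added (keep all the remaining flags exactly as they are in the script):

# process files - -vv turns on debug output so each transfer is logged live
	rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
	-vv \
	--user-agent="$RcloneUploadRemoteName" \
	--buffer-size 512M \
	--drive-chunk-size 512M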

Link to comment
