Guide: How To Use Rclone To Mount Cloud Drives And Play Files



Hello,

I've used this script before and it's awesome. Thank you for sharing and helping everyone out. I am trying to get this set up again; I had to wipe my server and start over a few months ago. I am trying to have 4 Rclone remotes, all on the same team drive, just different folders within the drive. I have Rclone set up, and I think I have the mount script right, based on an example a couple of pages back. My questions are about the upload script.

1. Do you just replicate the script for the number of mounts/remotes, basically the same as the mount script example? So for 2 mounts, would the upload script have 2 sections, specifying the remotes and everything in "Required Settings" in each individual section of the upload script?

2. How do you handle service accounts with multiple uploads? Does Rclone know when it needs to cycle to the next account, even if it has multiple "uploads"? I had it working fine before, but that was just 1 remote. Not sure if anything needs to be different for multiple remotes.

3. How does the RcloneMountIP work with multiple remotes? I was going to set my 4 mounts up with 192.168.1.221 through 192.168.1.225. I see that the Rclone_Mount script has RcloneMountIP="192.168.1.252" and VirtualIPNumber="2", and the Rclone_Upload script has RCloneMountIP="192.168.1.253" and VirtualIPNumber="1". So I'm guessing that each mount and each upload needs its own individual RCloneMountIP and VirtualIPNumber? So for 4 remotes you would need 8 different RCloneMountIP= and 8 different VirtualIPNumber= values, correct?

 

4. Is the VirtualIPNumber= similar to how Unraid deals with VLANs? Will it interfere if a VLAN has the same number? Like, if you have VLAN 3, then you would not want to use VirtualIPNumber="3"? Or are they different and it doesn't matter? Just want to make sure I understand.

Thanks again for this awesome solution.

 

3 hours ago, Megaman69 said:

1. If all your mounts are folders in the same team drive, I would have one upload remote that moves everything to the tdrive. Because rclone is doing the move, "it knows" about the transfer, so each rclone mount sees straight away that there are new files in the mount, and mergerfs is also happy.

 

2. The problem goes away: with a single upload remote, there's only one upload job cycling through the service accounts.

 

3. Yes, you need unique combos: each mount and each upload needs its own RCloneMountIP and VirtualIPNumber (see the sketch below).

 

4. They are different: VirtualIPNumber just creates a virtual interface (eth0:x) and has nothing to do with VLAN numbering.
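As a sketch of answer 3: if you did run four mount scripts and four upload scripts, each would get its own pair, e.g. (IPs borrowed from the question; the exact values are arbitrary as long as every IP/number pair is unique on your network):

# mount scripts, remotes 1-4
RCloneMountIP="192.168.1.221"  VirtualIPNumber="1"
RCloneMountIP="192.168.1.222"  VirtualIPNumber="2"
RCloneMountIP="192.168.1.223"  VirtualIPNumber="3"
RCloneMountIP="192.168.1.224"  VirtualIPNumber="4"

# upload scripts, remotes 1-4
RCloneMountIP="192.168.1.225"  VirtualIPNumber="5"
RCloneMountIP="192.168.1.226"  VirtualIPNumber="6"
RCloneMountIP="192.168.1.227"  VirtualIPNumber="7"
RCloneMountIP="192.168.1.228"  VirtualIPNumber="8"

With the single upload remote from answer 1, you would only need five pairs in total.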

I would, however, recommend creating multiple tdrives if you are going to be uploading a lot, as you are halfway there with the multiple mounts; this avoids running into future performance issues.

 

I've covered doing this before in other posts, but roughly what you do is:

 

- create a rclone mount but not a mergerfs mount for each tdrive 

- create one mergerfs mount that combines all the rclone mounts 

- one rclone upload mount that moves all the files to one of your tdrives

- run another rclone script to move the files server-side from the single tdrive to the correct tdrive, e.g. if you uploaded all your files to tdrive_movies, then move tdrive_movies/TV to tdrive_tv/TV (see the sketch below)

 

Note - you can only do this if all mounts use the same encryption passwords
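A minimal sketch of that last server-side move step (the remote names are examples, and per the note above both tdrives must use the same encryption passwords):

# example remote names; substitute your own tdrive remotes
# --drive-server-side-across-configs asks Google to move the files
# between the two drives server side, rather than downloading and
# re-uploading them through your server
rclone move tdrive_movies:TV tdrive_tv:TV \
  --drive-server-side-across-configs \
  -v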


I've had everything working for a while now but have recently started getting some permission errors. I've managed to troubleshoot a bit but just need a hand with how to fix it so I don't wreck everything!

 

Upload script runs once daily and, once finished, deletes local/movies and local/tv.

Mount script runs at 10-minute intervals and recreates local/movies and local/tv owned by root, but with read-only permissions for group and other.

In turn, that seems to change permissions on the merged folder.

I can then manually change permissions on the local folder and everything's fine until the upload script deletes the folders and the mount script makes the dirs again with the same permission problems.

 

I guess what I need to know is how to get the upload script not to delete local/movies and local/tv, or how to get the mount script to create these folders with read/write permissions?

I'm unfortunately also experiencing permissions issues after upgrading to one of the latest stable releases, after years(!) of no issues, and I'm not sure how to proceed.

 

Sonarr/Radarr are both reporting "Access to the path is denied". Any idea how to fix it?

On 5/7/2020 at 7:49 AM, teh0wner said:
I've written a small user script that reliably stops the array 100% of the time. I'm not quite sure how 'safe' it is, but I haven't noticed any issues since I've been using it.

Here it is:

 

#!/bin/bash

##########################
### fusermount Script ####
##########################

RcloneRemoteName="google_drive_encrypted_vfs"
RcloneUploadRemoteName="google_drive_encrypted_vfs" # remote name used by the upload script's checker file
RcloneMountShare="/mnt/user/data/remote_storage"
MergerfsMountShare="/mnt/user/data/merged_storage"

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_fusermount script ***"

# wait for the mount script to finish
while [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; do
    echo "$(date "+%d.%m.%Y %T") INFO: mount is running, sleeping"
    sleep 5
done

# wait for the upload script to finish
while [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running" ]]; do
    echo "$(date "+%d.%m.%Y %T") INFO: upload is running, sleeping"
    sleep 15
done

# lazy-unmount the mergerfs mount first, then the underlying rclone mount
fusermount -uz "$MergerfsMountShare/$RcloneRemoteName"
fusermount -uz "$RcloneMountShare/$RcloneRemoteName"

echo "$(date "+%d.%m.%Y %T") INFO: *** rclone_fusermount script finished ***"

exit

 

Essentially, what I'm doing is checking whether the mount script and/or the upload script is currently running.

If either of them is, the script (and stopping the array) pauses for a few seconds and tries again. Once the mount and upload have finished, it proceeds to fusermount -uz (both rclone and mergerfs), and then the array stops just fine.

I've been using this for the past week with no issues and the array always stops.

Let me know what you think if you get to use it.

  

On 5/8/2020 at 12:54 AM, teh0wner said:

That's partly one of the issues I was having: I'm not sure of the best way to check whether an upload is currently running, or whether the mount script is currently running (and attempting to mount a drive, i.e. rclone_mount).

 

I still have a problem stopping the array with the default script. Does this issue persist for anyone else?

Even with the above script, I still get a hang on the array.

 

Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1320): /usr/local/sbin/mover stop
Jun 19 09:54:55 Kiefhost root: mover: not running
Jun 19 09:54:55 Kiefhost emhttpd: Sync filesystems...
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1321): sync
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1322): umount /mnt/user0
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1323): rmdir /mnt/user0
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1324): umount /mnt/user
Jun 19 09:54:55 Kiefhost root: umount: /mnt/user: target is busy.
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1324): exit status: 32
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1325): rmdir /mnt/user
Jun 19 09:54:55 Kiefhost root: rmdir: failed to remove '/mnt/user': Device or resource busy
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1325): exit status: 1
Jun 19 09:54:55 Kiefhost emhttpd: shcmd (1327): /usr/local/sbin/update_cron
Jun 19 09:54:55 Kiefhost emhttpd: Retry unmounting user share(s)...
Jun 19 09:55:00 Kiefhost emhttpd: shcmd (1328): umount /mnt/user
Jun 19 09:55:00 Kiefhost root: umount: /mnt/user: target is busy.
Jun 19 09:55:00 Kiefhost emhttpd: shcmd (1328): exit status: 32

 

and I found this guide: https://forums.unraid.net/topic/104780-cant-stop-array-failed-to-remove-mntuser-device-or-resource-busy/?do=findComment&comment=968144

 

Added this line to fix:

umount -l /mnt/user
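Presumably this lands at the end of the stop script, after the two fusermount calls (a sketch; the original post doesn't say exactly where it was added):

fusermount -uz "$MergerfsMountShare/$RcloneRemoteName"
fusermount -uz "$RcloneMountShare/$RcloneRemoteName"
# lazily detach /mnt/user so emhttpd's "umount /mnt/user" no longer
# fails with "target is busy" when the array stops
umount -l /mnt/user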

 


I'd been experiencing permission issues since upgrading to 6.10 as well, and I think I finally fixed all the issues.

 

RCLONE PERMISSION ISSUES:

Fix 1: prior to mounting the rclone folders using User Scripts, run 'Docker Safe New Permissions' from settings for all your folders, then mount the rclone folders using the script.

 

 

I no longer recommend using the below information; running Docker Safe New Permissions should resolve most issues.

 

Fix 2: if that doesn't fix your issues, add the following bolded options to the "create rclone mount" section of the mount script, or add them to the extra parameters section; this mounts the rclone folders as user ROOT with a UMASK of 000.

# create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 000 \
    --uid 0 \
    --gid 0 \

 

Alternatively, you could mount it as user nobody with uid 99 and gid 100.
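For that variant, the same options would become (a sketch; 99:100 is Unraid's nobody:users):

# create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 000 \
    --uid 99 \
    --gid 100 \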

 

DOCKER CONTAINER PERMISSIONS ISSUES FIX (SONARR/RADARR/PLEX)

 

Fix 1: Change PUID and PGID to user ROOT 0:0 and add an environment variable for UMASK of 000 (NUCLEAR STRIKE OPTION)

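Expressed as docker CLI flags rather than the container template GUI, the change amounts to something like this (a hypothetical example; linuxserver's Sonarr image is used purely as an illustration):

# run the container as root (PUID/PGID 0:0) with a fully open umask
docker run -d --name=sonarr \
  -e PUID=0 \
  -e PGID=0 \
  -e UMASK=000 \
  lscr.io/linuxserver/sonarr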

 

Fix 2: Keep PUID and PGID at 99:100 (user nobody) and, using the User Scripts plugin, update the permissions of each docker container's paths with the following script. Change the /mnt/ path to reflect your Docker path setup, and rerun it for each container's path after changing it.

#!/bin/bash
for dir in "/mnt/cache/appdata/Sonarr/"
do
    echo "$dir"
    chmod -R ug+rw,ug+X,o-rwx "$dir"
    chown -R nobody:users "$dir"
done
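If you don't want to rerun it per path, the same loop can take several paths at once (the appdata paths here are examples; substitute your own):

#!/bin/bash
# fix ownership/permissions for several container appdata paths in one pass
for dir in "/mnt/cache/appdata/Sonarr/" "/mnt/cache/appdata/Radarr/" "/mnt/cache/appdata/plex/"
do
    echo "$dir"
    chmod -R ug+rw,ug+X,o-rwx "$dir"
    chown -R nobody:users "$dir"
done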

 

 

IMPORTANT PLEX UPDATE:

 

After running Docker Safe New Permissions, if you experience EAC3 or audio-transcoder errors where the video never starts to play, it is because your CODECS folder and/or your mapped /transcode path does not have the correct permissions.

 

To rectify this issue, stop your Plex container, navigate to your Plex appdata folder path, and delete the CODECS folder. Then navigate to your mapped /transcode folder, if you are using one, and delete that too. Restart your Plex container and Plex will redownload your codecs and recreate your mapped transcode folder with the correct permissions.
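The same steps as a shell sketch (the container name and appdata paths are examples; adjust them to your own layout):

docker stop plex
# delete the codecs cache so Plex redownloads it with correct permissions
rm -rf "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Codecs"
# delete the mapped transcode folder, if you use one
rm -rf /mnt/user/appdata/plex/transcode
docker start plex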

On 6/21/2022 at 4:53 AM, Bolagnaise said:

 

Thanks for this. I'd been experimenting with adding the 99/100 lines to my mount scripts; that didn't seem to work. I removed them and added the umask, and that alone was not enough either. So I think Docker Safe New Permissions did the trick. I've now added both the umask and the 99/100 lines to my mount scripts and it seems to work. Hopefully it stays like this.

I wonder if using 0/0 (so you mount as root) would actually cause problems, since you might not have access to those files and folders when you are not using a root account.

Anyway, I'm glad it seems to be solved now.

On 6/20/2022 at 10:53 PM, Bolagnaise said:

 


I could hug you. This solved my issues. Looks like 6.10.x has issues with rclone mounts, and this had me completely stumped.

A couple of containers won't actually let me run them as the root user, so I'll patiently wait for a fix on those.

11 hours ago, drogg said:

That's interesting. If you run

id root

what comes up?

On 7/2/2022 at 7:41 AM, DZMM said:

Interesting - I've noticed one of my mounts slowing, but not my main ones. But I haven't watched much the last couple of months, as work has been hectic.

Someone on the rclone forums has created a script that tests your connections to the Google API endpoints and updates your hosts file with the fastest one. Unfortunately, it doesn't yet work on Unraid due to differences in its Linux environment (and I'm not experienced enough to make it work).

 

https://github.com/Nebarik/mediscripts-shared/blob/main/googleapis.sh

3 hours ago, Bolagnaise said:

 

Nice! I noticed that when I ran the script, my /etc/hosts file ended up with a literal \t before the address. Did you have the same issue? If so, were you able to fix it?

10 hours ago, Akatsuki said:

 

 

On line 104, remove the \t and put in three spaces to represent the tab; it should work correctly after that:

 

hostsline="$macs  $api"

 


Another rclone forum member has posted an even better script; I'll post how to get it working for Unraid users.

I will try to lay this out as simply as possible for newer Unraid users. I'm definitely no Linux god, so it took me a while to understand.

 

Link to the original script:

https://github.com/cgomesu/mediscripts-shared/blob/main/googleapis.sh

 

1. Open a terminal window and run 

fallocate -l 50M dummythicc

 

This will create a 50MB dummy file called dummythicc in /root (use Midnight Commander to view it).

 

2. Upload the dummy file to Google Drive:

rclone copy dummythicc gdrive_vfs:/temp/


 

Please note: my crypt remote name is the same as @DZMM's (gdrive_vfs), so if you are using another crypt remote name then you will need to change this part to reflect that name.

 

To get your crypt remote name, type rclone config in the terminal.

 

3. Create a new script in the User Scripts plugin. Below is my script, which I think should work for 99% of Unraid users following DZMM's naming structure; if your crypt name is different from gdrive_vfs, update it in the script.

MAKE SURE TO REMOVE THE #!/bin/bash LINE THAT IS AUTO-CREATED WHEN YOU MAKE A NEW SCRIPT (the script below supplies its own shebang).

 

#!/usr/bin/env sh

###################################################################################
# Google Endpoint Scanner (GES)
# - Use this script to blacklist GDrive endpoints that have slow connections
# - This is done by adding one or more Google servers available at the time of
#   testing to this host's /etc/hosts file.
# - Run this script as a cronjob or any other way of automation that you feel
#   comfortable with.
###################################################################################
# Installation and usage:
# - install 'dig' and 'git';
# - in a dir of your choice, clone the repo that contains this script:
#   'git clone https://github.com/cgomesu/mediscripts-shared.git'
#   'cd mediscripts-shared/'
# - go over the non-default variables at the top of the script (e.g., REMOTE,
#   REMOTE_TEST_DIR, REMOTE_TEST_FILE, etc.) and edit them to your liking:
#   'nano googleapis.sh'
# - if you have not selected or created a dummy file to test the download
#   speed from your remote, then do so now. a file between 50MB-100MB should
#   be fine;
# - manually run the script at least once to ensure it works. using the shebang:
#   './googleapis.sh' (or 'sudo ./googleapis.sh' if not root)
#   or by calling 'sh' (or bash or whatever POSIX shell) directly:
#   'sh googleapis.sh' (or 'sudo sh googleapis.sh' if not root)
###################################################################################
# Noteworthy requirements:
# - rclone;
# - dig: in apt-based distros, install it via 'apt install dnsutils';
# - a dummy file on the remote: you can point to an existing file or create an
#                              empty one via 'fallocate -l 50M dummyfile' and
#                              then copying it to your remote.
###################################################################################
# Author: @cgomesu (this version is a rework of the original script by @Nebarik)
# Repo: https://github.com/cgomesu/mediscripts-shared
###################################################################################
# This script is POSIX shell compliant. Keep it that way.
###################################################################################

# uncomment and edit to set a custom name for the remote.
REMOTE="gdrive_vfs"
DEFAULT_REMOTE="gcrypt"

# uncomment and edit to set a custom path to a config file. Default uses
# rclone's default ("$HOME/.config/rclone/rclone.conf").
CONFIG="/boot/config/plugins/rclone/.rclone.conf"

# uncomment to set the full path to the REMOTE directory containing a test file.
REMOTE_TEST_DIR="/temp/"
DEFAULT_REMOTE_TEST_DIR="/temp/"

# uncomment to set the name of a REMOTE file to test download speed.
REMOTE_TEST_FILE="dummythicc"
DEFAULT_REMOTE_TEST_FILE="dummyfile"

# Warning: be careful where you point the LOCAL_TMP dir because this script will
# delete it automatically before exiting!
# uncomment to set the LOCAL temporary root directory.
LOCAL_TMP_ROOT="/tmp/"
DEFAULT_LOCAL_TMP_ROOT="/tmp/"

# uncomment to set the LOCAL temporary application directory.
TMP_DIR="ges/"
DEFAULT_LOCAL_TMP_DIR="ges/"

# uncomment to set a default criterion. this refers to the integer (in mebibyte/s, MiB/s) of the download
# rate reported by rclone. lower or equal values are blacklisted, while higher values are whitelisted.
# by default, script whitelists any connection that reaches any MiB/s speed above 0 (e.g., 1, 2, 3, ...).
SPEED_CRITERION=5
DEFAULT_SPEED_CRITERION=0

# uncomment to append to the hosts file ONLY THE BEST whitelisted endpoint IP to the API address (single host entry).
# by default, the script appends ALL whitelisted IPs to the host file.
USE_ONLY_BEST_ENDPOINT="true"

# uncomment to indicate the application to store blacklisted ips PERMANENTLY and use them to filter
# future runs. by default, blacklisted ips are NOT permanently stored to allow the chance that a bad server
# might become good in the future.
USE_PERMANENT_BLACKLIST="true"

PERMANENT_BLACKLIST_DIR="/root/"
DEFAULT_PERMANENT_BLACKLIST_DIR="$HOME/"
PERMANENT_BLACKLIST_FILE="blacklisted_google_ips"
DEFAULT_PERMANENT_BLACKLIST_FILE="blacklisted_google_ips"

# uncomment to set a custom API address.
CUSTOM_API="www.googleapis.com"
DEFAULT_API="www.googleapis.com"

# full path to hosts file.
HOSTS_FILE="/etc/hosts"

# do NOT edit these variables.
TEST_FILE="${REMOTE:-$DEFAULT_REMOTE}:${REMOTE_TEST_DIR:-$DEFAULT_REMOTE_TEST_DIR}${REMOTE_TEST_FILE:-$DEFAULT_REMOTE_TEST_FILE}"
API="${CUSTOM_API:-$DEFAULT_API}"
LOCAL_TMP="${LOCAL_TMP_ROOT:-$DEFAULT_LOCAL_TMP_ROOT}${TMP_DIR:-$DEFAULT_LOCAL_TMP_DIR}"
PERMANENT_BLACKLIST="${PERMANENT_BLACKLIST_DIR:-$DEFAULT_PERMANENT_BLACKLIST_DIR}${PERMANENT_BLACKLIST_FILE:-$DEFAULT_PERMANENT_BLACKLIST_FILE}"


# takes a status ($1) as arg. used to indicate whether to restore hosts file from backup or not.
cleanup () {
  # restore hosts file from backup before exiting with error
  if [ "$1" -ne 0 ] && check_root && [ -f "$HOSTS_FILE_BACKUP" ]; then
    cp "$HOSTS_FILE_BACKUP" "$HOSTS_FILE" > /dev/null 2>&1
  fi
  # append new blacklisted IPs to permanent list if using it and exiting wo error
  if [ "$1" -eq 0 ] && [ "$USE_PERMANENT_BLACKLIST" = 'true' ] && [ -f "$BLACKLIST" ]; then
    if [ -f "$PERMANENT_BLACKLIST" ]; then tee -a "$PERMANENT_BLACKLIST" < "$BLACKLIST" > /dev/null 2>&1; fi
  fi
  # remove local tmp dir and its files if the dir exists
  if [ -d "$LOCAL_TMP" ]; then
    rm -rf "$LOCAL_TMP" > /dev/null 2>&1
  fi
}

# takes msg ($1) and status ($2) as args
end () {
  cleanup "$2"
  echo '***********************************************'
  echo '* Finished Google Endpoint Scanner (GES)'
  echo "* Message: $1"
  echo '***********************************************'
  exit "$2"
}

start () {
  echo '***********************************************'
  echo '******** Google Endpoint Scanner (GES) ********'
  echo '***********************************************'
  msg "The application started on $(date)." 'INFO'
}

# takes message ($1) and level ($2) as args
msg () {
  echo "[GES] [$2] $1"
}

# checks user is root
check_root () {
  if [ "$(id -u)" -eq 0 ]; then return 0; else return 1; fi
}

# create temporary dir and files
create_local_tmp () {
  LOCAL_TMP_SPEEDRESULTS_DIR="$LOCAL_TMP""speedresults/"
  LOCAL_TMP_TESTFILE_DIR="$LOCAL_TMP""testfile/"
  mkdir -p "$LOCAL_TMP_SPEEDRESULTS_DIR" "$LOCAL_TMP_TESTFILE_DIR" > /dev/null 2>&1
  BLACKLIST="$LOCAL_TMP"'blacklist_api_ips'
  API_IPS="$LOCAL_TMP"'api_ips'
  touch "$BLACKLIST" "$API_IPS"
}

# hosts file backup
hosts_backup () {
  if [ -f "$HOSTS_FILE" ]; then
    HOSTS_FILE_BACKUP="$HOSTS_FILE"'.backup'
    if [ -f "$HOSTS_FILE_BACKUP" ]; then
      msg "Hosts backup file found. Restoring it." 'INFO'
      if ! cp "$HOSTS_FILE_BACKUP" "$HOSTS_FILE"; then return 1; fi
    else
      msg "Hosts backup file not found. Backing it up." 'WARNING'
      if ! cp "$HOSTS_FILE" "$HOSTS_FILE_BACKUP"; then return 1; fi
    fi
    return 0;
  else
    msg "The hosts file at $HOSTS_FILE does not exist." 'ERROR'
    return 1;
  fi
}

# takes a command as arg ($1)
check_command () {
  if command -v "$1" > /dev/null 2>&1; then return 0; else return 1; fi
}

# add/parse bad IPs to/from a permanent blacklist
blacklisted_ips () {
  API_IPS_PROGRESS="$LOCAL_TMP"'api-ips-progress'
  mv "$API_IPS_FRESH" "$API_IPS_PROGRESS"
  if [ -f "$PERMANENT_BLACKLIST" ]; then
    msg "Found permanent blacklist. Parsing it." 'INFO'
    while IFS= read -r line; do
      if validate_ipv4 "$line"; then
        # grep with inverted match
        grep -v "$line" "$API_IPS_PROGRESS" > "$API_IPS" 2>/dev/null
        mv "$API_IPS" "$API_IPS_PROGRESS"
      fi
    done < "$PERMANENT_BLACKLIST"
  else
    msg "Did not find a permanent blacklist at $PERMANENT_BLACKLIST. Creating a new one." 'WARNING'
    mkdir -p "$PERMANENT_BLACKLIST_DIR" 2>/dev/null
    touch "$PERMANENT_BLACKLIST" 2>/dev/null
  fi
  mv "$API_IPS_PROGRESS" "$API_IPS"
}

# ip checker that tests Google endpoints for download speed.
# takes an IP addr ($1) and its name ($2) as args.
ip_checker () {
  IP="$1"
  NAME="$2"
  HOST="$IP $NAME"
  RCLONE_LOG="$LOCAL_TMP"'rclone.log'

  echo "$HOST" | tee -a "$HOSTS_FILE" > /dev/null 2>&1
  msg "Please wait. Downloading the test file from $IP... " 'INFO'

  # rclone download command
  if check_command "rclone"; then
    if [ -n "$CONFIG" ]; then
      rclone copy --config "$CONFIG" --log-file "$RCLONE_LOG" -v "${TEST_FILE}" "$LOCAL_TMP_TESTFILE_DIR"
    else
      rclone copy --log-file "$RCLONE_LOG" -v "${TEST_FILE}" "$LOCAL_TMP_TESTFILE_DIR"
    fi
  else
    msg "Rclone is not installed or is not reachable in this user's \$PATH." 'ERROR'
    end 'Cannot continue. Fix the rclone issue and try again.' 1
  fi

  # parse log file
  if [ -f "$RCLONE_LOG" ]; then
    if grep -qi "failed" "$RCLONE_LOG"; then
      msg "Unable to connect with $IP." 'WARNING'
    else
      msg "Parsing connection with $IP." 'INFO'
      # only whitelist MiB/s connections
      if grep -qi "MiB/s" "$RCLONE_LOG"; then
        SPEED=$(grep "MiB/s" "$RCLONE_LOG" | cut -d, -f3 | cut -c 2- | cut -c -5 | tail -1)
        # use the speed criterion to decide whether to whitelist or not
        SPEED_INT="$(echo "$SPEED" | cut -f 1 -d '.')"
        if [ "$SPEED_INT" -gt "${SPEED_CRITERION:-$DEFAULT_SPEED_CRITERION}" ]; then
          # good endpoint
          msg "$SPEED MiB/s. Above criterion endpoint. Whitelisting IP '$IP'." 'INFO'
          echo "$IP" | tee -a "$LOCAL_TMP_SPEEDRESULTS_DIR$SPEED" > /dev/null
        else
          # below criterion endpoint
          msg "$SPEED MiB/s. Below criterion endpoint. Blacklisting IP '$IP'." 'INFO'
          echo "$IP" | tee -a "$BLACKLIST" > /dev/null
        fi
      elif grep -qi "KiB/s" "$RCLONE_LOG"; then
        SPEED=$(grep "KiB/s" "$RCLONE_LOG" | cut -d, -f3 | cut -c 2- | cut -c -5 | tail -1)
        msg "$SPEED KiB/s. Abnormal endpoint. Blacklisting IP '$IP'." 'WARNING'
        echo "$IP" | tee -a "$BLACKLIST" > /dev/null
      else
        # assuming it's either KiB/s or MiB/s; otherwise treat as unparseable and do nothing
        msg "Could not parse connection with IP '$IP'." 'WARNING'
      fi
    fi
    # local cleanup of tmp file and log
    rm "$LOCAL_TMP_TESTFILE_DIR${REMOTE_TEST_FILE:-$DEFAULT_REMOTE_TEST_FILE}" > /dev/null 2>&1
    rm "$RCLONE_LOG" > /dev/null 2>&1
  fi
  # restore hosts file from backup
  cp "$HOSTS_FILE_BACKUP" "$HOSTS_FILE" > /dev/null 2>&1
}

# returns the fastest IP from speedresults
fastest_host () {
  LOCAL_TMP_SPEEDRESULTS_COUNT="$LOCAL_TMP"'speedresults_count'
  ls "$LOCAL_TMP_SPEEDRESULTS_DIR" > "$LOCAL_TMP_SPEEDRESULTS_COUNT"
  MAX=$(sort -nr "$LOCAL_TMP_SPEEDRESULTS_COUNT" | head -1)
  # same speed file can contain multiple IPs, so get whatever is at the top
  MACS=$(head -1 "$LOCAL_TMP_SPEEDRESULTS_DIR$MAX" 2>/dev/null)
  echo "$MACS"
}

# takes an address as arg ($1)
validate_ipv4 () {
  # lack of match in grep should return an exit code other than 0
  if echo "$1" | grep -oE "[[:digit:]]{1,3}.[[:digit:]]{1,3}.[[:digit:]]{1,3}.[[:digit:]]{1,3}" > /dev/null 2>&1; then
    return 0
  else
    return 1
  fi
}

# parse results and append only the best whitelisted IP to hosts
append_best_whitelisted_ip () {
  BEST_IP=$(fastest_host)
  if validate_ipv4 "$BEST_IP"; then
    msg "The fastest IP is $BEST_IP. Putting into the hosts file." 'INFO'
    echo "$BEST_IP $API" | tee -a "$HOSTS_FILE" > /dev/null 2>&1
  else
    msg "The selected '$BEST_IP' address is not a valid IP number." 'ERROR'
    end "Unable to find the best IP address. Original hosts file will be restored." 1
  fi
}

# parse results and append all whitelisted IPs to hosts
append_all_whitelisted_ips () {
  for file in "$LOCAL_TMP_SPEEDRESULTS_DIR"*; do
    if [ -f "$file" ]; then
      # same speed file can contain multiple IPs
      while IFS= read -r line; do
        WHITELISTED_IP="$line"
        if validate_ipv4 "$WHITELISTED_IP"; then
          msg "The whitelisted IP '$WHITELISTED_IP' will be added to the hosts file." 'INFO'
          echo "$WHITELISTED_IP $API" | tee -a "$HOSTS_FILE" > /dev/null 2>&1
        else
          msg "The whitelisted IP '$WHITELISTED_IP' address is not a valid IP number. Skipping it." 'WARNING'
        fi
      done < "$file"
    else
      msg "Did not find any whitelisted IP at '$LOCAL_TMP_SPEEDRESULTS_DIR'." 'ERROR'
      end "Unable to find whitelisted IP addresses. Original hosts file will be restored." 1
    fi
  done
}

############
# main logic
start

trap "end 'Received a signal to stop' 1" INT HUP TERM

# need root permission to write hosts
if ! check_root; then end "User is not root but this script needs root permission. Run as root or append 'sudo'." 1; fi

# prepare local files
create_local_tmp
if ! hosts_backup; then end "Unable to backup the hosts file. Check its path and continue." 1; fi

# prepare remote file
# TODO: (cgomesu) add function to allocate a dummy file in the remote

# start running test
if check_command "dig"; then
  # redirect dig output to tmp file to be parsed later
  API_IPS_FRESH="$LOCAL_TMP"'api-ips-fresh'
  dig +answer "$API" +short 1> "$API_IPS_FRESH" 2>/dev/null
else
  msg "The command 'dig' is not installed or not reachable in this user's \$PATH." 'ERROR'
  end "Install dig or make sure its executable is reachable, then try again." 1
fi

if [ "$USE_PERMANENT_BLACKLIST" = 'true' ]; then
  # bad IPs are permanently blacklisted
  blacklisted_ips
else
  # bad IPs are blacklisted on a per-run basis
  mv "$API_IPS_FRESH" "$API_IPS"
fi

while IFS= read -r line; do
  # checking each ip in API_IPS
  if validate_ipv4 "$line"; then ip_checker "$line" "$API"; fi
done < "$API_IPS"

# parse whitelisted IPs and edit hosts file accordingly
if [ "$USE_ONLY_BEST_ENDPOINT" = 'true' ]; then
  append_best_whitelisted_ip
else
  append_all_whitelisted_ips
fi

# end the script wo errors
end "Reached EOF without errors" 0

 

 

4. Before running the script, you need to ensure your /etc/hosts file is 'clean', i.e. it doesn't contain any references to www.googleapis.com.

 

An unclean hosts file has an existing 'IP www.googleapis.com' entry in it.

 

To edit out the Google host entry, use Midnight Commander or the Config File Editor plugin from CA:

 

mc
# navigate to /etc/hosts
# press F4 to edit
# delete the 'IP www.googleapis.com' line
# press F2 to save

 

5. Press 'Run in Background' in User Scripts and let the script do its work.

 

Your /etc/hosts file will now be updated to reflect the fastest server.
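The appended entry looks something like this (the IP is taken from the example logs further down the thread; yours will differ):

142.251.41.74 www.googleapis.com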

 

 

6. Set the script to run at an interval you feel comfortable with; daily will probably suffice. Google may change their IPs, and that would mean you would lose connection to the Google Drive servers until the script runs again.

I've set mine to run at 6am daily; use crontab guru to set your own custom times, or use the built-in ones.

https://crontab.guru
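For reference, the custom cron expression for a daily 6am run is:

# minute hour day-of-month month day-of-week
0 6 * * *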

 

7. Once you have run the script once, we can enhance the testing to get more accurate results for the fastest Google server.

The reason you would want to do this is that we initially only test by downloading a 50MB file, and from user reports on the rclone forums this may not give the most accurate results. It does, however, let the script quickly identify the 'bad' Google Drive servers and add them to its permanent blacklist file (/root/blacklisted_google_ips) for future reference, so it doesn't test them again.

 

To enhance testing:

 

a) Using Midnight Commander (mc), navigate to /root and delete 'dummythicc' by pressing F8.

 

b) Using mc, navigate to your /mnt/user/mount_rclone user share and delete the /temp/ folder.

 

c) Run the below to create a 200MB file and re-upload it:

fallocate -l 200M dummythicc
rclone copy dummythicc gdrive_vfs:/temp/

 

d) Rerun the script from User Scripts for more accurate results.

 

 

Please let me know if I have missed anything or if you need help!


Some test from Canada this morning:

 

***********************************************
******** Google Endpoint Scanner (GES) ********
***********************************************
[GES] [INFO] The application started on Thu Jul  7 10:02:04 EDT 2022.
[GES] [WARNING] Hosts backup file not found. Backing it up.
[GES] [WARNING] Did not find a permanent blacklist at /root/blacklisted_google_ips. Creating a new one.
[GES] [INFO] Please wait. Downloading the test file from 142.251.41.42...
[GES] [INFO] Parsing connection with 142.251.41.42.
[GES] [INFO] 1.538 MiB/s. Below criterion endpoint. Blacklisting IP '142.251.41.42'.
[GES] [INFO] Please wait. Downloading the test file from 142.251.32.74...
[GES] [INFO] Parsing connection with 142.251.32.74.
[GES] [INFO] 8.334 MiB/s. Above criterion endpoint. Whitelisting IP '142.251.32.74'.
[GES] [INFO] Please wait. Downloading the test file from 172.217.1.10...
[GES] [INFO] Parsing connection with 172.217.1.10.
[GES] [INFO] 1.564 MiB/s. Below criterion endpoint. Blacklisting IP '172.217.1.10'.
[GES] [INFO] Please wait. Downloading the test file from 172.217.165.10...
[GES] [INFO] Parsing connection with 172.217.165.10.
[GES] [INFO] 1.545 MiB/s. Below criterion endpoint. Blacklisting IP '172.217.165.10'.
[GES] [INFO] Please wait. Downloading the test file from 142.251.41.74...
[GES] [INFO] Parsing connection with 142.251.41.74.
[GES] [INFO] 10.32 MiB/s. Above criterion endpoint. Whitelisting IP '142.251.41.74'.
[GES] [INFO] Please wait. Downloading the test file from 142.251.33.170...
[GES] [INFO] Parsing connection with 142.251.33.170.
[GES] [INFO] 1.566 MiB/s. Below criterion endpoint. Blacklisting IP '142.251.33.170'.
[GES] [INFO] The fastest IP is 142.251.41.74. Putting into the hosts file.
***********************************************
* Finished Google Endpoint Scanner (GES)
* Message: Reached EOF without errors
***********************************************
Script Finished Jul 07, 2022  10:04.31

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Gdrive_api/log.txt

Script Starting Jul 07, 2022  10:11.25

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Gdrive_api/log.txt

***********************************************
******** Google Endpoint Scanner (GES) ********
***********************************************
[GES] [INFO] The application started on Thu Jul  7 10:11:25 EDT 2022.
[GES] [INFO] Hosts backup file found. Restoring it.
[GES] [INFO] Found permanent blacklist. Parsing it.
[GES] [INFO] Please wait. Downloading the test file from 142.251.32.74...
[GES] [INFO] Parsing connection with 142.251.32.74.
[GES] [INFO] 33.19 MiB/s. Above criterion endpoint. Whitelisting IP '142.251.32.74'.
[GES] [INFO] Please wait. Downloading the test file from 142.251.41.74...
[GES] [INFO] Parsing connection with 142.251.41.74.
[GES] [INFO] 39.40 MiB/s. Above criterion endpoint. Whitelisting IP '142.251.41.74'.
[GES] [INFO] The fastest IP is 142.251.41.74. Putting into the hosts file.
***********************************************
* Finished Google Endpoint Scanner (GES)
* Message: Reached EOF without errors
***********************************************
Script Finished Jul 07, 2022  10:11.39

 

So initially it was the 50MB file, then the 200MB file (at 10:11.25). I've been having some buffering the past couple of weeks.


The above was on unRAID 6.10.3. Another server I have on 6.9.3 doesn't have dig (it's not in Nerd Tools either), so the GES script won't run there (in case anyone tries on the 6.9.x release).

 

***********************************************
******** Google Endpoint Scanner (GES) ********
***********************************************
[GES] [INFO] The application started on Thu Jul 7 10:22:21 EDT 2022.
[GES] [WARNING] Hosts backup file not found. Backing it up.
[GES] [ERROR] The command 'dig' is not installed or not reachable in this user's $PATH.
***********************************************
* Finished Google Endpoint Scanner (GES)
* Message: Install dig or make sure its executable is reachable, then try again.
***********************************************
Script Finished Jul 07, 2022 10:22.21

 

13 hours ago, live4ever said:

 

Yep, nice. Just be aware that my script uses a permanent blacklist, which means that if those 2 remaining IPs ever become slow (which it seems they can, as some servers become slow at random), then you will not be able to connect to the Google API. If this happens, disable the permanent blacklist by commenting out the USE_PERMANENT_BLACKLIST variable.


Hello,

Thanks for your script, DZMM, and thanks to everyone here who has discussed these issues.

 

I managed to mount my Google shared drive and have the Plex docker read from my mergerfs folder successfully.

 

However, I couldn't figure out why no files were being uploaded to the drive.

 

These are the environment details and what's involved:

- I don't use Sonarr, Radarr, etc., but use j2downloader (due to the language of my sources).

- I don't encrypt files, since I want to read them directly from the Google Drive web interface whenever and wherever.

 

The procedure is: j2downloader downloads all files to a folder like /mnt/user/downloads/j2downloader, then I move them manually to my mergerfs folder, which is /mnt/user/mount_mergerfs/gdrive/Media/Movies (or /TV).

(I do this because I don't want the script to upload unsuccessfully downloaded files, which might hang forever; I move files manually after using tinyMediaManager to do the work Radarr/Sonarr would do, and also because they do it worse than I can manually.)

 

What I found in the upload script log file is:

Script Starting Jun 16, 2022  00:20.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone-upload/log.txt

16.06.2022 00:20:01 INFO: *** Rclone move selected.  Files will be moved from /mnt/user/mount_rclone_local/gdrive for gdrive ***
16.06.2022 00:20:01 INFO: *** Starting rclone_upload script for gdrive ***
16.06.2022 00:20:01 INFO: Exiting as script already running.
Script Finished Jun 16, 2022  00:20.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone-upload/log.txt

Script Starting Jun 16, 2022  00:30.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone-upload/log.txt

16.06.2022 00:30:01 INFO: *** Rclone move selected.  Files will be moved from /mnt/user/mount_rclone_local/gdrive for gdrive ***
16.06.2022 00:30:01 INFO: *** Starting rclone_upload script for gdrive ***
16.06.2022 00:30:01 INFO: Exiting as script already running.
Script Finished Jun 16, 2022  00:30.01

 

and so on. I don't know how to track what is going on, or why the new files I move in don't get uploaded.
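If I read the upload script correctly, "Exiting as script already running" means a checker file from an earlier run is still present. A first diagnostic step might be something like this (a sketch; the checker-file location is assumed from the fusermount script earlier in the thread, and the exact file name from my JobName setting):

# look for a leftover checker file from an interrupted upload run
ls -l /mnt/user/appdata/other/rclone/remotes/gdrive/

# confirm no rclone move is actually running
pgrep -af "rclone move"

# if nothing is running, removing the stale checker file should let the
# next scheduled upload proceed (file name assumed, not verified)
rm /mnt/user/appdata/other/rclone/remotes/gdrive/upload_running_daily_upload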

 

Below are my mount (working) and upload scripts:

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/mount_rclone_local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

 

 

 

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/mount_rclone_local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="0"
BWLimit2Time="08:00"
BWLimit2="3M"
BWLimit3Time="16:00"
BWLimit3="3M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1="--exclude downloads/**"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="N" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/binhex-rclone/services_account" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="9" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="backup_deleted" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

####### END SETTINGS #######

 

 

In summary: mergerfs is being read successfully, and I want the files I move into mergerfs manually to get uploaded to my shared drive.

 

I used to have a bare-metal server running Cloudbox with all of this working (Plex reading from Google Drive, and a watch folder that uploaded everything moved into it, whether manually or automatically). However, my ISP keeps blocking port 80, so Cloudbox can't renew Let's Encrypt. I moved to this setup, but have had no success uploading.

 

Please help me figure out how to trace the issue and solve it.

Thank you


Hello, dear friends. For about an hour now I have not had any access to my directory via Plex. I have not changed anything.

I get the following error message (shortened):

 

(...)

2022/07/19 00:27:48 DEBUG : gdrive: Loaded invalid token from config file - ignoring
2022/07/19 00:27:48 DEBUG : gdrive: Token refresh failed try 1/5: oauth2: cannot fetch token: 400 Bad Request
Response: {
  "error": "invalid_grant",
  "error_description": "Token has been expired or revoked."
}
2022/07/19 00:27:49 DEBUG : gdrive: Loaded invalid token from config file - ignoring
2022/07/19 00:27:49 DEBUG : gdrive: Token refresh failed try 2/5: oauth2: cannot fetch token: 400 Bad Request
Response: {
  "error": "invalid_grant",
  "error_description": "Token has been expired or revoked."
}
2022/07/19 00:27:50 DEBUG : gdrive: Loaded invalid token from config file - ignoring
2022/07/19 00:27:50 DEBUG : gdrive: Token refresh failed try 3/5: oauth2: cannot fetch token: 400 Bad Request
Response: {
  "error": "invalid_grant",
  "error_description": "Token has been expired or revoked."

 

(...)


Update: RESOLVED today

 

Thank you very much for your kind support.

I used this command:

 

"rclone config reconnect gdrive:"

 

There I got a code ("CODE") for creating a new token. I did not close the window, because another code is needed for the next step.

 

On my iMac I installed rclone (https://www.youtube.com/watch?v=xfRjo2q0xkw) and entered this command:

 

"./rclone authorize "drive" "CODE"

 

I then received another code, which I entered in the other window.

