Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


First, thank you for everything you've done with these scripts. Maybe I'm doing something wrong, but I believe I have everything configured correctly. I run the script with my edits and can see my Google Drive folder in the Shares section, but it's not decrypted. When I run "rclone lsd gdrive-media-crypt:" in the terminal, everything is there and decrypted, so I know it's not a password thing. Is the script pulling a different rclone.conf file, or am I just missing something or doing something wrong? Any help would be greatly appreciated.

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive-media-crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/gmedia-cloud" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="1000h" # rclone dir cache time
LocalFilesShare="/mnt/user/plex-pool" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="150G" # Maximum size of rclone cache
RcloneCacheMaxAge="12h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/gmedia" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="sabnzbd plex sonarr radarr overseerr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"movies,tv shows,kid shows,sports"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2="vfs-cache-mode=full"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

 

11 hours ago, MonsterBandit04 said:

First, thank you for everything you've done with these scripts. [...] Is the script pulling a different rclone.conf file, or am I just missing something or doing something wrong? [...]

[mount script settings as posted above]

Quote

Command2="vfs-cache-mode=full"

 

There is your problem. It's missing "--". But I don't know why you are adding it, since it's already part of the default mount scripts from DZMM?
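In other words, each Command slot has to hold a complete rclone flag. If you keep it at all, it needs the leading dashes:

Command2="--vfs-cache-mode=full"

Though since the mount script already passes --vfs-cache-mode full, the simplest fix is to leave Command2 empty.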


So this is over now? I got an email from gdrive that states that I have run out of storage space and all my files will be put in read-only in 60 days. I set up a business account and pay for it, and it has worked great for a couple of years now. I only have 15.6TB on my gdrive and it is full. Can I do something to fix this, or do I just bite the bullet and move on?

[screenshot of the Google Drive storage notice]

 

Seems indeed this will be enforced on everyone eventually:

https://blog.muffn.io/posts/unlimited-plex-storage-via-google-drive-and-rclone/

 

This is the new price for me for 10TB haha:

[screenshot of the new pricing]

21 minutes ago, Michel Amberg said:

So this is over now? I got an email from gdrive that states that I have run out of storage space [...] Can I do something to fix this, or do I just bite the bullet and move on? [...]

Love that guide by Muffin. It was simple enough for an idiot like me to follow. And yeah, it appears to be over. I have 112TB sitting there at the moment. Long story, but as soon as I figure out this script issue I'm having, I can start the copy to Dropbox. I have a buddy who has over 300TB up there and just dropped a ridiculous amount to have 200TB of local storage, and he will just let Drive go into read-only.

28 minutes ago, Michel Amberg said:

So this is over now? [...] Can I do something to fix this, or do I just bite the bullet and move on?

 

I've posted this before:

 

They already started limiting new Workspace accounts a few months ago to just 5TB per user. But recently they also started limiting older Workspace accounts with fewer than 5 users, sending them a message that they either had to pay for the used storage or the account would go into read-only within 30 days. Paying would be ridiculous, because it would be thousands per month. And even when you purchase more users, so you get the minimum of 5, there is no guarantee you will actually get your storage limit lifted. People would have to chat with support to request 5TB of additional storage, and would even be asked to explain the necessity; support often refused the request.

 

So yes, not much you can do at this point. And with 16TB, you're better off just buying the drives yourself. If it's for backup purposes, then you can look at Backblaze and other offerings. Don't expect to stream media from those, though. You'll need Dropbox Advanced for that, with 3 accounts minimum.

37 minutes ago, Kaizac said:

Post your rclone config, but anonymize the important stuff before posting.

 

To be clear: when you go to both /mnt/user/gmedia and /mnt/user/gmedia-cloud, you get shown encrypted files?

[gdrive-media]
type = drive
client_id = xxxxxxxxxxxx
client_secret = xxxxxxxxxx
scope = drive
root_folder_id = XXXXXXXXXXXXXX
token = XXXXXXXXXXXX
team_drive = 

[gdrive-media-crypt]
type = crypt
remote = gdrive-media:/media
password = xxxxxxxxxx
password2 = xxxxxxxxxxx

[dropbox-media]
type = dropbox
client_id = xxxxxxxx
client_secret = xxxxxxxxxxxx
token = xxxxxxxxxxx

[dropbox-media-crypt]
type = crypt
remote = dropbox-media:/media
password = xxxxxxxxxxxxx
password2 = xxxxxxxxxxxxx

Yes, when I go to /mnt/user/gmedia and /mnt/user/gmedia-cloud I see encrypted names, not 'Movies', 'TV Shows', etc. But if I run rclone lsd gdrive-media-crypt: it's decrypted.

28 minutes ago, Kaizac said:

I've posted this before: [...]

Yeah, Dropbox Advanced at $90/month is better than just about any other offering out there at the moment.

13 minutes ago, MonsterBandit04 said:
[rclone config as posted above]

Yes, when I go to /mnt/user/gmedia and /mnt/user/gmedia-cloud I see encrypted names [...]

For the Google Drive config, remove the "/" in the crypt remote path: just gdrive-media:media. After that, reboot and run the mount script again and see if that helps.
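That is, the crypt section in rclone.conf would become (sketch based on the config above):

[gdrive-media-crypt]
type = crypt
remote = gdrive-media:media
password = xxxxxxxxxx
password2 = xxxxxxxxxxx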

20 minutes ago, MonsterBandit04 said:

Well, that worked for all of 2 seconds, then Docker didn't mount. Second reboot, back to not seeing anything at all. So lost.

 

What do you mean by 'Docker didn't mount'?

 

You could also try my simple mount script from 1 or 2 pages back. Just mount the cloud share first and see if that works. Only then continue with the merger part.
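For reference, a bare-bones mount along those lines might look something like this (remote name and paths taken from the posts above; flags trimmed to a minimum, so this is a sketch rather than Kaizac's exact script):

#!/bin/bash
# mount only the cloud remote, no mergerfs, to isolate the problem
mkdir -p /mnt/user/gmedia-cloud
rclone mount \
    --allow-other \
    --dir-cache-time 1000h \
    --vfs-cache-mode full \
    gdrive-media-crypt: /mnt/user/gmedia-cloud &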

5 hours ago, Kaizac said:

 

What do you mean by 'Docker didn't mount'? [...] Just mount the cloud share first and see if that works. [...]

Gonna try a new config for Unraid and start there, now that I understand how rclone works in Unraid (I'm used to Ubuntu and followed the Muffin tutorial)... will report back soon.

 

UPDATE: Well, I did a new config, switched around my drives a little bit, kept the "gdrive-media:media" in the rclone config, and yahtzee! We are in business. Thank you guys for all the help, it is truly appreciated.

On 7/5/2023 at 10:41 PM, Kaizac said:

 

Debrid is not personal cloud storage. It allows you to download torrent files on their servers; often the torrents have already been downloaded by other members. It also gives premium access to a lot of file hosters. So for media consumption you can use certain programs, like add-ons for Kodi or Stremio. With Stremio you install Torrentio, set up your Debrid account, and you have all the media available to you in specific files/formats/sizes/languages. Having your own media library is pretty pointless with this, unless you're a real connoisseur and want very specific formats and audio codecs. It also isn't great for non-mainstream audio languages, so you could host those locally when needed.

 

I still got my library with both Plex and Emby lifetime, but I almost never use it anymore.

Thanks, I don't think I will get into torrents, and I'm using Plex, which is good. Plus I have a good percentage of non-English-language content, so the only option is to buy the drives, since $90 for Dropbox is too expensive for me.


Back... so I got the script to work as far as rclone is concerned, thank you for that. Any help on getting mergerfs working would be huge as well. I have Sonarr working (Radarr doesn't see the same gmedia folder that Sonarr can see, so I can't get my movies imported into Radarr; I get a huge error message when I go to import). I think I've got things configured correctly as far as the "arrs" and my downloader are concerned, because I can download, but I can't get the "arrs" to process the renaming and move files to the correct folders. Thoughts, anyone?

2 weeks later...

So I have also recently received the email notice that my drive will change to read-only due to being over the storage limit. It has been a great service over the last few years considering the cost, but I knew this day would come. Luckily I have been building up my local storage in this time.

 

What is the best way for me to start copying files from gdrive to local storage? Should I just use rsync and copy from the rclone mounts to the relevant local folder?

 

Thanks


So I've been using the upload script for a while, but I'm trying to optimise it for Dropbox now. Do I need to take off the "--" on these for it to work?

 

# process files
    rclone $RcloneCommand $LocalFilesLocation $RcloneUploadRemoteName:$BackupRemoteLocation $ServiceAccount $BackupDir \
    --user-agent="$RcloneUploadRemoteName" \
    -vv \
    --dropbox-batch-mode sync
    --buffer-size 5G \
    --drive-chunk-size 128M \
    --tpslimit 8 \
    --checkers 8 \
    --transfers 32 \
    --order-by modtime,$ModSort \
    --min-age $MinimumAge \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --exclude *fuse_hidden* \
    --exclude *_HIDDEN \
    --exclude .recycle** \
    --exclude .Recycle.Bin/** \
    --exclude *.backup~* \
    --exclude *.partial~* \
    --drive-stop-on-upload-limit \
    --bwlimit "${BWLimit1Time},${BWLimit1} ${BWLimit2Time},${BWLimit2} ${BWLimit3Time},${BWLimit3}" \
    --bind=$RCloneMountIP $DeleteEmpty

 

Thanks

14 hours ago, KeyBoardDabbler said:

So I have also recently received the email notice that my drive will change to read-only [...] What is the best way for me to start copying files from gdrive to local storage? Should I just use rsync and copy from the rclone mounts to the relevant local folder?

Just rclone copy back from the mount to your local share. I would advise using the user0 path to bypass cache.
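As a sketch (the destination path here is illustrative; the point is to copy from the crypt remote straight to a /mnt/user0/... path so writes bypass the cache drive):

rclone copy gdrive-media-crypt: /mnt/user0/plex-pool/restore \
    --transfers 4 \
    --checkers 8 \
    -vv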

 

1 hour ago, fzligerzronz said:

So I've been using the upload script for a while, but I'm trying to optimise it for Dropbox now. Do I need to take off the "--" on these for it to work? [...]

[upload script as posted above]

No, you need to add some more "\" line continuations after each flag. The Dropbox batch line is missing one at least.
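i.e. every continued line needs a trailing backslash, so that line should read:

    --dropbox-batch-mode sync \

Without it, the shell treats the flags on the following lines as a separate command.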

6 hours ago, fzligerzronz said:

Thanks. Upload speed restored to full speed!

Just a heads up for you and others considering moving to Dropbox. As of 1 or 2 days ago, Dropbox adopted a policy of only granting 1TB of additional storage per user per month. So getting storage allocated beforehand and then increasing it rapidly for a migration will be impossible, unless you are very lucky with the support rep you get. Dropbox can't handle the big influx of Google refugees.

 

But there have also been strong rumors that Dropbox is moving to the same offering as Google, limiting storage to 10TB per user. So be warned that you might end up in the same situation as now with Google.

9 hours ago, Kaizac said:

Just rclone copy back from the mount to your local share. I would advise using the user0 path to bypass cache.

 

I just tried running the command below. It did copy the correct files from the shared drive to my local directory, but the folder/file names are now obscured in the local folder. Is it possible to remove the crypt on copy?

 

#!/bin/bash

# Relocate moviesclasic > local
rclone copy tdrivesportsppv:crypt/6uub5b0iurd8j74m2dddn77dcs /mnt/user0/sports-ppv \
--user-agent="transfer" \
-vv \
--buffer-size 512M \
--drive-chunk-size 512M \
--tpslimit 8 \
--checkers 8 \
--transfers 4 \
--order-by modtime,ascending \
--exclude *fuse_hidden* \
--exclude *_HIDDEN \
--exclude .recycle** \
--exclude .Recycle.Bin/** \
--exclude *.backup~* \
--exclude *.partial~* \
--drive-stop-on-upload-limit 

exit

 

I tried the built-in file explorer via the Unraid web UI and copied some files from "/mnt/user/mount_rclone/tdrive_sports_ppv_vfs/sports_ppv" to "/mnt/user0/sports-ppv". This worked as expected, but I don't think this is the best method to transfer large folders.

11 minutes ago, KeyBoardDabbler said:

 

I just tried running the command below. It did copy the correct files from the shared drive to my local directory, but the folder/file names are now obscured in the local folder. Is it possible to remove the crypt on copy? [...]

[copy script as posted above]
 

Transferring from within your folder structure is more risky. You won't get the Google Drive feedback signals, and if your mount drops its connection the transfer might also corrupt.

 

You need to change --drive-stop-on-upload-limit to --drive-stop-on-download-limit. You're not uploading but downloading, which has a 10TB limit per day. That's not something you can hit with a gigabit connection, so it's not really needed anyway.
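i.e. the last flag in the script above becomes:

    --drive-stop-on-download-limit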

 

Regarding encrypted data: when you transfer from the crypt mount to your local storage, it should be the decrypted data. I'm doing that as we speak. But your rclone mount and folder structure is strange to me. It seems you are copying from your regular mount, since you use an encrypted folder name. You need to copy from your crypt mount. So let's say you have gdrive: as a regular mount and your crypt, named gdrive_crypt:, is pointed at gdrive:. Then you would transfer from gdrive_crypt: to your local storage.
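A sketch of the difference, using the hypothetical gdrive: / gdrive_crypt: names from above (the decrypted folder name is also a guess):

# wrong for this purpose: copies the raw encrypted blobs
rclone copy gdrive:crypt/6uub5b0iurd8j74m2dddn77dcs /mnt/user0/sports-ppv

# copies via the crypt remote, so names and contents come out decrypted
rclone copy gdrive_crypt:sports_ppv /mnt/user0/sports-ppv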


Hello guys.
Can anyone explain how I can run multiple encrypted VFS mounts at the same time?
e.g. google_vfs and onedrive_vfs
inside the mount_mergerfs folder.

I tried duplicating the mount script, but it throws this error:
Failed to start remote control: failed to init server: listen tcp 127.0.0.1:5572: bind: address already in use

My script 
 


#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="2720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="2048G" # Maximum size of rclone cache
RcloneCacheMaxAge="2720h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="jellyfin" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.223.151" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
#####   DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING   #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[  $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
    LocalFilesLocation="/tmp/$RcloneRemoteName"
    eval mkdir -p $LocalFilesLocation
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    mkdir -p $MergerFSMountLocation
fi


#######  Check if script is already running  #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity.  Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi

#######  Create Rclone Mount  #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
# Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
# Check bind option
    if [[  $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
# create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --umask 000 \
    --dir-cache-time $RcloneMountDirCacheTime \
    --attr-timeout $RcloneMountDirCacheTime \
    --log-level INFO \
    --poll-interval 10s \
    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --drive-pacer-min-sleep 10ms \
    --drive-pacer-burst 1000 \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --vfs-read-ahead 1G \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

# Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
# slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems.  Stopping dockers"
        docker stop $DockerStart
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[  $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
# check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
# Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
# check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully.  Please check for errors.  Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
# Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
# Extra Mergerfs folders
        if [[  $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[  $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[  $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
# make sure mergerfs mount point is empty
        mv $MergerFSMountLocation $LocalFilesLocation
        mkdir -p $MergerFSMountLocation
# mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
# check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed.  Stopping dockers."
            docker stop $DockerStart
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
# Check CA Appdata plugin not backing up or restoring
    if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
        echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    else
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
        docker start $DockerStart
    fi
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit
 

 



11 minutes ago, aesthetic-barrage8546 said:

Can anyone explain how I can run multiple encrypted VFS mounts at the same time? [...] I tried duplicating the mount script, but it throws this error:
Failed to start remote control: failed to init server: listen tcp 127.0.0.1:5572: bind: address already in use

[mount script as posted above]

Try this in the terminal:

sudo lsof -t -i:5572
sudo kill -9 $(sudo lsof -t -i:5572)

2 minutes ago, Klyver said:

Try this in the terminal: [...]

I mean I can't run two scripts on the same port at the same time.
I need some help mapping my scripts to different ports, or merging two rclone mounts into one script.

For now I've added the custom command --rc-addr :5573 and everything seems to have started, but I haven't tested it yet.
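For anyone else hitting this, a sketch of that approach: give each mount script its own rc address via the Command slots (the exact ports are arbitrary, and the remote names are just this thread's examples):

# script 1, e.g. gdrive_media_vfs
Command1="--rc"
Command2="--rc-addr=127.0.0.1:5572"

# script 2, e.g. onedrive_vfs
Command1="--rc"
Command2="--rc-addr=127.0.0.1:5573"

Alternatively, leave Command1 empty in the second script if you don't need the remote control interface on that mount.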

