Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

I set up service accounts and screwed everything up.

Output/logs running the “Mount” script—
 

 03.02.2022 12:48:58 INFO: Creating local folders.
 03.02.2022 12:48:58 INFO: Creating MergerFS folders.
 03.02.2022 12:48:58 INFO: *** Starting mount of remote gdrive_vfs
 03.02.2022 12:48:58 INFO: Checking if this script is already running.
 03.02.2022 12:48:58 INFO: Script not running - proceeding.
 03.02.2022 12:48:58 INFO: *** Checking if online
 03.02.2022 12:48:59 PASSED: *** Internet online
 03.02.2022 12:48:59 INFO: Mount not running. Will now mount gdrive_vfs remote.
 03.02.2022 12:48:59 INFO: Recreating mountcheck file for gdrive_vfs remote.
 2022/02/03 12:48:59 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "gdrive_vfs:" "-vv" "--no-traverse"]
 2022/02/03 12:48:59 DEBUG : Creating backend with remote "mountcheck"
 2022/02/03 12:48:59 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
 2022/02/03 12:48:59 DEBUG : fs cache: adding new entry for parent of "mountcheck", "/usr/local/emhttp"
 2022/02/03 12:48:59 DEBUG : Creating backend with remote "gdrive_vfs:"
 2022/02/03 12:48:59 DEBUG : Creating backend with remote "gdrive:crypt"
 2022/02/03 12:48:59 Failed to create file system for "gdrive_vfs:": failed to make remote "gdrive:crypt" to wrap: drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json: no such file or directory
 03.02.2022 12:48:59 INFO: *** Checking if IP address 192.168.1.252 already created for remote gdrive_vfs
 03.02.2022 12:49:00 INFO: *** IP address 192.168.1.252 already created for remote gdrive_vfs
 03.02.2022 12:49:00 INFO: *** Created bind mount 192.168.1.252 for remote gdrive_vfs
 03.02.2022 12:49:00 INFO: sleeping for 5 seconds
 2022/02/03 12:49:00 Failed to create file system for "gdrive_vfs:": failed to make remote "gdrive:crypt" to wrap: drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json: no such file or directory
 03.02.2022 12:49:05 INFO: continuing...
 03.02.2022 12:49:05 CRITICAL: gdrive_vfs mount failed - please check for problems. Stopping dockers: plex radarr sonarr binhex-readarr tautulli prowlarr lidarr binhex-readarr-audible binhex-qbittorrentvpn nzbget
 Script finished Feb 03, 2022 12:49.05
 



My rclone config, if you think it would be helpful:
 

 [gdrive]
 type = drive
 scope = drive
 service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json
 team_drive = 
 server_side_across_configs = true
 
 [gdrive_vfs]
 type = crypt
 remote = gdrive:crypt
 filename_encryption = standard
 password = some-password
 password2 = some-password
 directory_name_encryption = true
 



I went through all the steps in the “Follow steps 1-4” link as well as I could understand them. Everything was working fine aside from those errors; then I set up SAs and my damned power went out. After the reboot, I get what you see above.
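The two `Failed to create file system` lines in the log both point at the same thing: the JSON at the configured `service_account_file` path no longer exists. A quick sanity check before re-running the mount script - the path is taken from the log, the helper name is my own:

```shell
# check_sa: report whether the service-account JSON rclone is configured
# to use actually exists at the given path.
check_sa() {
    if [ -f "$1" ]; then
        echo "OK: $1"
    else
        echo "MISSING: $1 - recreate it or fix service_account_file in .rclone.conf"
    fi
}

check_sa "/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json"
```

If it reports MISSING, the SA JSON files need to be copied back into that folder (or `service_account_file` pointed at wherever they actually live) before the mount can succeed.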

Edited by Raneydazed
mobile copy pasta sucks butts
Link to comment
On 2/2/2022 at 4:19 AM, DZMM said:

right at the end.....all looks good????

So per my upload script, it says rclone isn’t running, which I believe could be the culprit. Not sure what’s wrong, unfortunately. Rclone has its config file all complete.


 

Oddly, I can go into Krusader, navigate to the rclone mount, and it says I have 1 petabyte available. I can also move files directly via Krusader and they do show up on Google Drive encrypted.

 

I uploaded the pertinent files I believe.

65915C69-89F8-442F-B972-7FDCBD346696.jpeg

F883DBD1-C9C1-4030-B451-B2846AF406C1.png

A3D5E67F-A0DE-413C-B7F6-5A937005760F.png

096A6F76-53C2-44DB-958B-1721EFF1F597.png

439C3633-B7DC-4566-9BC9-C713E2912E29.png

5F0B3322-EDF6-46A7-BDC0-D2FCD06FA054.png

Edited by Jharris1984
Upload pics
Link to comment
9 hours ago, Raneydazed said:

Set up service accounts and screwed up everything.


Well, I went ahead and overcorrected: I made a bunch of different projects and a bunch of different changes, only to go back to the first OAuth client ID I made and get rid of the SAs. I can't figure out where I went wrong here. Right now I'm just getting LONG lines of errors about how nothing can be uploaded because the quota has been met. It's going to take me 64 years to get things uploaded as of now. I am patient, but that's a bit too long for me right now!

 

2022/02/03 22:06:50 INFO  : 
Transferred:   	    5.097 GiB / 642.745 GiB, 1%, 334 B/s, ETA 64y45w2d1h12m45s
Errors:              9740 (retrying may help)
Checks:              9740 / 9744, 100%
Transferred:            0 / 10012, 0%
Elapsed time:     1h3m0.8s
Checking:

Transferring:
 * data/media/tv/Tacoma F… 2.0][h264]-CtrlHD.mkv:  0% /1.723Gi, 0/s, -
 * data/media/tv/Tacoma F… 5.1][h264]-CtrlHD.mkv:  0% /1.780Gi, 0/s, -
 * data/media/tv/Ted Lass…mos 5.1][h264]-NTb.mkv:  0% /2.544Gi, 0/s, -
 * data/media/tv/Steven U…p][EAC3 2.0][x264].mkv:  0% /237.423Mi, 0/s, -

2022/02/03 22:06:50 ERROR : data/media/tv/Tacoma FD (2019) [imdb-tt8026448]/Season 01/Tacoma FD S01E08 [WEBDL-1080p][EAC3 2.0][h264]-CtrlHD.mkv: Failed to copy: googleapi: Error 403: The user's Drive storage quota has been exceeded., storageQuotaExceeded
2022/02/03 22:06:50 ERROR : data/media/tv/Tacoma FD (2019) [imdb-tt8026448]/Season 01/Tacoma FD S01E08 [WEBDL-1080p][EAC3 2.0][h264]-CtrlHD.mkv: Not deleting source as copy failed: googleapi: Error 403: The user's Drive storage quota has been exceeded., storageQuotaExceeded

 

So, can I use the existing project and the existing OAuth client ID I have/had set up previously? My current OAuth client 2 that I'm using (client ID and client secret in the rclone config) - am I able to use that to create service accounts, since it has the Drive API etc. enabled?

 

I've been using rclone for a few weeks now. I'd like to set up SAs, but I'm missing something here. I tried to set some up and ended up putting a hundred in each of my projects - had like 1000 of them, which was whack. Do I need to make a different OAuth client for each of the Python quickstart links, i.e. the Drive API and the Directory API? I have Workspace Enterprise, so unlimited storage. I can't for the life of me understand what's going on right now.

Edited by Raneydazed
Overwhelmed with questions at the moment. Please forgive.
Link to comment
On 2/2/2022 at 1:37 PM, Raneydazed said:

 

 #!/bin/bash

 ######################
 #### Mount Script ####
 ######################
 ## Version 0.96.9.3 ##
 ######################

 ####### EDIT ONLY THESE SETTINGS #######

 ## INSTRUCTIONS
 # 1. Change the name of the rclone remote and shares to match your setup
 # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
 # 3. Optional: include custom command and bind mount settings
 # 4. Optional: include extra folders in mergerfs mount

 # REQUIRED SETTINGS
 RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
 RcloneMountShare="/mnt/user/mount_rclone2" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
 RcloneMountDirCacheTime="720h" # rclone dir cache time
 LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
 RcloneCacheShare="/mnt/user0/mount_rclone2" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
 RcloneCacheMaxSize="300G" # Maximum size of rclone cache
 RcloneCacheMaxAge="336h" # Maximum age of cache files
 MergerfsMountShare="/mnt/user/mount_mergerfs2" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
 DockerStart="plex radarr sonarr binhex-readarr tautulli prowlarr lidarr binhex-readarr-audible binhex-qbittorrentvpn nzbget" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
 MountFolders=\{"data/usenet/completed,data/usenet/intermediate,data/usenet/nzb,data/usenet/queue,data/usenet/tmp,data/usenet/scripts,data/torrent/complete,data/torrent/intermediate,data/torrent/queue,data/media/audible,data/media/books,data/media/movies,data/media/music,data/media/tv,data/media/anime"\} # comma separated list of folders to create within the mount

 # Note: Again - remember to NOT use ':' in your remote name above

 # OPTIONAL SETTINGS

 # Add extra paths to mergerfs mount in addition to LocalFilesShare
 LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
 LocalFilesShare3="ignore"
 LocalFilesShare4="ignore"

 # Add extra commands or filters
 Command1="--rc"
 Command2=""
 Command3=""
 Command4=""
 Command5=""
 Command6=""
 Command7=""
 Command8=""

 CreateBindMount="Y" # Y/N. Choose whether to bind traffic to a particular network adapter
 RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
 NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
 VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

 ####### END SETTINGS #######
 



28831c56bec1f9374c5a66d89d531eac.jpg

IMG_6989.jpg

do you have any fast drives for your /mnt/user/mount_mergerfs2 share or just HDDs?  That could be why you are slow - e.g. I download to a non-mergerfs folder and then unpack to my mergerfs share which is 100% HDDs:

[screenshot]

Link to comment

Just tried reinstalling the rclone plugin as well. It did not work, unfortunately. I deleted the crypt folder in Google Drive and it does get recreated automatically during the mountcheck when the mount runs. It's still saying rclone is not installed when I run the upload script.
 

I’m really at a loss so any help would be appreciated.  Pics are in my post two replies up roughly.

Link to comment
13 minutes ago, Jharris1984 said:

Just tried reinstalling the rclone plugin as well. It did not work, unfortunately. I deleted the crypt folder in Google Drive and it does get recreated automatically during the mountcheck when the mount runs. It's still saying rclone is not installed when I run the upload script.
 

I’m really at a loss so any help would be appreciated.  Pics are in my post two replies up roughly.

I'm a newb - but have you checked to see if the mountcheck file got created properly?
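For anyone else checking: the mount script copies a file literally named `mountcheck` to the remote, so commands along these lines would confirm it exists both locally and in the cloud (the remote and mount path are assumptions based on this thread; `rclone lsf` is rclone's standard listing command):

```shell
# Is the mountcheck file visible on the remote? (prints nothing if missing)
rclone lsf gdrive_vfs: --include "mountcheck"

# Is it visible through the local mount?
ls -l /mnt/user/mount_rclone/gdrive_vfs/mountcheck
```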

Link to comment
3 hours ago, Jharris1984 said:

Mountcheck does end up in the mount_rclone/gdrive_media_vfs/ folder and appears encrypted in google drive. It’s just one file if it matters.

 

Ok got it working. Deleted my upload script and downloaded a new one and it seemed to do the trick for whatever reason.


Additional question now that I have it working.  I’d like to have an encrypted folder for my Plex media (this is done) and an unencrypted folder where I put everything else. Is there a best way to go about this?

Link to comment
8 hours ago, Jharris1984 said:

I’d like to have an encrypted folder for my Plex media (this is done) and an unencrypted folder where I put everything else. Is there a best way to go about this?

Create a rclone remote that isn't encrypted and run a second mount and upload script pair to upload to it.
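For reference, the second remote can be as simple as the existing `[gdrive]` entry without the crypt wrapper on top - a sketch (the name `gdrive_plain` is made up; reuse your own credentials/SA file):

```ini
[gdrive_plain]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload.json
server_side_across_configs = true
```

Then point the second mount/upload script pair at `gdrive_plain` instead of `gdrive_vfs`, with its own RcloneMountShare and MergerfsMountShare so the two mounts don't collide.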

Link to comment
On 2/2/2022 at 10:14 AM, DZMM said:

are you opening plex AFTER you've successfully mounted?  Plex is saying the file isn't there and that's why it's not playing.

 

You need to launch plex, radarr etc after the mount is successful - that's why the script has a section to do this.

There is not a problem with the mount: if I go to the share from my computer when I have this issue, I can see and download the files without a problem.

Link to comment
10 hours ago, Michel Amberg said:

There is not a problem with the mount: if I go to the share from my computer when I have this issue, I can see and download the files without a problem.

I never said there was a problem with the mount!  

Quote

Jan 28, 2022 23:10:25.783 [0x14cf6be2eb38] ERROR - Error opening file '"/movies/XXXXXX XXXXXX (2021)/XXXXXX XXXXXX (2021) WEBDL-1080p.mkv"' - No such file or directory (2)

 

You have to open dockers that access the mount AFTER the mount has successfully been created. That's why the script takes care of this.

Link to comment

Is it safe to delete the contents of rclone_cache ? 

 

I ended up with two of them (rclone_cache and rclone_cache_old) due to the way I initially setup my mounts. I'd like to delete the old one - but wasn't sure if it'll have some detrimental effect on my backend files. 

 

Thanks!

Link to comment
10 hours ago, axeman said:

Is it safe to delete the contents of rclone_cache ? 

 

I ended up with two of them (rclone_cache and rclone_cache_old) due to the way I initially setup my mounts. I'd like to delete the old one - but wasn't sure if it'll have some detrimental effect on my backend files. 

 

Thanks!

If you are sure the old one isn't in use, then yes.

Link to comment

I've managed this weekend to successfully integrate a seedbox into my setup and I'm sharing how I did it. 

 

I've purchased a cheap seedbox, as my Plex streams were taking up too much bandwidth. I've gone from a 1000/1000 --> 360/180 --> 1000/120 connection, so it's been a pain each day trying to balance the bandwidth and file-space requirements of moving files from /local to the cloud against having enough bandwidth for Plex, backup jobs, etc.

 

My setup now is:

 

1. Seedbox downloading to /home/user/local/nzbget and /home/user/local/rutorrent

2. rclone script running each min to move files from /home/user/local/nzbget --> tdrive_vfs:seedbox/nzbget and sync files from /home/user/local/rutorrent --> tdrive:seedbox/rutorrent (torrent files need to stay for seeding)

3. added remote path to ***arr to look in /user/mount/mergerfs/tdrive_vfs for files in /home/user/local (thanks @Akatsuki)
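Step 2 above ("running each min") can be wired up with a plain crontab entry on the seedbox - a sketch, assuming the script below is saved as /home/user/rclone/upload_script.sh (that path is my assumption):

```
# run the upload/sync script every minute; the upload_running lock file
# inside the script prevents overlapping runs
* * * * * /home/user/rclone/upload_script.sh >> /home/user/rclone/upload_cron.log 2>&1
```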

 

[screenshot]

 

It's working perfectly so far, as my local setup hasn't changed, with rclone polling locally for changes that have occurred in the cloud.

 

Here's my script - I've stripped out all the options as I don't need them:

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

# REQUIRED SETTINGS
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Add extra commands or filters
Command1="--exclude _unpack/**"
Command2="--fast-list"
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# OPTIONAL SETTINGS

CountServiceAccounts="14"

####### END SETTINGS #######

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting Core Upload Script ***"

####### create directory for script files #######
mkdir -p /home/user/rclone/remotes/tdrive_vfs

#######  Check if script already running  ##########
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_upload script ***"
if [[ -f "/home/user/rclone/remotes/tdrive_vfs/upload_running" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
	exit
else
	echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
	touch /home/user/rclone/remotes/tdrive_vfs/upload_running
fi

####### Rotating serviceaccount.json file #######

cd /home/user/rclone/remotes/tdrive_vfs/
CounterNumber=$(find . -name 'counter*' | cut -c 11,12)
CounterCheck="1"
if [[ "$CounterNumber" -ge "$CounterCheck" ]];then
	echo "$(date "+%d.%m.%Y %T") INFO: Counter file found."
else
	echo "$(date "+%d.%m.%Y %T") INFO: No counter file found . Creating counter_1."
	touch /home/user/rclone/remotes/tdrive_vfs/counter_1
	CounterNumber="1"
fi
ServiceAccount="--drive-service-account-file=/home/user/rclone/service_accounts/sa_spare_upload$CounterNumber.json"
echo "$(date "+%d.%m.%Y %T") INFO: Adjusted service_account_file for upload remote to sa_spare_upload${CounterNumber}.json based on counter ${CounterNumber}."

#######  Transfer files  ##########

# Upload nzbget files
/usr/local/bin/rclone move /home/user/local tdrive_vfs: $ServiceAccount --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv --order-by modtime,$ModSort --min-age 1m $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 --drive-chunk-size=128M --transfers=4 --checkers=8 --exclude rutorrent/** --exclude deluge/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* --drive-stop-on-upload-limit --delete-empty-src-dirs  --log-file=/home/user/rclone/upload_log.txt

# Sync rutorrent files
/usr/local/bin/rclone sync /home/user/local/seedbox/rutorrent tdrive_vfs:seedbox/rutorrent $ServiceAccount --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv --order-by modtime,$ModSort --min-age 1m $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 --drive-chunk-size=128M --transfers=4 --checkers=8 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* --drive-stop-on-upload-limit --log-file=/home/user/rclone/sync_log.txt

#######  Remove Control Files  ##########

# update counter and remove other control files

	if [[ "$CounterNumber" == "$CountServiceAccounts" ]];then
		rm /home/user/rclone/remotes/tdrive_vfs/counter_*
		touch /home/user/rclone/remotes/tdrive_vfs/counter_1
		echo "$(date "+%d.%m.%Y %T") INFO: Final counter used - resetting loop and created counter_1."
	else
		rm /home/user/rclone/remotes/tdrive_vfs/counter_*
		CounterNumber=$((CounterNumber+1))
		touch /home/user/rclone/remotes/tdrive_vfs/counter_$CounterNumber
		echo "$(date "+%d.%m.%Y %T") INFO: Created counter_${CounterNumber} for next upload run."
	fi

# remove dummy files and replace directories
rm /home/user/rclone/remotes/tdrive_vfs/upload_running
mkdir -p /home/user/local/seedbox/nzbget/completed
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"

exit
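The service-account counter rotation at the end of the script boils down to a wrap-around increment; pulled out as a standalone function (the function name is mine) it's easy to see what happens on each run:

```shell
# next_counter: given the current counter number and the total number of
# service accounts, return the counter for the next run, wrapping back
# to 1 after the last account has been used.
next_counter() {
    local current="$1" total="$2"
    if [ "$current" -ge "$total" ]; then
        echo 1
    else
        echo $(( current + 1 ))
    fi
}

next_counter 13 14   # prints 14
next_counter 14 14   # prints 1 (wrapped back to the first account)
```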

 

Edited by DZMM
Link to comment
On 2/5/2022 at 7:06 AM, DZMM said:

Create a rclone remote that isn't encrypted and run a second mount and upload script pair to upload to it.

So I attempted this but I'm getting a --- 2022/02/13 00:48:17 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use --- error when I attempt the mount script.  Any particular setting I should change from the original script to get this to work properly?

 

Thanks in advance.

Link to comment
3 hours ago, Jharris1984 said:

So I attempted this but I'm getting a --- 2022/02/13 00:48:17 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use --- error when I attempt the mount script.  Any particular setting I should change from the original script to get this to work properly?

 

Thanks in advance.

read the message - you are trying to start the remote control twice and bind to the same IP.....

Link to comment
6 hours ago, DZMM said:

read the message - you are trying to start the remote control twice and bind to the same IP.....

Understand that - but the only setting I saw that could change that was CreateBindMount, for which I have the following.

 

CreateBindMount="Y" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.199" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

 

for the second one I have 192.168.1.200 and IP 3.

 

When I run it with a Y in the bind mount it does the following.

 

13.02.2022 10:58:16 INFO: *** Checking if IP address 192.168.1.199 already created for remote gdrive_media_vfs
13.02.2022 10:58:19 INFO: *** Creating IP address 192.168.1.199 for remote gdrive_media_vfs
13.02.2022 10:58:19 INFO: *** Created bind mount 192.168.1.199 for remote gdrive_media_vfs
13.02.2022 10:58:19 INFO: sleeping for 5 seconds
2022/02/13 10:58:19 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use

 

 

I am not seeing anything that allows me to change the 127.0.0.1:5572 to anything else in the script.  Appreciate the reply though.

Link to comment
1 hour ago, Jharris1984 said:

Understand that - but the only setting I saw that could change that was CreateBindMount.

you must have --rc somewhere in the custom commands - you have to move it somewhere else.

 

Re the bind mount - either change it to N, or make sure you enter a different IP for RCloneMountIP
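Concretely: the `--rc` in `Command1` starts rclone's remote-control server on its default address of 127.0.0.1:5572, so a second mount script carrying the same `Command1` will always collide regardless of the bind-mount IP. In the second script, either drop the flag or move the rc server to a free port with rclone's `--rc-addr` flag - e.g.:

```shell
# second mount script only - move the rc server to a different port
Command1="--rc --rc-addr=localhost:5573"
# ...or simply disable remote control for the second mount:
# Command1=""
```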

Link to comment

Hey @DZMM! Just updated the script, and discovered these lines

What if I don't want a cache? Is this a new feature, or has it been here for a long time without me noticing? I don't know what to put in these values:

 

RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files

 

Edited by Lucka
Link to comment
20 hours ago, Lucka said:

Hey @DZMM! Just updated the script, and discovered these lines

What if I don't want a cache? Is this a new feature, or has it been here for a long time without me noticing?

 

The vfs cache stores reads and writes on a first-in, first-out basis if the file isn't in use. E.g. it will store a Plex stream so that it doesn't need downloading again, for increased responsiveness; new writes made directly to the mount (rather than to the local folder via mergerfs) will go here first, if there's space.

 

I keep mine small as my plex library scans and plex usage mean it would need to be massive to have a decent hit rate.  I probably should investigate the implications of disabling it as it causes endless writes, and I'd probably be better off just using memory.

Link to comment
2 hours ago, DZMM said:

The vfs cache stores reads and writes on a first in, first out basis if the file isn't in use.

Ok, for now I just set those values to 0; my libraries are more than 50TB for now.

Link to comment
1 hour ago, Lucka said:

Ok, for now I just set those values to 0; my libraries are more than 50TB for now.

I don't think 0 is a good idea, as e.g. it could mean that all writes go direct to Google Drive and won't be retried if there's a problem. You'll also miss out on read benefits - e.g. if the same TV episode is accessed again, it will load faster from the cache rather than downloading again; I think this also helps with seeking.

 

I currently have my caches set to between 10 and 200GB, depending on my retention target. After posting earlier I did some quick research: for a stable rclone experience you should put something in.
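For context, those three script settings map onto rclone's VFS cache flags on the mount command - roughly like this (the flag names are rclone's own; the values are illustrative):

```shell
# what the RcloneCache* settings become on the rclone mount command line
--vfs-cache-mode full \
--cache-dir /mnt/user0/mount_rclone \
--vfs-cache-max-size 400G \
--vfs-cache-max-age 336h
```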

Link to comment
49 minutes ago, DZMM said:

I don't think 0 is a good idea as e.g. it could mean that all writes will go direct to google drive and won't be retried if there's a problem.

I just rolled back to 0.96.7; the latest version of the script is causing me some issues with the cache settings and needs more testing on my server.
Also, I execute the upload script in the background via cron - is it possible to check the logs in real time?

Edited by Lucka
Link to comment
