Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


Trying to access the rclone GUI using this command: 'rclone rcd --rc-web-gui-no-open-browser --rc-user=admin --rc-pass=admin --rc-addr 192.168.0.188:5572'. It says it can't assign the address.

It's my first time using rclone; I just installed it, and the only configuration I did was setting the user and password during installation.
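(For anyone hitting the same thing: "cannot assign requested address" usually means the IP given to --rc-addr isn't assigned to the machine rclone is running on. Checking the host's addresses, or binding to all interfaces, is a quick way to confirm — a sketch, assuming the default port 5572:)

```shell
# See which IPv4 addresses this host actually has
ip -4 addr show | grep inet

# Bind the rc GUI to all interfaces instead of a fixed IP
rclone rcd --rc-web-gui-no-open-browser \
  --rc-user=admin --rc-pass=admin \
  --rc-addr 0.0.0.0:5572
```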

 

 


Link to comment

Am I the only one who hasn't received any emails regarding the G Suite transition?

I'm afraid of losing my unlimited data if they're downgrading it, but I haven't received any emails, and I can only see my active G Suite Business account and a next invoice date in February 2022.

 

Question 2: In the OAuth consent screen, have you made the project external or internal?

I had it as external, but now I'm not sure.

Link to comment
2 hours ago, Bjur said:

Am I the only one who hasn't received mails regarding GSuite transition? [...] Question 2: In OAuth consent screen have you guys made the project external or internal?

 

I still haven't gotten it either, and I'm also worried. Still waiting.

I don't remember what I set on mine for the OAuth...

Link to comment
# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_media_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

 

I have these settings and I cannot delete the files. I get this error:

 

2022/01/06 16:22:20 DEBUG : tv: Failed to Rmdir: remove /mnt/user/local/gdrive_media_vfs/tv: directory not empty
2022/01/06 16:22:20 DEBUG : Local file system at /mnt/user/local/gdrive_media_vfs: failed to delete 2 directories

 

What is my mistake?
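(For context on those DEBUG lines: rclone move only deletes a source directory once it is empty, and files newer than MinimumAge are deliberately skipped, so "Failed to Rmdir: directory not empty" is expected while recent files are still sitting in /local. If the leftovers are genuinely old, the equivalent manual command — with the remote name taken from the settings above as an assumption — would be something like:)

```shell
# Move aged files and remove source directories once they are empty
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
  --min-age 15m \
  --delete-empty-src-dirs \
  -v
```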

Link to comment

Hey, I have been running these scripts for about a month and they have been working fantastically, but this morning they ran into an issue. When the mount script runs, it comes up with this:

 

11.01.2022 07:50:01 INFO: Creating local folders.
11.01.2022 07:50:01 INFO: Creating MergerFS folders.
11.01.2022 07:50:01 INFO: *** Starting mount of remote gdrive_media_vfs
11.01.2022 07:50:01 INFO: Checking if this script is already running.
11.01.2022 07:50:01 INFO: Script not running - proceeding.
11.01.2022 07:50:01 INFO: *** Checking if online
11.01.2022 07:50:02 PASSED: *** Internet online
11.01.2022 07:50:02 INFO: Success gdrive_media_vfs remote is already mounted.
11.01.2022 07:50:02 INFO: Mergerfs already installed, proceeding to create mergerfs mount
11.01.2022 07:50:02 INFO: Creating gdrive_media_vfs mergerfs mount.
mv: cannot move '/mnt/user/gmedia/mount_mergerfs/gdrive_media_vfs' to '/mnt/user/gmedia/local/gdrive_media_vfs/gdrive_media_vfs': File exists
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option
11.01.2022 07:50:02 INFO: Checking if gdrive_media_vfs mergerfs mount created.
11.01.2022 07:50:02 CRITICAL: gdrive_media_vfs mergerfs mount failed. Stopping dockers.
nzbget
plex
sonarr
radarr
jackett
Script Finished Jan 11, 2022 07:50.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Mount/log.txt

2022/01/11 07:50:03 INFO : vfs cache: cleaned: objects 421 (was 421) in use 0, to upload 0, uploading 0, total size 397.537Gi (was 397.537Gi)

 

I'm assuming it thinks the gdrive_media_vfs mergerfs mount already exists?

 

When I look inside the mergerfs folder, there is a gdrive_media_vfs folder, but it has nothing in it.

 

Not sure if there is a way to clear the mount point and rerun the script.

 

This is my first time posting on the forum, so if you need any other info please let me know. I've also attached the full mount log.


Link to comment
1 hour ago, Kevin Clark said:

11.01.2022 07:50:02 INFO: Creating gdrive_media_vfs mergerfs mount.
mv: cannot move '/mnt/user/gmedia/mount_mergerfs/gdrive_media_vfs' to '/mnt/user/gmedia/local/gdrive_media_vfs/gdrive_media_vfs': File exists
fuse: mountpoint is not empty
fuse: if you are sure this is safe, use the 'nonempty' mount option

Sometimes files end up in your mergerfs mount location PRIOR to mount. Go into each disk and manually move files from /mount_mergerfs --> /local, then run the mount script again.

 

i.e. /mnt/disk1/mount_mergerfs/.... ----> /mnt/disk1/local/

/mnt/disk2/mount_mergerfs/.... ----> /mnt/disk2/local/

 

etc., until you've moved all the troublesome files.
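(The manual clean-up above can also be scripted. A sketch under the assumption of stock Unraid /mnt/diskN paths and the share names used in this thread; `move_stray` is a hypothetical helper, not part of the mount script:)

```shell
#!/bin/bash
# move_stray SRC DST: move everything under SRC into DST, keeping the
# directory layout, then prune the directories left empty by the move.
move_stray() {
  local src="$1" dst="$2"
  # recreate the directory tree under DST
  find "$src" -mindepth 1 -type d -printf '%P\n' | while read -r d; do
    mkdir -p "$dst/$d"
  done
  # move each regular file across
  find "$src" -type f -printf '%P\n' | while read -r f; do
    mv "$src/$f" "$dst/$f"
  done
  # -delete works depth-first, so nested empty dirs go too
  find "$src" -mindepth 1 -type d -empty -delete
}

# e.g. move_stray /mnt/disk1/mount_mergerfs /mnt/disk1/local
```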

Link to comment
13 minutes ago, FranticPanic said:

Am I missing or misunderstanding something here? I have content that was downloaded in October still showing in the mount_rclone directory, and looking at the last time it was played was in November.

I'm not sure - I'm tempted to turn the cache off completely, as my hit rate must be virtually zero since files don't reside there long enough. Maybe it's Plex scanning a file that causes rclone to download it so it can be analysed?

Link to comment
30 minutes ago, DZMM said:

sometimes files end up in your mergerfs mount location PRIOR to mount.  Go into each disk and manually move files from /mount_mergerfs --> /local then run the mount script again [...]

Worked perfectly and mounted correctly, thank you for the quick response!

 

Next one's on me, cheers! 🍺

Link to comment

Alright, back again with what might be a stupid question, but here goes. When I want something added to my gdrive (cloud storage), I can add it manually to my mergerfs share, right? If I decide to offload my music, for example, I drag it over to /mnt/user/mount_mergerfs/gdrive_vfs/media/music, and it copies over to my local folder (i.e. /mnt/user/local/gdrive_vfs/media/music), which is then uploaded to the cloud storage?

I guess my main question is: how the hell do I know if it worked, or if it stays local and isn't getting uploaded? The upload logs after running the script tell me:

11.01.2022 11:41:09 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_vfs for gdrive_vfs,data ***
11.01.2022 11:41:09 INFO: *** Starting rclone_upload script for gdrive_vfs,data ***
11.01.2022 11:41:09 INFO: Exiting as script already running.



How do I know if it actually uploads? It has said the same thing in the logs every hour (it's set to run the 'upload' script hourly) for the last week.
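(In these scripts, "Exiting as script already running" is driven by a checker file, so a run that was interrupted can leave the lock behind indefinitely. The path and file name below are assumptions based on recent script versions — check the checker-file location in your own copy before deleting anything:)

```shell
# Inspect the remote's control folder for a stale "running" checker file
ls -l /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/

# If no upload is genuinely in progress, remove it so the next
# scheduled run can proceed (file name is illustrative)
rm -f /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/upload_running_daily_upload
```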

Link to comment
16 hours ago, Raneydazed said:

I guess my main question is, how the hell do I know if it worked or if it stays locally and isn't getting uploaded. [...] How do I know if it actually uploads? It has said the same thing in logs every hour (it's set to run the 'upload' script hourly) for the last week.

 

The easiest way to tell if stuff has been uploaded to the cloud is to check your mount_rclone folder (as this is basically a folder mapped directly to GDrive).

 

To answer your first question: yes, you can copy stuff manually to your mergerfs folder. I've noticed for me, though, that when I do, it copies at about 10 MB/s, but if I copy to my "local" folder directly (by manually creating the music folder, for example) it copies at gigabit speeds and is then uploaded.

Link to comment
6 hours ago, Akatsuki said:

 

Easiest way to tell if stuff has been uploaded into the cloud would be to check your mount_rclone folder [...]

Isn't the mount_rclone share empty? Shoot, mine is! I think I must have it configured wrong, because in my mount script I changed it to make /data shares: /data/media/movies, /data/media/tv, /data/media/music. I'll post my scripts and see if you can spot anything unusual.

 

#!/bin/bash

######################
#### Mount Script ####
######################
## Version 0.96.9.3 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone2" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone2" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="300G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs2" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="nzbgetremote binhex-radarr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"data/media/movies,data/media/music,data/media/tv,data/usenet/complete"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1=""
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="y" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1.  I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

Also, my logs for running the mount script say that the containers are "already running", but they aren't.

 

#!/bin/bash

######################
### Upload Script ####
######################
### Version 0.95.5 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_vfs" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone2" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

# Note: Again - remember to NOT use ':' in your remote name above

# Bandwidth limits: specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. Or 'off' or '0' for unlimited.  The script uses --drive-stop-on-upload-limit which stops the script if the 750GB/day limit is achieved, so you no longer have to slow 'trickle' your files all day if you don't want to e.g. could just do an unlimited job overnight.
BWLimit1Time="01:00"
BWLimit1="off"
BWLimit2Time="08:00"
BWLimit2="15M"
BWLimit3Time="16:00"
BWLimit3="12M"

# OPTIONAL SETTINGS

# Add name to upload job
JobName="_daily_upload" # Adds custom string to end of checker file.  Useful if you're running multiple jobs against the same remote.

# Add extra commands or filters
Command1=""
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

# Bind the mount to an IP address
CreateBindMount="y" # Y/N. Choose whether or not to bind traffic to a network adapter.
RCloneMountIP="192.168.1.253" # Choose IP to bind upload to.
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended.
VirtualIPNumber="1" # creates eth0:x e.g. eth0:1.

# Use Service Accounts.  Instructions: https://github.com/xyou365/AutoRclone
UseServiceAccountUpload="N" # Y/N. Choose whether to use Service Accounts.
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # Path to your Service Account's .json files.
ServiceAccountFile="sa_gdrive_upload" # Enter characters before counter in your json files e.g. for sa_gdrive_upload1.json -->sa_gdrive_upload100.json, enter "sa_gdrive_upload".
CountServiceAccounts="15" # Integer number of service accounts to use.

# Is this a backup job
BackupJob="N" # Y/N. Syncs or Copies files from LocalFilesLocation to BackupRemoteLocation, rather than moving from LocalFilesLocation/RcloneRemoteName
BackupRemoteLocation="/mnt/user/mount_mergerfs2/backup" # choose location on mount for deleted sync files
BackupRemoteDeletedLocation="/mnt/user/mount_mergerfs2/backup/deleted_files" # choose location on mount for deleted sync files
BackupRetention="90d" # How long to keep deleted sync files suffix ms|s|m|h|d|w|M|y

It seems like I'm having an issue with how rclone and mergerfs see the shares. In my /mnt, I have /mnt/user/mount_mergerfs/gdrive_vfs. It gets weird after that: I then have /mount_mergerfs/gdrive_vfs/gdrive_vfs/data/media/ tv|movies|music, and I also have /mount_mergerfs/gdrive_vfs/data/media/ tv|movies|music. Local has the same, including /usenet/complete. My rclone mount only has /mount_rclone2/gdrive_vfs/gdrive_vfs/data/media/movies. That's it. What the heck am I missing?

Link to comment
16 hours ago, Raneydazed said:

Isn't the mount_rclone share empty? Shoot, mine is! I think i must have it configured wrong, because in my mount script I changed it to make /data shares [...] My rclone mount only has ---> /mount_rclone2/gdrive_vfs/gdrive_vfs/data/media/movies. thats it. what the heck am i missing?

So I read through about 60 pages of people's complaints and found the answer: run the cleanup script! It worked like a charm. I hadn't added it back. (This is my second attempt at using rclone and gdrive; I nuked the whole first attempt, lol.) So everything is fine now!

 

Question: what's the easiest way to move a good portion of my local files (about 40 TB, give or take) to the gdrive? I'd prefer not to redownload everything if I can avoid it. Copying is taking FOREVER.
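(One common approach, sketched here with this thread's remote name as an assumption: run rclone move against the local share directly and let it stop at Google's 750 GB/day upload cap, then rerun it each day until the backlog is gone:)

```shell
# Move the local backlog to the remote; --drive-stop-on-upload-limit
# makes rclone exit cleanly when the daily quota is hit
rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: \
  --drive-stop-on-upload-limit \
  --transfers 4 --checkers 8 \
  --min-age 15m \
  -v
```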

Link to comment

Hey guys! I currently run my rclone_upload script every minute, so when something finishes downloading it is uploaded almost instantly. I noticed something weird: the script runs for exactly 9m33s every time (even when nothing needs to be uploaded), so in reality each downloaded file is uploaded every 9m33s.


Here's the log I get from the script when nothing needs to be uploaded (I don't know why the log is truncated, so I don't have the full one):

 

2022/01/16 20:43:01 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/01/16 20:43:01 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2022/01/16 20:43:01 DEBUG : torrents/downloads: Excluded
2022/01/16 20:43:01 DEBUG : torrents/downloads: Excluded
2022/01/16 20:44:01 NOTICE: Scheduled bandwidth change. Bandwidth limits disabled
2022/01/16 20:44:01 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0.0s

2022/01/16 20:45:01 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0.0s

2022/01/16 20:46:01 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0.0s

2022/01/16 20:47:01 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0.0s

2022/01/16 20:48:01 INFO :
Transferred: 0 / 0

 

Link to comment
On 1/1/2022 at 2:18 AM, DZMM said:

Hmm, this is interesting (in a bad way, of course!). I have the same setup, and the only thing I can think of is that maybe rclone upload doesn't like hardlinks, i.e. after it has uploaded the file it deletes the original rather than respecting the hardlink? Mergerfs definitely supports hardlinks.

 

This would explain why I haven't come across this, as I seed for a maximum of 14 days, whereas, because of my slow upload speed, rclone typically doesn't upload a file until after 14+ days.

 

I can't think of a solution, other than maybe ditching hardlinks and doing a copy to your media folder so that rclone can move the copy.

 

Worth a test to see if this is the cause?

 

Did some further digging/testing, and I think I've figured out how to fix hardlinking. For some reason rclone seems to treat the mergerfs mount as another filesystem (maybe because it's mounted?), and setting up remote path mapping in Sonarr/Radarr fixes this:

 

I followed the guide below:

https://docs.usbx.me/books/sonarr/page/initial-setup#bkmrk-remote-path-mapping

 

My mappings for Sonarr are as follows:

 

[screenshot: Sonarr remote path mappings]

 

Within Sonarr --> Settings --> Download Clients

 

[screenshot: Sonarr download client settings]
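(A quick way to verify that the fix is producing real hardlinks rather than copies is to compare device and inode numbers. `same_inode` is a small illustrative helper, and the paths you'd feed it are your own seed and media files:)

```shell
#!/bin/bash
# same_inode FILE1 FILE2: succeed if both paths point at the same
# underlying file, i.e. they are hardlinks (same device and inode).
same_inode() {
  [ "$(stat -c '%d:%i' "$1")" = "$(stat -c '%d:%i' "$2")" ]
}

# e.g. same_inode /local/downloads/torrents/x.mkv /local/media/tv/x.mkv
```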

 

 

 

Link to comment
7 hours ago, Akatsuki said:

 

Did some further digging/testing and I think I've been able to figure out how to fix hardlinking. [...] it seems setting up remote path mapping in Sonarr/Radarr fixes this.

 

 

 

This is very interesting, and I think you've unlocked a significant improvement. 

 

One question first: in your torrent client, have you changed the download location from /cloud/downloads/torrents/sonarr to /cloud/local/downloads/torrents/sonarr? Or was it always /cloud/local/downloads/torrents/sonarr?

Link to comment
16 hours ago, DZMM said:

This is very interesting, and I think you've unlocked a significant improvement. 

 

One question first.  So, in your torrent client have you changed the download location from /cloud/downloads/torrents/sonarr to /cloud/local/downloads/torrents/sonarr?  Or was it always /cloud/local/downloads/torrents/sonarr?

 

My download client is set to /cloud/downloads/torrents/sonarr (so it's using the mount_mergerfs folder).

Link to comment
8 hours ago, Akatsuki said:

 

My download client is set to /cloud/downloads/torrents/sonarr (so it's using the mount_mergefs folder)

Thanks. I think your finding means we can ditch mergerfs and use rclone union instead. I didn't use union before because it doesn't support hardlinks, but I think this workaround fixes that.

 

I'll try and test this week.
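(For anyone curious what that would look like: a union remote is declared in rclone.conf roughly as below. The remote names and policies here are illustrative, not a tested config for these scripts:)

```ini
[gdrive_union]
type = union
upstreams = /mnt/user/local/gdrive_vfs gdrive_vfs:
action_policy = all
create_policy = ff
search_policy = ff
```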

Link to comment

Why does rclone keep taking up download bandwidth?

 

Script Finished Jan 24, 2022 15:30.03

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

2022/01/24 15:30:59 INFO : vfs cache: cleaned: objects 1 (was 1) in use 1, to upload 0, uploading 0, total size 355.439Mi (was 355.439Mi)
2022/01/24 15:31:59 INFO : vfs cache: cleaned: objects 1 (was 1) in use 0, to upload 0, uploading 0, total size 355.439Mi (was 355.439Mi)
2022/01/24 15:32:59 INFO : vfs cache: cleaned: objects 1 (was 1) in use 0, to upload 0, uploading 0, total size 355.439Mi (was 355.439Mi)
2022/01/24 15:33:59 INFO : vfs cache: cleaned: objects 10 (was 10) in use 9, to upload 0, uploading 0, total size 469.470Mi (was 469.470Mi)
2022/01/24 15:34:59 INFO : vfs cache: cleaned: objects 30 (was 30) in use 8, to upload 0, uploading 0, total size 958.388Mi (was 958.388Mi)
2022/01/24 15:35:59 INFO : vfs cache: cleaned: objects 53 (was 53) in use 9, to upload 0, uploading 0, total size 1.568Gi (was 1.568Gi)
2022/01/24 15:36:59 INFO : vfs cache: cleaned: objects 73 (was 73) in use 12, to upload 0, uploading 0, total size 2.203Gi (was 2.203Gi)
2022/01/24 15:37:59 INFO : vfs cache: cleaned: objects 97 (was 97) in use 12, to upload 0, uploading 0, total size 2.835Gi (was 2.835Gi)
2022/01/24 15:38:59 INFO : vfs cache: cleaned: objects 123 (was 123) in use 11, to upload 0, uploading 0, total size 3.486Gi (was 3.486Gi)
2022/01/24 15:39:59 INFO : vfs cache: cleaned: objects 143 (was 143) in use 6, to upload 0, uploading 0, total size 4.105Gi (was 4.105Gi)
Script Starting Jan 24, 2022 15:40.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

24.01.2022 15:40:01 INFO: Creating local folders.
24.01.2022 15:40:01 INFO: Creating MergerFS folders.
24.01.2022 15:40:01 INFO: *** Starting mount of remote gdrive_vfs
24.01.2022 15:40:01 INFO: Checking if this script is already running.
24.01.2022 15:40:01 INFO: Script not running - proceeding.
24.01.2022 15:40:01 INFO: *** Checking if online
24.01.2022 15:40:02 PASSED: *** Internet online
24.01.2022 15:40:02 INFO: Success gdrive_vfs remote is already mounted.
24.01.2022 15:40:02 INFO: Check successful, gdrive_vfs mergerfs mount in place.
24.01.2022 15:40:02 INFO: dockers already started.
24.01.2022 15:40:02 INFO: Script complete
Script Finished Jan 24, 2022 15:40.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt

2022/01/24 15:40:59 INFO : vfs cache: cleaned: objects 169 (was 169) in use 9, to upload 0, uploading 0, total size 4.880Gi (was 4.880Gi)
2022/01/24 15:41:59 INFO : vfs cache: cleaned: objects 193 (was 193) in use 11, to upload 0, uploading 0, total size 5.619Gi (was 5.619Gi)
2022/01/24 15:42:59 INFO : vfs cache: cleaned: objects 214 (was 214) in use 12, to upload 0, uploading 0, total size 6.162Gi (was 6.162Gi)
2022/01/24 15:43:59 INFO : vfs cache: cleaned: objects 230 (was 230) in use 11, to upload 0, uploading 0, total size 6.804Gi (was 6.804Gi)
2022/01/24 15:44:59 INFO : vfs cache: cleaned: objects 249 (was 249) in use 10, to upload 0, uploading 0, total size 7.455Gi (was 7.455Gi)
2022/01/24 15:45:59 INFO : vfs cache: cleaned: objects 270 (was 270) in use 9, to upload 0, uploading 0, total size 8.037Gi (was 8.037Gi)
2022/01/24 15:46:59 INFO : vfs cache: cleaned: objects 293 (was 293) in use 10, to upload 0, uploading 0, total size 8.652Gi (was 8.652Gi)
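(On the bandwidth question: the growing "vfs cache: cleaned ... total size" figures above are the rclone VFS cache filling up as files are read — e.g. by playback or Plex analysis — not the upload script. The cache is capped with mount flags; the paths and values below are illustrative:)

```shell
# Cap the VFS cache so reads can't consume unbounded disk/bandwidth
rclone mount gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 96h
```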

Link to comment

I'm finally joining the cool kids club!

 

Quote

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

 

Is there a reason to have the upload script do this, as opposed to just having a path like /tower/local/downloads?

 

 

Link to comment

Hi

 

I frequently get failed mount attempts on first start. I have to keep rerunning the script, and then it catches and runs flawlessly.

 

Reboots don't happen often, but I'm keen to get it running on the first attempt.

 

2022/01/26 12:18:28 DEBUG : 4 go routines active
26.01.2022 12:18:28 INFO: *** Creating mount for remote gdrive
26.01.2022 12:18:28 INFO: sleeping for 20 seconds
2022/01/26 12:18:28 NOTICE: Serving remote control on http://localhost:5572/
26.01.2022 12:18:48 INFO: continuing...
26.01.2022 12:18:48 CRITICAL: gdrive mount failed - please check for problems.  Stopping dockers

 

Has anyone solved this yet?
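(One workaround that has worked for people with slow-to-settle network or array starts is to poll the mountpoint several times before declaring failure, rather than relying on a single 20-second sleep. A sketch of the idea, not the script's actual code:)

```shell
#!/bin/bash
# wait_for_mount DIR TRIES [DELAY]: poll until DIR is a live mountpoint,
# sleeping DELAY seconds (default 5) between attempts; fail after TRIES.
wait_for_mount() {
  local dir="$1" tries="$2" delay="${3:-5}" i=0
  while [ "$i" -lt "$tries" ]; do
    mountpoint -q "$dir" && return 0
    sleep "$delay"
    i=$((i + 1))
  done
  return 1
}

# e.g. wait_for_mount /mnt/user/mount_rclone/gdrive 6 10 || echo "mount failed"
```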
 

Link to comment
