[Plugin] rclone


Waseh

906 posts in this topic


Hi, all,

 

In another thread I posted about huge amounts of data being used by the Unraid box (its static IP address, as shown in BandwidthD and ntop, shows 150GB-250GB received per night!).

 

I have the 2020.09.29 version of this rclone plugin from Waseh's Repository.

 

Got a gdrive mounted and am able to use Plex and Emby with the rclone-mounted gdrive.

 

I disabled this plugin (rclone), rebooted unraid, and the huge nightly amounts of data being downloaded stopped.

 

So the culprit is the rclone plugin. Or is it? Could it be some bug? Could it be the Plex or Emby app doing something stupid? (The Plex and Emby database syncs finished weeks, or a couple of months, ago.)

 

Is there any way I can find out what is happening? What is downloading 200GB or more of data every night, and why?

 

I'm disabling the Plex docker container to see if it happens tonight. If it does, I'll disable Emby next. But in the meantime, I'm hoping for some pointers from you guys.
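One way to attribute the traffic without disabling containers one by one is to run the mount with verbose logging, so every remote read gets logged with a path. This is only a sketch: the remote name (gdrive:) and the paths are examples, not the poster's actual config, but -v and --log-file are standard rclone flags.

```shell
#!/bin/bash
# Sketch only: compose the mount command with verbose logging so remote
# reads are recorded. Remote name and paths are illustrative examples.

LOG=/mnt/user/appdata/rclone/rclone.log

# Build the command as a string so the logging flags are easy to audit.
build_mount_cmd() {
    echo "rclone mount -v --log-file $1 gdrive: $2"
}

CMD=$(build_mount_cmd "$LOG" /mnt/user/mount_rclone/google_vfs)
# Start it on the server with:  eval "$CMD &"
```

After a heavy night, the log should show which files were being read, which usually points straight at whichever app (Plex, Emby, ...) is scanning or analyzing the mounted media.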

 

Thank you!


Any reason the rclone plugin would not be updating? The Unraid plugin, not rclone itself.

 

I'm on 2020.09.19 and the update to 2020.09.29 hangs and never downloads.  Several other plugins updated just fine.


When trying to follow the Spaceinvader One tutorial, I get this error when I run rclone config via SSH:

 

Failed to load config file "/boot/config/plugins/rclone/.rclone.conf": could not parse line: rclone config

 

The config file only contains the words "rclone config"; I'm not sure if that's normal. I removed and re-added the plugin, and I'm not sure what else to try.

1 hour ago, Waseh said:

Have you tried removing the config file and trying again?

Other than shutting down the server and plugging the flash drive into another system, is there another way to delete it? Browsing via the Flash share on another system won't let me delete it. I also can't figure out the path in Krusader.
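For anyone else stuck at this point: on a running server the flash drive is already mounted at /boot, so the file can be removed from the Unraid terminal or over SSH without shutting down. A minimal sketch (remove_rclone_conf is a made-up helper name; the path is the one from the error message above):

```shell
#!/bin/bash
# Sketch only: delete the rclone config from the flash drive in place.
# The helper succeeds whether or not the file exists.

remove_rclone_conf() {
    [ -f "$1" ] && rm "$1"
    return 0
}

# remove_rclone_conf /boot/config/plugins/rclone/.rclone.conf
```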

 

Edit: I figured it out! In the actual plugin window, I just needed to delete "rclone config" and click update. Then when I tried to actually run it again, I got the new remote option etc.

Edited by Bearco

This is my mount script... it used to work perfectly, but now I am getting an error.

 

The current mounting script:

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End Check if script already running  ##########

#######  Start rclone google mount  ##########

# check if google mount already created
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

    # create directories for rclone mount and unionfs mount
    mkdir -p /mnt/user/mount_rclone/google_vfs
    mkdir -p /mnt/user/mount_unionfs/google_vfs
    mkdir -p /mnt/user/rclone_upload/google_vfs

    rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

    # check if mount successful - slight pause to give mount time to finalise
    sleep 5

    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone google vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone google vfs mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End rclone google mount  ##########

#######  Start unionfs mount  ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
else
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End Mount unionfs  ##########

############### starting dockers that need unionfs mount ######################

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
else
    touch /mnt/user/appdata/other/rclone/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start binhex-emby
    docker start binhex-sabnzbd
    docker start binhex-radarr
    docker start binhex-sonarr
fi

############### end dockers that need unionfs mount ######################

exit

 

The error message that I get:

27.02.2021 15:00:01 INFO: mounting rclone vfs.
2021/02/27 15:00:03 Fatal error: Directory is not empty: /mnt/user/mount_rclone/google_vfs If you want to mount it anyway use: --allow-non-empty option
27.02.2021 15:00:06 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:00.06

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

 

So, as suggested in the error message, I added "--allow-non-empty", and the mount line now looks like this:

rclone mount --allow-non-empty --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

 

 

Now I get another error message: 

27.02.2021 15:07:21 INFO: mounting rclone vfs.
2021/02/27 15:07:22 mount helper error: fusermount: unknown option 'nonempty'
2021/02/27 15:07:22 Fatal error: failed to mount FUSE fs: fusermount: exit status 1
27.02.2021 15:07:26 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:07.26

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

Please help me!!

Edited by livingonline8

FWIW,

I'm running version 2020.09.29 of the rclone plugin with absolutely no issues on 6.9 RC2. It's been running for months. I don't know if this is the most recent version of the plugin or not, but it kinda doesn't matter.
You can update rclone from within the plugin, so... yeah, this is great!

Also worth mentioning: I don't recall exactly if this is what I did, but I believe I had this version of the rclone plugin installed prior to updating to any 6.9 release, then upgraded to 6.9. Again, no issues.

Edited by Stupifier

@livingonline8

... I see that you were actually one of the people that already asked this question back in December and Stupifier told you not to mount to a non-empty folder.

The --allow-non-empty option does not work anymore on Unraid.

 

I suggest you read the suggestions you already got a couple of months ago 🙄

Edited by Waseh
31 minutes ago, Waseh said:

@livingonline8

... I see that you were actually one of the people that already asked this question back in December and Stupifier told you not to mount to a non-empty folder.

The --allow-non-empty option does not work anymore on Unraid.

 

I suggest you read the suggestions you already got a couple of months ago 🙄

Thank you for your patience with me... please forgive me but I am not tech guru... I bet you are 😊

 

It is true that I was told not to mount to a non-empty folder, but my setup worked fine for two years, and I have to mount to a non-empty folder because I am combining (via unionfs) the media folder on Google Drive with the media folder on my Unraid box.

 

Why did this break all of a sudden? And again, I really have no option but to mount to a non-empty folder, because all my media is there.
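Not an official fix, but one common cause of "Directory is not empty" is a stale FUSE mount left behind by a crash, or files written into the mountpoint while it was unmounted. A sketch of the cleanup, assuming the paths from the script above (stale_unmount is a made-up helper name; fusermount -uz does a lazy unmount):

```shell
#!/bin/bash
# Sketch only: lazily unmount a possibly-stale FUSE mount, then inspect
# what is left in the directory before remounting.

stale_unmount() {
    command -v fusermount >/dev/null && fusermount -uz "$1" 2>/dev/null
    return 0
}

# stale_unmount /mnt/user/mount_rclone/google_vfs
# ls -A /mnt/user/mount_rclone/google_vfs   # anything left must be moved aside
```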


Hi @Waseh, I am facing an issue with rclone. Below I have mentioned the detailed information regarding that. 

 

Unraid: 6.8.3 | rclone: 1.54.0

 

I have multiple apps running in Docker, such as Sonarr, Radarr, Plex, and rclone. Before rclone can mount the Google Drive, Sonarr/Radarr start creating folders in my mount location.

 

Yes, I use Google Drive for the Media and Sonarr/Radarr stores the files to Google Drive which is mounted by rclone.

 

So, if Sonarr/Radarr run before rclone can complete the mount, rclone throws this error:

 

2021/03/01 20:31:27 Fatal error: Directory is not empty: /mnt/disks/ua_hdd1/gdrive If you want to mount it anyway use: --allow-non-empty option
Script Starting Mar 01, 2021 20:38.01

 

Now, as the error suggests, I could make use of --allow-non-empty, but I think that has been discontinued by rclone officially.

 

So let me know if there is a concrete solution that would make sure nothing writes to the mount location before rclone mounts the Google Drive there.
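One possible approach, sketched under assumptions: sweep anything Sonarr/Radarr wrote into the mountpoint into a holding folder before mounting, so the mountpoint is empty when rclone starts. clear_mountpoint and the holding path are made up; the mountpoint is the one from the post; dotfiles are skipped for brevity.

```shell
#!/bin/bash
# Sketch only: empty the mountpoint (unless it is already a live mount)
# by moving its contents into a holding directory.

clear_mountpoint() {
    local mnt="$1" hold="$2"
    mkdir -p "$mnt" "$hold"
    # Only sweep a plain directory with content, never a live mount.
    if ! mountpoint -q "$mnt" 2>/dev/null && [ -n "$(ls -A "$mnt")" ]; then
        mv "$mnt"/* "$hold"/
    fi
    return 0
}

# clear_mountpoint /mnt/disks/ua_hdd1/gdrive /mnt/disks/ua_hdd1/gdrive_overflow
# ...then run the rclone mount, and move the swept files back into the
# mounted remote afterwards (e.g. with rclone move).
```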

 

rclone command:

 

rclone mount --allow-other --buffer-size 256M --dir-cache-time 96h --timeout 1h --default-permissions --uid 99 --gid 100 --umask 002 --vfs-cache-mode full --vfs-cache-max-age 24h -v --progress --cache-dir /mnt/disks/ua_hdd1/rclone/cache gdrive: /mnt/disks/ua_hdd1/gdrive &

 

Mount location: /mnt/disks/ua_hdd1/gdrive

 

rclone config (sanitized):

 

[gdrive]
type = drive
client_id = xxxxxxx.apps.googleusercontent.com
client_secret = xxxxxx
scope = drive
token = {"access_token":"xxxxxxx"}
team_drive = xxxxxx
root_folder_id = 

 

Looking forward to hearing from you.

 

rclone forum post regarding the same issue: https://forum.rclone.org/t/how-to-delete-files-folder-inside-the-mount-location-before-mounting/22566

 

Thanks.

Edited by learningunraid
35 minutes ago, learningunraid said:

If only I knew how to do that! Sonarr/Radarr run automatically.

Unraid GUI --> Docker tab --> uncheck Autostart next to Sonarr/Radarr. Now they do not run automatically.


Hey guys... I got rclone working and have been able to link my gdrives using custom APIs and all that stuff, plus Dropbox, Mega and whatnot... but they are all mounted as directories. This is great for some of them, but I would also like to have a traditional type of cloud where the files are actually mirrored locally, and changes to the files, or files added, are mirrored back to the cloud...

 

How would I go about setting this up.. if it is possible?

23 hours ago, questionbot said:

Hey guys... I got rclone working and have been able to link my gdrives using custom APIs and all that stuff, plus Dropbox, Mega and whatnot... but they are all mounted as directories. This is great for some of them, but I would also like to have a traditional type of cloud where the files are actually mirrored locally, and changes to the files, or files added, are mirrored back to the cloud...

 

How would I go about setting this up.. if it is possible?

 

If I make a script and set it to run daily like this... is this going to work?


 

#!/bin/bash
rclone sync /path1 remote1:
rclone sync /path2 remote2:
rclone sync /path3 remote3:
rclone sync /path4 remote4:

 

Do I need some kind of code to determine if the script is already running, and to not run if it is?
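Yes, a lock is the usual way to do that, and flock(1) makes it a few lines. A sketch using the placeholder remotes/paths from the post above: the script takes an exclusive lock on a lock file, and a second copy fails the non-blocking lock attempt and exits instead of running concurrently.

```shell
#!/bin/bash
# Sketch only: prevent overlapping runs of a daily sync with flock(1).

LOCKFILE=/tmp/rclone_daily_sync.lock

exec 200>"$LOCKFILE"
if ! flock -n 200; then
    echo "Sync already running - exiting."
    exit 0
fi

# The lock is held until this script exits, then released automatically.
if command -v rclone >/dev/null; then
    rclone sync /path1 remote1:
    rclone sync /path2 remote2:
    rclone sync /path3 remote3:
    rclone sync /path4 remote4:
fi
```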


How do you mount a remote when you have an encrypted config file? I had it all working... then went to encrypt the config... now rclone mount asks for the password every time it is run, so my mount script is broken.

 

Is there a way to get mounting to work somehow? 
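rclone can take the config password non-interactively, which un-breaks mount scripts after encrypting the config. Both mechanisms below are standard rclone features; the password, file path and remote name are placeholders.

```shell
#!/bin/bash
# Sketch only: supply the config password to rclone without a prompt.

# Option 1: environment variable read by rclone at startup.
export RCLONE_CONFIG_PASS='example-password'

# Option 2: have rclone fetch the password from an external command, e.g.
# a root-only readable file on the flash drive:
# rclone mount --password-command "cat /boot/config/rclone_pass" remote: /mnt/user/mount_rclone/remote &
```

Either way the secret ends up stored somewhere on the server, so this mainly protects the config file itself; keep the script or password file readable by root only.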

Edited by questionbot

/tmp/user.scripts/tmpScripts/Rclone_mount/script: line 5: section: command not found

I get this when I try to run a mount script.

Sorted it out: I just needed to run it in the background.


Hello friends of Unraid,

I would like to know where I am going wrong...

I just managed to set up my personal OneDrive, and I was able to see the files that are already on my OneDrive by using "rclone ls Onedrive:"... Now I am trying to mount the OneDrive, but I can't get it to work.
Is it possible to mount the drive by using the Unassigned Devices plugin somehow?

It would be nice to make the OneDrive available in the /mnt directory.

What I have seen is the directory /mnt/disks/rclone_volume, but the directory is empty (viewed with WinSCP)...

I would like to expose the OneDrive as a share from Unraid; could this be done somehow?
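The mount is done with rclone itself, not Unassigned Devices; mounting into a folder under /mnt/disks makes the files visible in /mnt and shareable. A sketch, where "Onedrive" is the remote name from the post but the mountpoint path and helper name are made up:

```shell
#!/bin/bash
# Sketch only: create the mountpoint and mount the OneDrive remote there.

mount_onedrive() {
    local target="$1"
    mkdir -p "$target" || return 1
    # --allow-other lets non-root users (e.g. the SMB share) see the files.
    if command -v rclone >/dev/null; then
        rclone mount --allow-other Onedrive: "$target" &
    fi
    return 0
}

# mount_onedrive /mnt/disks/onedrive
```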

Edited by Lobsi

This doesn't necessarily have anything to do with the plugin, rclone, or Unraid, but just in case anyone runs into this issue, here is a solution. This is really more of an issue with pihole, but rclone is affected because it does so many DNS lookups.

 

I had rclone backing up to OneDrive from the Unraid server, and it wouldn't delete files, and uploads were odd. I checked the script output and saw these entries:

 

2021/03/24 11:51:54 Failed to create file system for "onedrive:backups/some_folder_name": failed to get root: Get "https://graph.microsoft.com/v1.0/drives/blahblahblah/root": dial tcp: lookup graph.microsoft.com on 192.168.xx.xx:53 server misbehaving
2021/03/24 11:51:54 Failed to create file system for "onedrive:backups/some_folder_name": failed to get root: Get "https://graph.microsoft.com/v1.0/drives/blahblahblah/root": dial tcp: lookup graph.microsoft.com on 192.168.xx.xx:53 server misbehaving

 

This meant that rclone was having DNS resolution issues. Well, I had a pihole serving DNS requests. Apparently, rate limiting was introduced to pihole, and because rclone is so chatty with DNS, it was hitting the rate limit when trying to back up. You can see this as a spike in pihole activity during a backup.

 

The solution is to modify pihole's /etc/pihole/pihole-FTL.conf and set the RATE_LIMIT.  I set it to RATE_LIMIT=0/0 because that disables it but I imagine you can tweak and tune as you want.

 


server-diagnostics-20210326-2259.zip

 

Hello :)

 

I've got a strange problem that started today, after everything worked superbly for about six months.

 

After I mount my seedbox I can enter and see all the content from the share, but when I go into my Docker container editor I can't see the "files" folder. I can manually add it to the path and then see the content inside, but none of the containers can access it.

 

I have tried fixing permissions and moving the share to cache and back to a user share, but nothing works. Any help would be much appreciated :)


 

Edit: Fixed it! I went into the Unraid settings, then the rclone settings, and updated to the latest version :)

Edited by X672

I'm sorry if this has been explained before, but I just can't find it:

 

Any reason why I'm getting this warning in fix common problems:

 

Docker application Rclone-mount has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option


 

 

This is what the docker looks like (see docker-rclone-1.png).

 

 

 

 

docker-rclone-1.PNG


@APD189

This is a thread for the plugin version of rclone. If you're having trouble with the docker version you should probably direct your question to the docker specific thread.


Hi everyone, and sorry for my (probably dumb) question:
I installed the plugin and mapped a folder to reach my gdrive folders (I prefer to avoid two-way sync because I'm using it on various PCs).

Everything seems to be fine, because on Unraid I can see the content of my gdrive, but when I try to reach it using W10 the system gives me an error telling me I don't have authorization to enter the folder.
Where am I going wrong?
Thanks for your help!
[GDrive_rodeg]
type = drive
client_id = xxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxxxxxxxxxxx
scope = drive
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"1//xxxx","expiry":"2021-03-29T10:47:02.890763241+02:00"}
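A likely cause (not certain without more detail) is that the mount is owned by root and not readable by Unraid's SMB user (nobody, uid 99 / gid 100). Remounting with ownership/permission flags often fixes the Windows "no authorization" error. All flags below are real rclone mount options; the mountpoint path is just an example.

```shell
#!/bin/bash
# Sketch only: remount with flags that make the files readable over SMB.

MOUNT_OPTS="--allow-other --uid 99 --gid 100 --umask 002"

if command -v rclone >/dev/null; then
    rclone mount $MOUNT_OPTS GDrive_rodeg: /mnt/disks/gdrive &
fi
```

If --allow-other is refused, user_allow_other may need to be enabled in /etc/fuse.conf on the host.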


