[Plugin] rclone


Waseh


1 hour ago, Waseh said:

Have you tried removing the config file and trying again?

Other than shutting down the server and plugging the flash drive into another system, is there another way to delete it? Browsing via the Flash share on another system won't let me delete it. I also can't figure out the path in Krusader.

 

Edit: I figured it out! In the actual plugin window, I just needed to delete the "rclone config" contents and click Update. Then when I ran it again, I got the new-remote option etc.

Edited by Bearco

This is my mount script. It used to work perfectly, but now I am getting an error.

 

The current mounting script:

 

#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End Check if script already running  ##########

#######  Start rclone google mount  ##########

# check if google mount already created
if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

    # create directories for rclone mount and unionfs mount
    mkdir -p /mnt/user/mount_rclone/google_vfs
    mkdir -p /mnt/user/mount_unionfs/google_vfs
    mkdir -p /mnt/user/rclone_upload/google_vfs

    rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

    # check if mount successful - slight pause to give mount time to finalise
    sleep 5
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone google vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone google vfs mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End rclone google mount  ##########

#######  Start unionfs mount  ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
else
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End Mount unionfs  ##########

############### starting dockers that need unionfs mount ######################

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
else
    touch /mnt/user/appdata/other/rclone/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start binhex-emby
    docker start binhex-sabnzbd
    docker start binhex-radarr
    docker start binhex-sonarr
fi

############### end dockers that need unionfs mount ######################

exit

 

The error message that I get:

27.02.2021 15:00:01 INFO: mounting rclone vfs.
2021/02/27 15:00:03 Fatal error: Directory is not empty: /mnt/user/mount_rclone/google_vfs If you want to mount it anyway use: --allow-non-empty option
27.02.2021 15:00:06 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:00.06

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

 

So, as suggested in the error message, I added "--allow-non-empty", and the mount command now looks like this:

rclone mount --allow-non-empty --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off media_vfs: /mnt/user/mount_rclone/google_vfs &

 

 

Now I get another error message: 

27.02.2021 15:07:21 INFO: mounting rclone vfs.
2021/02/27 15:07:22 mount helper error: fusermount: unknown option 'nonempty'
2021/02/27 15:07:22 Fatal error: failed to mount FUSE fs: fusermount: exit status 1
27.02.2021 15:07:26 CRITICAL: rclone google vfs mount failed - please check for problems.
Script Finished Feb 27, 2021 15:07.26

Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_unionfs_mount/log.txt

 

Please help me!!

Edited by livingonline8

FWIW,

I'm running version 2020.09.29 of the rclone plugin with absolutely no issues on 6.9 RC2. Been running for months. I don't know if this is the most recent version of the plugin or not, but it kinda doesn't matter: you can update rclone itself from within the plugin, so... yeah, this is great!

Also worth mentioning: I don't recall exactly if this is what I did, but I believe I had this version of the rclone plugin installed prior to updating to any 6.9 release, then upgraded to 6.9. Again, no issues.

Edited by Stupifier

@livingonline8

... I see that you were actually one of the people that already asked this question back in December and Stupifier told you not to mount to a non-empty folder.

The allow-non-empty function does not work anymore on Unraid.

 

I suggest you read the suggestions you already got a couple of months ago 🙄

Edited by Waseh
31 minutes ago, Waseh said:

@livingonline8

... I see that you were actually one of the people that already asked this question back in December and Stupifier told you not to mount to a non-empty folder.

The allow-non-empty function does not work anymore on Unraid.

 

I suggest you read the suggestions you already got a couple of months ago 🙄

Thank you for your patience with me... please forgive me, but I am not a tech guru... I bet you are 😊

 

It is true that I was told not to mount to a non-empty folder, but my setup was working fine for two years, and I have to mount to a non-empty folder because I am combining (via unionfs) the media folder on Google Drive with the media folder on my Unraid box.

 

Why did this break all of a sudden? And again, I really have no option but to mount to a non-empty folder, because all my media is there.
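Since --allow-non-empty no longer works, the practical alternative is to make the mount point genuinely empty before rclone runs. A minimal cleanup sketch, not a drop-in fix (the path is taken from the mount script above; fusermount is the standard FUSE unmount helper, and -z requests a lazy unmount):

```shell
#!/bin/bash
# Clean up a stale FUSE mount before re-mounting (sketch)
MOUNTPOINT="/mnt/user/mount_rclone/google_vfs"

# Lazily detach anything still attached to the mount point
fusermount -uz "$MOUNTPOINT" 2>/dev/null

# If real local files remain, stop rather than hide them under the mount
if [ -n "$(ls -A "$MOUNTPOINT" 2>/dev/null)" ]; then
    echo "ERROR: $MOUNTPOINT still contains local files - move them out first."
    exit 1
fi
```

If local files do show up here, they were most likely written while the remote was unmounted, and belong in the rclone_upload folder rather than being deleted.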


Hi @Waseh, I am facing an issue with rclone. The detailed information is below.

 

Unraid: 6.8.3 | rclone: 1.54.0

 

I have multiple apps running in Docker, such as Sonarr, Radarr and Plex, plus rclone. Before rclone can mount the Google Drive, Sonarr/Radarr start creating folders in my mount location.

 

Yes, I use Google Drive for the media, and Sonarr/Radarr store their files on the Google Drive that rclone mounts.

 

So, if Sonarr/Radarr run before rclone can complete the mount, rclone throws this error:

 

2021/03/01 20:31:27 Fatal error: Directory is not empty: /mnt/disks/ua_hdd1/gdrive If you want to mount it anyway use: --allow-non-empty option
Script Starting Mar 01, 2021 20:38.01

 

Now, as the error suggests, I could use --allow-non-empty, but I believe rclone has officially discontinued that option.

 

So, is there a concrete solution that would make sure nothing writes to the mount location before rclone mounts the Google Drive there?

 

rclone command:

 

rclone mount --allow-other --buffer-size 256M --dir-cache-time 96h --timeout 1h --default-permissions --uid 99 --gid 100 --umask 002 --vfs-cache-mode full --vfs-cache-max-age 24h -v --progress --cache-dir /mnt/disks/ua_hdd1/rclone/cache gdrive: /mnt/disks/ua_hdd1/gdrive &

 

mount location: /mnt/disks/ua_hdd1/gdrive

 

rclone config (sanitized):

 

[gdrive]
type = drive
client_id = xxxxxxx.apps.googleusercontent.com
client_secret = xxxxxx
scope = drive
token = {"access_token":"xxxxxxx"}
team_drive = xxxxxx
root_folder_id = 

 

Looking forward to hearing from you.

 

rclone forum post regarding the same issue: https://forum.rclone.org/t/how-to-delete-files-folder-inside-the-mount-location-before-mounting/22566

 

Thanks.
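One way to guarantee the ordering described above is to make the mount script itself responsible for the writers: stop the containers, clean and mount, verify, then start them again. A rough sketch (the container names and the minimal flag set are assumptions; the path is from the post; `mountpoint -q` is from util-linux):

```shell
#!/bin/bash
# Sketch: nothing writes to the mount location until rclone is verified mounted
MOUNTPOINT="/mnt/disks/ua_hdd1/gdrive"

docker stop sonarr radarr                  # stop anything that writes there
fusermount -uz "$MOUNTPOINT" 2>/dev/null   # clear any stale FUSE mount

rclone mount --allow-other gdrive: "$MOUNTPOINT" &
sleep 5

if mountpoint -q "$MOUNTPOINT"; then
    docker start sonarr radarr             # safe to write now
else
    echo "CRITICAL: rclone mount failed - leaving containers stopped."
    exit 1
fi
```

The key design point is that the containers are only ever started after the mount check passes, so nothing can drop files into the bare directory.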

Edited by learningunraid

Hey guys... I got rclone working and have been able to link my Google Drives using custom APIs and all that stuff, Dropbox and Mega and whatnot, but they are all mounted as directories. This is great for some of them, but I would also like a traditional kind of cloud where the files are actually mirrored locally, and changes or additions are mirrored back to the cloud...

 

How would I go about setting this up.. if it is possible?

23 hours ago, questionbot said:

Hey guys... I got rclone working and have been able to link my Google Drives using custom APIs and all that stuff, Dropbox and Mega and whatnot, but they are all mounted as directories. This is great for some of them, but I would also like a traditional kind of cloud where the files are actually mirrored locally, and changes or additions are mirrored back to the cloud...

 

How would I go about setting this up.. if it is possible?

 

If I make a script and set it to run daily like this... is this going to work?


 

#!/bin/bash
rclone sync /path1 remote1:
rclone sync /path2 remote2:
rclone sync /path3 remote3:
rclone sync /path4 remote4:

 

Do I need some kind of code to determine if the script is already running, and not run if it is?
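A single-instance guard like that is a good idea, since a daily sync can overrun its schedule. One common sketch uses `flock` (assuming it is available, as it usually is on Linux), which avoids the stale-lock-file problem of the touch/rm approach used in the mount script earlier in the thread:

```shell
#!/bin/bash
# Exit immediately if another copy of this script holds the lock
LOCKFILE="/tmp/rclone_daily_sync.lock"

exec 9>"$LOCKFILE"        # open the lock file on file descriptor 9
if ! flock -n 9; then     # try to take the lock without blocking
    echo "Sync already running - exiting."
    exit 0
fi
# The kernel releases the lock automatically when the script exits,
# even if it crashes mid-sync.

rclone sync /path1 remote1:
rclone sync /path2 remote2:
rclone sync /path3 remote3:
rclone sync /path4 remote4:
```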


How do you mount a remote when you have an encrypted config file? I had it all working, then went to encrypt the config; now rclone mount asks for the password every time it runs, so my mount script is broken...

 

Is there a way to get mounting to work somehow? 
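rclone can read the config password from the environment instead of prompting, which is the usual way to keep an encrypted config working in unattended scripts. A sketch (`RCLONE_CONFIG_PASS` is a documented rclone environment variable; the password-file path and remote name are examples):

```shell
#!/bin/bash
# Supply the config password non-interactively so `rclone mount` won't prompt
export RCLONE_CONFIG_PASS="$(cat /root/.rclone_pass)"   # file readable by root only

rclone mount --allow-other remote: /mnt/user/mount_rclone/remote &
```

Newer rclone versions also support --password-command, which runs a program to fetch the password instead of reading it from a stored file.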

Edited by questionbot
  • 2 weeks later...

Hello friends of Unraid,

I would like to know where I am going wrong...

I just managed to set up my personal OneDrive, and I was able to see the files already on it by using "rclone ls Onedrive:"... Now I am trying to mount the OneDrive, but I can't manage it.
Is it possible to mount the drive using the Unassigned Devices plugin somehow?

It would be nice to make the OneDrive available in the /mnt directory.

What I have seen is the directory /mnt/disks/rclone_volume, but the directory is empty (looking with WinSCP)...

I would like to expose the OneDrive as a share from Unraid; could this be done somehow?
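For what it's worth, you don't need the Unassigned Devices plugin for this; rclone can mount straight into a directory under /mnt/disks, which can then be exported as a share. A sketch using the remote name from the `rclone ls` command above (the target directory is an assumption):

```shell
#!/bin/bash
# Mount the OneDrive remote where Unraid expects removable/remote volumes
mkdir -p /mnt/disks/rclone_volume
rclone mount --allow-other Onedrive: /mnt/disks/rclone_volume &
```

--allow-other is usually needed here: without it, only the user who ran the mount can see the files, which could explain a directory that looks empty from other tools.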

Edited by Lobsi

This doesn't necessarily have anything to do with the plugin, rclone, or Unraid, but just in case anyone runs into this issue, here is a solution. It is really more of a Pi-hole issue, but rclone is affected because it does so many DNS lookups.

 

I had rclone backing up to OneDrive from the Unraid server; it wouldn't delete files, and uploads were odd. I checked the script output and saw these entries:

 

2021/03/24 11:51:54 Failed to create file system for "onedrive:backups/some_folder_name": failed to get root: Get "https://graph.microsoft.com/v1.0/drives/blahblahblah/root": dial tcp: lookup graph.microsoft.com on 192.168.xx.xx:53 server misbehaving
2021/03/24 11:51:54 Failed to create file system for "onedrive:backups/some_folder_name": failed to get root: Get "https://graph.microsoft.com/v1.0/drives/blahblahblah/root": dial tcp: lookup graph.microsoft.com on 192.168.xx.xx:53 server misbehaving

 

This meant that rclone was having DNS resolution issues. Well, I had a Pi-hole serving DNS requests. Apparently, rate limiting was introduced to Pi-hole, and because rclone is so chatty with DNS, it was hitting the rate limit when trying to back up. You can see this as a spike in Pi-hole queries during a backup.

 

The solution is to modify Pi-hole's /etc/pihole/pihole-FTL.conf and set RATE_LIMIT. I set RATE_LIMIT=0/0 because that disables it, but I imagine you can tweak and tune it as you want.
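For reference, the change amounts to one line in the FTL config followed by a DNS restart (per the Pi-hole FTL docs, the format is queries/seconds: 0/0 disables the limit entirely, while a value like 5000/60 merely raises it):

```shell
# On the Pi-hole host: disable FTL's per-client DNS rate limit
echo "RATE_LIMIT=0/0" >> /etc/pihole/pihole-FTL.conf
pihole restartdns
```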

 


server-diagnostics-20210326-2259.zip

 

Hello :)

 

I've got a strange problem that started today, after everything worked superbly for about six months.

 

After I mount my seedbox, I can enter and see all the content of the share. But when I go into my Docker container editor, I can't see the "files" folder; I can manually add it to the path and then see the content inside, but none of the containers can access it.

 

I have tried fixing permissions and moving the share to cache and back to the user share, but nothing works. Any help would be much appreciated :)

folders01.PNG, folders02.PNG

 

Edit:

 

Fixed it! I went into the Unraid settings, then the rclone settings, and updated to the latest version :)

 


Edited by X672

I'm sorry if this has been explained before, but I just can't find it:

 

Any reason why I'm getting this warning in Fix Common Problems:

 

Docker application Rclone-mount has volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option


 

 

This is what the docker looks like (see docker-rclone-1.png).

 

 

 

 

docker-rclone-1.PNG
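For context, the warning is about Docker mount propagation: paths under /mnt/disks are mounted after the Docker service starts, so the container needs "slave" propagation on the bind mount to see them. In the Unraid template this is the access mode "RW/Slave" on the path mapping; the equivalent raw docker flag looks roughly like this (the image name and paths are placeholders, not the actual container config):

```shell
# Pass an Unassigned Devices path into a container with slave propagation
docker run -d --name rclone-mount \
  -v /mnt/disks/ua_hdd1/gdrive:/data:rw,slave \
  your/rclone-image
```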


Hi everyone, and sorry for my (probably dumb) question:
I installed the plugin and mapped a folder to reach my Google Drive folders (I prefer to avoid two-way sync because I'm using it on various PCs).

Everything seems fine: on Unraid I can see the content of my Google Drive. But when I try to reach it from Windows 10, the system gives me an error saying I don't have authorization to enter the folder.
Where am I going wrong?
Thanks for your help!
rodeg3.JPG
[GDrive_rodeg]
type = drive
client_id = xxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxxxxxxxxxxx
scope = drive
token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxxxxx","token_type":"Bearer","refresh_token":"1//xxxx","expiry":"2021-03-29T10:47:02.890763241+02:00"}
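If Unraid itself can see the files but Windows gets a permission error, the mount is probably owned by root with restrictive modes, which Unraid's SMB shares then refuse to serve. One common sketch is to mount as Unraid's nobody:users with an open umask (uid 99 / gid 100 are Unraid's nobody:users; the flags are standard rclone mount options, and the target path is an example):

```shell
#!/bin/bash
# Mount with ownership/permissions that Unraid's SMB shares can serve
rclone mount --allow-other --uid 99 --gid 100 --umask 000 \
  GDrive_rodeg: /mnt/disks/gdrive &
```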

rodeg1.JPG

rodeg2.JPG

  • 2 weeks later...

I get the following error in the Sonarr log when it tries to import a downloaded episode.
 

System.UnauthorizedAccessException: Access to the path is denied.

 

I started the Sonarr container after I ran the mount script.

When I change the rights in Unraid via Tools --> New Permissions, the problem is solved, but after a couple of minutes I have the same problem again.

Edited by Stephan296
  • 2 weeks later...

@Dr.NAS

Could you link me to the official source? I can't seem to find it in the rclone GitHub repo.

 

I agree, though, that the colorful icon is a bit jarring in the new design language of Unraid. There is an all-black version in the rclone repo I might switch to.

 

Edit: oh, now I remember why I chose the color version: the monochrome one won't work with the dark-mode theme! Going to leave it as it is.

Edited by Waseh
