[Plugin] rclone


Waseh


3 minutes ago, Kaizac said:

You mean it doesn't finish? Try running it from User Scripts: create a script with the mount command and run it in the background. You can check SpaceInvaders' video for that if you want some visual guidance.

 

As for the location, I don't know whether or why Waseh said that, but it's not true. I'm mounting in /mnt/user/ and all my Dockers are reading from and writing to the mount.

For more efficient mounting of Gdrive, check this topic; it may help you understand it better.

 

 

I have now tried running the same command via SSH in PuTTY and with a user script, but, as I expected, the result is the same: the command just never returns.

[GDrive]
type = drive
scope = drive
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"REDACTED"}
client_id = REDACTED
client_secret = REDACTED

Alright, here it is, with all the tokens etc. REDACTED. The expiry is definitely still valid, and other commands work as well (e.g. sync, ls).

1 hour ago, sasjafor said:

[GDrive]
type = drive
scope = drive
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"REDACTED"}
client_id = REDACTED
client_secret = REDACTED

Alright, here it is, with all the tokens etc. REDACTED. The expiry is definitely still valid, and other commands work as well (e.g. sync, ls).

Sorry for not understanding. What exactly do you expect to happen that doesn't? You say it gets mounted and commands like sync work, so what is going wrong?

37 minutes ago, Kaizac said:

Sorry for not understanding. What exactly do you expect to happen that doesn't? You say it gets mounted and commands like sync work, so what is going wrong?

The mount command never finishes, so if I tried to write a script that first mounts and then copies something to Google Drive, it would simply get stuck on the mount command, even though Google Drive mounts successfully.

Something seems to be going wrong where the mounting itself works, but the mount command gets stuck, maybe in an endless loop.

2 minutes ago, sasjafor said:

The mount command never finishes, so if I tried to write a script that first mounts and then copies something to Google Drive, it would simply get stuck on the mount command, even though Google Drive mounts successfully.

Something seems to be going wrong where the mounting itself works, but the mount command gets stuck, maybe in an endless loop.

Ah, I see it: you're missing the & at the end of your mount command.

 

Should be:

 

rclone mount --max-read-ahead 1024k GDrive: /mnt/disks/gdrive &
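To illustrate why the trailing & fixes this: rclone mount stays in the foreground by design, so without the & a script blocks at that line forever. A minimal sketch of the behavior, using sleep as a stand-in for rclone mount (the real command needs a configured remote):

```shell
#!/bin/bash
# 'sleep 2' stands in for a long-running foreground command like 'rclone mount'.
start=$(date +%s)
sleep 2 &              # the trailing '&' backgrounds it; the script continues at once
bg_pid=$!
elapsed=$(( $(date +%s) - start ))
# Without '&' we would have waited the full 2 seconds before reaching this line.
echo "continued after ${elapsed}s, background job running as PID ${bg_pid}"
wait "$bg_pid"         # block explicitly only when you actually want to
```

The same pattern applies to the mount script: background the mount, then let the rest of the script (or User Scripts) carry on.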

6 hours ago, sasjafor said:

Thanks, that works! Strange that that is necessary though.

Glad you got it working! If you want to use Gdrive heavily, I really recommend reading the topic I linked earlier. The mount command you're using now gives pretty poor performance (and not just for streaming).


I too am struggling to get Google Drive to sync anything.  I'm using the following Config:

 

[google-drive-folder]
type = drive
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2019-01-24T02:00:33.009530267-05:00"}

and with the following mount script:

#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/google-drive-myfolder
#mkdir -p /mnt/disks/dropbox
#mkdir -p /mnt/disks/google
#mkdir -p /mnt/disks/secure


#This section mounts the various cloud storage into the folders that were created above.

rclone mount --max-read-ahead 1024k --allow-other google1: /mnt/disks/google-drive-myfolder &
#rclone mount --max-read-ahead 1024k --allow-other dropbox: /mnt/disks/dropbox &
#rclone mount --max-read-ahead 1024k --allow-other google: /mnt/disks/google &
#rclone mount --max-read-ahead 1024k --allow-other secure: /mnt/disks/secure &

I commented out the stuff I'm not using (copied from the SpaceInvaderOne video).

 

When I use the lsd command in the terminal, I see two folders (Google Photos and another I had set up previously), so I know it's mounted. However, when I try to run the following command (to sync my Google Photos folder to my Unraid share):

 

rclone sync -v '/mnt/disks/google-drive-myfolder/Google Photos' /mnt/user/Multimedia/Photos/google-photos-myfolder/

I get the following:

2019/01/24 01:42:40 ERROR : : error reading source directory: directory not found
2019/01/24 01:42:40 INFO  : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for checks to finish
2019/01/24 01:42:40 INFO  : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for transfers to finish
2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting files as there were IO errors
2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting directories as there were IO errors
2019/01/24 01:42:40 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

Anyone see anything I'm doing blatantly wrong?

6 minutes ago, Coolsaber57 said:

I too am struggling to get Google Drive to sync anything.  I'm using the following Config:

 


[google-drive-folder]
type = drive
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2019-01-24T02:00:33.009530267-05:00"}

and with the following mount script:


#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/google-drive-myfolder
#mkdir -p /mnt/disks/dropbox
#mkdir -p /mnt/disks/google
#mkdir -p /mnt/disks/secure


#This section mounts the various cloud storage into the folders that were created above.

rclone mount --max-read-ahead 1024k --allow-other google1: /mnt/disks/google-drive-myfolder &
#rclone mount --max-read-ahead 1024k --allow-other dropbox: /mnt/disks/dropbox &
#rclone mount --max-read-ahead 1024k --allow-other google: /mnt/disks/google &
#rclone mount --max-read-ahead 1024k --allow-other secure: /mnt/disks/secure &

I commented out the stuff I'm not using (copied from the SpaceInvaderOne video).

 

When I use the lsd command in the terminal, I see two folders (Google Photos and another I had set up previously), so I know it's mounted. However, when I try to run the following command (to sync my Google Photos folder to my Unraid share):

 


rclone sync -v '/mnt/disks/google-drive-myfolder/Google Photos' /mnt/user/Multimedia/Photos/google-photos-myfolder/

I get the following:


2019/01/24 01:42:40 ERROR : : error reading source directory: directory not found
2019/01/24 01:42:40 INFO  : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for checks to finish
2019/01/24 01:42:40 INFO  : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for transfers to finish
2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting files as there were IO errors
2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting directories as there were IO errors
2019/01/24 01:42:40 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

Anyone see anything I'm doing blatantly wrong?

 

If the info you posted is correct, then you named your Gdrive remote google-drive-folder in rclone. But then in your mount:

rclone mount --max-read-ahead 1024k --allow-other google1: /mnt/disks/google-drive-myfolder &

 

You mount google1:, which doesn't exist. So either rename your Gdrive remote to google1 in rclone config, or change your mount command to use google-drive-folder:.
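To make the naming rule concrete: the section header in rclone.conf defines the remote name, and the mount command must use exactly that name before the colon. A sketch with example names:

```ini
# rclone.conf -- the [section] name IS the remote name
[google-drive-folder]
type = drive

# so the mount command must reference that same name:
#   rclone mount --max-read-ahead 1024k --allow-other google-drive-folder: /mnt/disks/google-drive-myfolder &
# mounting 'google1:' fails because no [google1] section exists
```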

8 hours ago, Kaizac said:

 

If the info you posted is correct, then you named your Gdrive remote google-drive-folder in rclone. But then in your mount:

rclone mount --max-read-ahead 1024k --allow-other google1: /mnt/disks/google-drive-myfolder &

 

You mount google1:, which doesn't exist. So either rename your Gdrive remote to google1 in rclone config, or change your mount command to use google-drive-folder:.

That makes sense, so I updated it to show:

 

#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/google-drive-myfolder
#mkdir -p /mnt/disks/dropbox
#mkdir -p /mnt/disks/google
#mkdir -p /mnt/disks/secure


#This section mounts the various cloud storage into the folders that were created above.

rclone mount --max-read-ahead 1024k --allow-other google-drive-myusername: /mnt/disks/google-drive-myfolder &

The myusername matches the name of the config.  I then saved it to User Scripts, unmounted, then mounted.

 

However, when I try to test it with this command:

rclone sync /mnt/disks/google-drive-myfolder/brewing.xlsx /mnt/user/Multimedia/brewing.xlsx

I still see these errors when trying to sync a file:

2019/01/24 10:23:38 ERROR : : error reading source directory: directory not found
2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting files as there were IO errors
2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting directories as there were IO errors
2019/01/24 10:23:38 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

So maybe it's not actually mounting correctly, but I can see the folders using the lsd command in the terminal, which tells me that the config is correct.

4 minutes ago, Coolsaber57 said:

That makes sense, so I updated it to show:

 


#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/google-drive-myfolder
#mkdir -p /mnt/disks/dropbox
#mkdir -p /mnt/disks/google
#mkdir -p /mnt/disks/secure


#This section mounts the various cloud storage into the folders that were created above.

rclone mount --max-read-ahead 1024k --allow-other google-drive-myusername: /mnt/disks/google-drive-myfolder &

The myusername matches the name of the config.  I then saved it to User Scripts, unmounted, then mounted.

 

However, when I try to test it with this command:


rclone sync /mnt/disks/google-drive-myfolder/brewing.xlsx /mnt/user/Multimedia/brewing.xlsx

I still see these errors when trying to sync a file:


2019/01/24 10:23:38 ERROR : : error reading source directory: directory not found
2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting files as there were IO errors
2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting directories as there were IO errors
2019/01/24 10:23:38 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

So maybe it's not actually mounting correctly, but I can see the folders using the lsd command in the terminal, which tells me that the config is correct.

If you lsd the rclone Gdrive remote and not the mount, it doesn't mean the mount went well. But I think the problem is that you are syncing a file, and maybe you can only sync folders. See if that works better.


Guys, do NOT attempt rclone copy/sync commands from or to rclone mounts. It is known to be very unreliable, and this is why you are all experiencing IO errors.

 

Instead, point directly to your rclone remote (i.e. gdrive:) as your source and/or destination in your rclone copy/sync commands.

 

Generally speaking, rclone mounts are only good for READING data, not WRITING. And if you are writing data, it is buggy and only works with small amounts of changes.

 

Moral of story.....only use the rclone mount to READ. If you need to write, do not involve the mount AT ALL.

 

This is not really a bug or anything specific to Unraid.....it is just how rclone works
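In line with the advice above, a hedged sketch of syncing straight from the remote rather than through the mount (the remote name and paths are examples, and the guard simply skips the call where rclone isn't installed):

```shell
#!/bin/bash
# Read from the remote itself ('gdrive:'), not from /mnt/disks/... where it is mounted.
SRC='gdrive:Google Photos'                        # example remote path
DST='/mnt/user/Multimedia/Photos/google-photos'   # example local destination
if command -v rclone >/dev/null 2>&1; then
  # --dry-run previews the transfer; drop it once the output looks right
  rclone sync -v --dry-run "$SRC" "$DST"
fi
```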

 

 

 

 

34 minutes ago, Kaizac said:

If you lsd the rclone Gdrive remote and not the mount, it doesn't mean the mount went well. But I think the problem is that you are syncing a file, and maybe you can only sync folders. See if that works better.

Ok, so it appears that there's some issue with some of the files in the Google drive that rclone doesn't like.  And you were right, it seems to work better with folders.  For a lot of the documents I have in my gdrive account, it shows IO errors - not sure if there's some kind of permissions setting either in rclone or gdrive that I need to set.  Any suggestions?

 

26 minutes ago, Stupifier said:

Guys, do NOT attempt rclone copy/sync commands from or to rclone mounts. It is known to be very unreliable, and this is why you are all experiencing IO errors.

 

Instead, point directly to your rclone remote (i.e. gdrive:) as your source and/or destination in your rclone copy/sync commands.

 

Generally speaking, rclone mounts are only good for READING data, not WRITING. And if you are writing data, it is buggy and only works with small amounts of changes.

 

Moral of story.....only use the rclone mount to READ. If you need to write, do not involve the mount AT ALL.

 

This is not really a bug or anything specific to Unraid.....it is just how rclone works

 

 

 

 

To clarify, for my future sync commands, I should be using:

 

rclone sync google-drive-myusername: /mnt/user/Multimedia/Photos/google-photos-myfolder/

if google-drive-myusername is my rclone config name?

2 minutes ago, Coolsaber57 said:

To clarify, for my future sync commands, I should be using:

 


rclone sync google-drive-myusername: /mnt/user/Multimedia/Photos/google-photos-myfolder/

if google-drive-myusername is my rclone config name?

I'm not sure exactly what you're trying to do...but yes, if your rclone config remote name is "google-drive-myusername".....then your rclone sync command should include "google-drive-myusername:". Look at the rclone sync documentation for further help.

 

I just wanted to make sure everyone knows what rclone mounts are designed for and why they were having issues. Just keep to my guidelines in my prior post and rclone will work great

12 minutes ago, Coolsaber57 said:

To clarify, for my future sync commands, I should be using:

 


rclone sync google-drive-myusername: /mnt/user/Multimedia/Photos/google-photos-myfolder/

if google-drive-myusername is my rclone config name?

Here is an example rclone sync command that syncs my entire Google Drive into /home/gdrive/. It is a one-way sync: FROM Google Drive TO /home/gdrive/.

 

rclone sync -c --transfers 40 --checkers 40 --tpslimit 5 --drive-upload-cutoff 256M --drive-v2-download-min-size 0M -v --no-update-modtime --drive-chunk-size 128M --fast-list --disable move,copy --log-file="/home/seed/rc-uploadlog.log" gdrive: /home/gdrive/

8 minutes ago, Stupifier said:

Here is an example rclone sync command that syncs my entire Google Drive into /home/gdrive/. It is a one-way sync: FROM Google Drive TO /home/gdrive/.

 

rclone sync -c --transfers 40 --checkers 40 --tpslimit 5 --drive-upload-cutoff 256M --drive-v2-download-min-size 0M -v --no-update-modtime --drive-chunk-size 128M --fast-list --disable move,copy --log-file="/home/seed/rc-uploadlog.log" gdrive: /home/gdrive/

 

16 minutes ago, Stupifier said:

I'm not sure exactly what you're trying to do...but yes, if your rclone config remote name is "google-drive-myusername".....then your rclone sync command should include "google-drive-myusername:". Look at the rclone sync documentation for further help.

 

I just wanted to make sure everyone knows what rclone mounts are designed for and why they were having issues. Just keep to my guidelines in my prior post and rclone will work great

That's all I really want to do as well - sync from Google Drive into my Unraid share.  I actually want to set up two: One to sync the Google Photos folder into my Photos share, and one to sync everything else.  I did get it to work and after updating my Krusader docker container to rw slave, I can see the files in the mounted folder.

 

And yes, the google-drive-myusername is the Config name.

 

For those other options in your sync command, are those to speed up the sync? Are there recommended flags we should be using?

2 minutes ago, Coolsaber57 said:

 

That's all I really want to do as well - sync from Google Drive into my Unraid share.  I actually want to set up two: One to sync the Google Photos folder into my Photos share, and one to sync everything else.  I did get it to work and after updating my Krusader docker container to rw slave, I can see the files in the mounted folder.

 

And yes, the google-drive-myusername is the Config name.

 

For those other options in your sync command, are those to speed up the sync? Are there recommended flags we should be using?

Those extra flags are a mixed bag. I'd recommend looking each individual flag up in the rclone documentation.

- You should use the fast-list flag for your sync jobs, as long as there are not massive changes to be synced.

- If you have massive bandwidth, take a look at the transfers/checkers flags.

- If you are transferring massive amounts of data, use the tpslimit flag to avoid API limit bans.

 

Read the docs on the rest or find forum posts about them. I personally use that specific command to sync one massive Google Drive to another Google Drive (I modified the paths when I posted, obviously). And I use Google Compute Engine (GCE) instances for that specific rclone transfer, because GCE has completely insane bandwidth to/from Google Drive (almost 1 GB/sec; yes, gigabyte, not gigabit).
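Putting those three recommendations together, a sketch of a sync command (the remote, paths, and parallelism numbers are illustrative, not tuned values; the guard skips the call where rclone isn't installed):

```shell
#!/bin/bash
# --fast-list          : fewer listing API calls (costs more memory)
# --transfers/--checkers : parallel transfers/checks, useful on high-bandwidth links
# --tpslimit           : caps API transactions per second to avoid rate-limit bans
FLAGS="--fast-list --transfers 8 --checkers 16 --tpslimit 5"
if command -v rclone >/dev/null 2>&1; then
  rclone sync $FLAGS -v 'gdrive:' /mnt/user/backup/gdrive
fi
```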

5 minutes ago, Stupifier said:

Those extra flags are a mixed bag. I'd recommend looking each individual flag up in the rclone documentation.

- You should use the fast-list flag for your sync jobs, as long as there are not massive changes to be synced.

- If you have massive bandwidth, take a look at the transfers/checkers flags.

- If you are transferring massive amounts of data, use the tpslimit flag to avoid API limit bans.

 

Read the docs on the rest or find forum posts about them. I personally use that specific command to sync one massive Google Drive to another Google Drive (I modified the paths when I posted, obviously). And I use Google Compute Engine (GCE) instances for that specific rclone transfer, because GCE has completely insane bandwidth to/from Google Drive (almost 1 GB/sec; yes, gigabyte, not gigabit).

Got it, thank you.


I'm having this same issue:

plugin: installing: https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
plugin: downloading https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
plugin: downloading: https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg ... done

+==============================================================================
| Installing new package /boot/config/plugins/rclone-beta/install/rclone-2018.08.25-bundle.txz
+==============================================================================

Verifying package rclone-2018.08.25-bundle.txz.
Installing package rclone-2018.08.25-bundle.txz:
PACKAGE DESCRIPTION:
Package rclone-2018.08.25-bundle.txz installed.
Downloading rclone
Downloading certs
Download failed - No existing archive found - Try again later
plugin: run failed: /bin/bash retval: 1

 

I tried your suggestion and changed the timeout to 10 and it seemed to work:

curl --connect-timeout 5 --retry 3 --retry-delay 2 --retry-max-time 30 -o /boot/config/plugins/rclone/install/rclone-current.zip https://downloads.rclone.org/rclone-current-linux-amd64.zip

 

However, when I try to set rclone up, I get an error that the rclone command isn't found, so it still doesn't seem to install. Any other suggestions?

Thanks!

 

 

EDIT: So I got it to install. First I had to manually create the rclone/install directories in the /boot/config folder, then run your curl command, then install the plugin through Apps.

Edited by CriticalMach
On 1/25/2019 at 7:25 PM, Waseh said:

Yea i haven't had time to push the fix (simple as it is). I'll see if I can get it done tonight. 

Not trying to be ungrateful, but did you manage to push the fix? I got the same error that the archive does not exist. The curl command doesn't seem to solve the issue, and I'm currently without a working rclone.

 

Thanks again!

