[Plugin] rclone


Waseh

906 posts in this topic Last Reply

Recommended Posts

1 minute ago, xl3b4n0nx said:

I have google drive and an encrypted folder mounted as google and secure. I set this up following Spaceinvaderone's tutorial. The problem I am having is with the unmount script. The 'fusermount -u /mnt/disks/secure' returns 'Invalid argument' and can't properly unmount the secure folder. How can I fix this error?

Use this if fusermount doesn't work:

 

umount -l /mnt/disks/secure

Link to post

I have a small question regarding the --bwlimit.

 

I have two scripts that run every hour to copy files from a remote server to the local server. I use the --bwlimit flag, and it works sometimes; when there is a lot of data to copy, it ends up saturating my bandwidth, making browsing and other online activities slow.

 

What I think is happening is that the --bwlimit is applied per script and not globally. For instance, at hour 0 the script starts with the bandwidth limit set to 4M. If it is still running at hour 1, a new session is created with its own 4M limit, for a total of 8M, and so on. Can anyone shed some light on this interaction?

 

Also, if the above is the case, is there a way to set a global limit to prevent this stacking of sessions?

 

Thanks in advance for any help and thanks to the devs for this great plugin.

Link to post
rclone does not have any special logic that checks for other active rclone instances. You'll have to program that yourself. There are many ways to skin a cat... you just need to be a little creative and willing to dive into a little bash coding (or whatever you are most familiar with).

Alternatively... you could just run both at 2M permanently if you'd rather not write a few lines of code.
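If you do want the scripted route, one common pattern is to take an exclusive lock so a new hourly run exits while the previous one is still copying. A minimal sketch using flock; the lock path, remote name, and directories are placeholders, not from this thread:

```shell
#!/bin/bash
# Skip this run if a previous rclone copy is still holding the lock.
LOCKFILE=/tmp/rclone-hourly.lock   # placeholder path

exec 9>"$LOCKFILE"                 # open fd 9 on the lock file
if ! flock -n 9; then              # non-blocking: fail if already locked
    echo "Previous rclone run still active; skipping."
    exit 0
fi

# Only one copy reaches this point at a time, so a single
# --bwlimit 4M cap applies overall instead of stacking per script.
rclone copy remote:data /mnt/user/data --bwlimit 4M
```

With this, the 4M limit never stacks, because overlapping runs bail out before starting a second transfer. rclone can also take a bandwidth timetable (e.g. --bwlimit "08:00,4M 23:00,off") if slow daytime browsing is the real concern.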
Link to post
  • 2 weeks later...
On 7/20/2019 at 11:12 PM, ProphetSe7en said:

Thank you. Need to look into that and see what I can make of it.

 

I have made a script for testing Discord messages. This one sends a message to Discord: it shows a bell, pings my user, and posts the text "this is a test". Now I need to figure out how to integrate it into the script to get the correct message.

 


curl -X POST "webhookurl" \
            -H "Content-Type: application/json"  \
            -d '{"username":"borg", "content":":bell: Hey <@userid>  This is a test"}' 

There is also a script that uses borg + rclone. At the end it sends an email on any error, or when the backup has finished without errors. It should be possible to change this to use Discord, I just don't know how yet.
https://pastebin.com/8WGmJgiQ
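One possible way to wire that webhook into a backup script (a sketch, not the pastebin script above; the webhook URL, remote name, and paths are placeholders):

```shell
#!/bin/bash
# Run an rclone sync and report the result to a Discord webhook.
WEBHOOK_URL="webhookurl"   # placeholder, use your real webhook URL

notify() {
    # Note: if $1 ever contains double quotes, it would need JSON escaping.
    curl -s -X POST "$WEBHOOK_URL" \
        -H "Content-Type: application/json" \
        -d "{\"username\":\"borg\", \"content\":\"$1\"}"
}

if rclone sync /mnt/user/data remote:backup 2>/tmp/rclone.err; then
    notify ":white_check_mark: backup finished without errors"
else
    notify ":bell: backup failed: $(head -c 200 /tmp/rclone.err)"
fi
```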

Hello, did you have any success with notifications for rclone? It's pretty much the only feature I'm missing. Nothing fancy needed: a Discord notification or an email sent only on sync errors would be enough ;)

Link to post
  • 2 weeks later...

Hi. I'm having trouble getting rclone to actually mount using spaceinvaderone's guide. When I view my Dropbox mount from the terminal (rclone lsd Dropbox:) I can see all of the folders in the root of my Dropbox.

 

However, when I try to mount it using the following script, nothing appears in the /mnt/disks/Dropbox folder:

 

mkdir -p /mnt/disks/Dropbox

rclone mount --max-read-ahead 1024k --allow-other Dropbox: /mnt/disks/Dropbox &

 

Why can the terminal see my Dropbox folders and files but not Krusader when I run the mount script?

Link to post
I moved rclone from one server to another. On the scripts tab it appears to be running fine, but if I go to the terminal and run rclone edit, it doesn't know the command. Any ideas?
That is because rclone edit isn't a command. Try rclone config.
Refer to the official rclone documentation for the full list of commands.
Link to post

I can't get my cache to mount because, for some reason, the --allow-non-empty flag is unknown to this version of fusermount.

root@Ovi:/mnt/disk3# rclone mount gcache: /mnt/disk3/gdrive --allow-other --cache-db-purge --allow-non-empty --buffer-size 32M --use-mmap --dir-cache-time 72h --drive-chunk-size 16M  --timeout 1h  --vfs-cache-mode minimal --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G &
[1] 36504
root@Ovi:/mnt/disk3# 2020/06/25 14:01:00 mount helper error: fusermount: unknown option 'nonempty'
2020/06/25 14:01:00 Fatal error: failed to mount FUSE fs: fusermount: exit status 1

yet if I try to mount the cache without it, rclone tells me to use it.

root@Ovi:/mnt/disk3# rclone mount gcache: /mnt/disk3/gdrive --allow-other --cache-db-purge --buffer-size 32M --use-mmap --dir-cache-time 72h --drive-chunk-size 16M  --timeout 1h  --vfs-cache-mode minimal --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G &
[1] 44383
root@Ovi:/mnt/disk3# 2020/06/25 14:06:25 Fatal error: Directory is not empty: /mnt/disk3/gdrive If you want to mount it anyway use: --allow-non-empty option

anyone have any idea?
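(For anyone hitting the same pair of errors, one possible workaround, an assumption rather than something confirmed in this thread: newer rclone/FUSE combinations no longer support mounting over a non-empty directory, so instead of forcing it with --allow-non-empty, detach any stale mount and verify the directory really is empty before mounting. Paths and remote name are taken from the post above; the flags are trimmed for brevity.)

```shell
#!/bin/bash
# Clear a stale mount, then only mount if the directory is truly empty.
MOUNTPOINT=/mnt/disk3/gdrive

fusermount -uz "$MOUNTPOINT" 2>/dev/null   # lazily detach any stale mount
mkdir -p "$MOUNTPOINT"

if [ -z "$(ls -A "$MOUNTPOINT")" ]; then
    rclone mount gcache: "$MOUNTPOINT" --allow-other &
else
    echo "$MOUNTPOINT is not empty; refusing to mount over existing files."
fi
```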

Link to post
11 minutes ago, crazyhorse90210 said:

I think it's been merged to v1.53 now.

That does not help, as Nextcloud/ownCloud is not able to update the modification date. The only possibility is to reupload everything. WebDAV sucks :(

Link to post
  • 2 weeks later...

I'm having an issue using rclone with Onedrive.

 

Due to rclone not supporting bidirectional syncing, I have been investigating options to provide this. The best route I found so far was to use rclone to mount OneDrive, then use two separate Syncthing Docker containers to sync between the OneDrive mount and the local folder.

Unfortunately, doing this throws up errors in Syncthing, saying there is insufficient space in the folder. From a bit of investigation, it looks like the reason is that the OneDrive mount is reported as being 1MB in size, rather than the 1TB it actually is.

Is there a way that either the size can be reported correctly, or at the very least increased (e.g. to 100MB), so Syncthing knows there is sufficient space?

Link to post
3 hours ago, thingie2 said:

I'm having an issue using rclone with Onedrive.

 

Due to rclone not supporting bidirectional syncing, I have been investigating options to provide this. The best route I found so far, was to use rclone to mount Onedrive, then use 2 seperate dockers of syncthings to sync between the Onedrive mount & the local folder.

Unfortunatly, doing this throws up errors in syncthings, saying there is insufficient space in the folder. From a bit of investigation, it looks like the reason for this, is that the Onedrive mount is seen as being 1MB in size, rather than the 1TB it actually is.

Is there a way that either the size can be correctly shown, or at the very least be increased in size (e.g. to 100MB) to allow syncthings to know there is sufficient space?

People use tools like Cloudplow to do what you want. You set it up to use rclone's sync function and run it at regular intervals. Just a suggestion:

https://github.com/l3uddz/cloudplow

Link to post
3 hours ago, Stupifier said:


I hadn't come across cloudplow whilst looking around, but from a quick initial look, it seems like it should do what I want, thanks. I'll take a better look/give it a try as an alternative.

Edited by thingie2
Link to post
  • 2 weeks later...

I'm trying to set up a OneDrive rclone config, but I can't open the URL shown by the config:

http://127.0.0.1:53682/auth?state=....

I replaced 127.0.0.1 with the Unraid server's IP, but it won't load.
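(A workaround commonly used for this, though not confirmed in this thread: the OAuth callback server only listens on 127.0.0.1:53682 on the machine running rclone config, so forwarding that port over SSH from a desktop with a browser lets the original URL work unchanged. The hostname below is a placeholder.)

```shell
# From a desktop machine with a browser, forward the rclone OAuth port:
ssh -L 53682:127.0.0.1:53682 root@unraid-server   # placeholder hostname

# Then open the http://127.0.0.1:53682/auth?state=... URL, exactly as
# printed by rclone config, in the desktop browser; the tunnel delivers
# the callback to the server.
```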

Edited by Alex.b
Link to post
On 7/4/2020 at 5:54 PM, Stupifier said:

Cloudplow developer also has made a complete rewrite of it called "crop". It is still beta sorta....but is like a Swiss army knife of rclone.

https://github.com/l3uddz/crop

Thanks, I've been having a bit of a look into this & Cloudplow, however I'm a little unclear on the setup instructions & can't find a guide/walkthrough. Are you aware of one anywhere?

Link to post
2 hours ago, thingie2 said:

Thanks, I've been having a bit of a look into this & Cloudplow, however I'm a little unclear on the setup instructions & can't find a guide/walkthrough. Are you aware of one anywhere?

Both are on GitHub and include readme files with sample configurations. The Cloudplow readme also includes a full breakdown of everything, and that tool is already very mature. That is all I used to get set up. You can also ask people in the Cloudbox Discord; they usually help, but are pretty strictly focused on Cloudbox setups of Cloudplow.

 

For Crop, the readme doesn't have quite as much hand-holding because, well... it is still in very active development.

 

Oh, and Cloudplow uses a systemd service, but since we don't have that on Unraid, you could just schedule manual runs via the User Scripts plugin.

 

Good luck

Edited by Stupifier
Link to post
