Waseh

[Plugin] rclone


Hi, I have set up rclone to work with the Plex docker. Everything works fine on the server itself, but there is one problem when trying to access the files over SMB in Windows 10.

 

I have the following in my Samba extra configuration:

 

   [ACD]
   path = /mnt/disks/acd
   read only = yes
   guest ok = yes 

 

I can see the files, but I'm getting very slow speeds when trying to watch or copy them. The speed is only about 2-5 Mb. When I looked in iftop, I saw a bunch of Amazon IPs connecting and being dropped.

 

ec2-54-xxx-xxx-xx.compute-1.amazona

 

Does anyone know why that might be?

 

Are you trying to copy the files over SMB or something?



 

Yes, I'm trying to copy or watch the files over SMB. I don't have this problem with local files.

 


Has anyone gotten this to work with Unassigned Devices? I mount rclone to /mnt/disks/acd, but it never comes up as a disk under Unassigned Devices.


Not that I use rclone, but I don't see why it would come up via Unassigned Devices, since it's not managed by Unassigned Devices.

 

If you're looking to have the mount as a share, then you're going to have to manually edit config/smb-extra.cfg on the flash drive.


Yeah, it's not a device. It's a mount. Not going to work with unassigned devices. What is it you are trying to do?


Thanks guys. Squid is right: I want to mount it as an SMB share through the GUI. I guess the only way to do that is to add the mount in the "Samba extra configuration" settings?



Yes

 

Sent from my LG-D852 using Tapatalk

 

 


Finally, my entire backup is nearly done! While the backup has been running, though, I've been adding files, moving files, and generally doing what gets done with a NAS like unRAID. I've also run into issues with files having pathnames that are too long. For folks who may hit this, the following command will find files whose paths are too long so you can rename them. Run it from a terminal session or over SSH in the /mnt/user directory, and it will return the names of files whose paths exceed 230 characters.

 

find . -regextype posix-extended -regex '.{230,}'

 

Note that the maximum allowed length is 280, but for some reason quite a few extra characters seem to get tacked on, and I've had to search for lengths longer than 230. I had over 130 instances of this; no fun to clean up!
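If it helps anyone else, the same search can also print each path's length so the worst offenders sort to the top. A small sketch using GNU find, awk, and sort (run it from the directory you sync, e.g. /mnt/user):

```shell
# Print each over-long path prefixed with its length, longest first
# (GNU find/awk/sort; run from the directory being synced)
find . -regextype posix-extended -regex '.{230,}' \
    | awk '{ print length($0), $0 }' \
    | sort -rn
```

That way you can work down the list from the longest path instead of rediscovering stragglers on each pass.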

 

I have one last issue: exclusions! I run this via User Scripts with the following custom script:

 

rclone --log-file /mnt/user/work/rclone.log --max-size=40G --transfers=2 --bwlimit=8.8M sync /mnt/user/ crypt: --exclude="work/HB" --exclude="work/rclone.log" --exclude="work" --checkers=10

 

"work" is a shared mount under /mnt/user and I'd like to exclude it. When I'm compressing video or doing random downloads this is where that winds up - it's also where I have a running log for rclone being created. Because the rclone log is constantly being updated when rclone tries to upload it the file changes and I get an error. If rclone finds an I/O error like this it refuses to do deletions on the target. Since I've had some pretty hefty files get uploaded from my Work directory that I don't want hanging around I need that full synch to occur.

 

So, how best to exclude "work"? I've tried "/mnt/work/" and I've tried what's above. I'm going to try "/mnt/work/" again, as it seems like it ought to work and rclone has revved a few times since I last tried it. I've got it running now with just one error found so far: the darned log file! All of my name changes seem to have taken effect; whew! So close to having a good baseline, but so far. I'd appreciate a pointer from anyone who's gotten exclusions to work.
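For what it's worth, the rclone filtering docs say patterns are matched against the path relative to the sync source (here /mnt/user), not the absolute path, and that a directory needs a trailing /** to match everything under it. So a sketch of the command above with an anchored exclude might look like this (my untested reading of the docs; the command is echoed rather than executed so the snippet is safe to run as-is):

```shell
# Sketch: "/work/**" is anchored at the sync source (/mnt/user) and the /**
# matches everything under the work share. Printed, not run, in this sketch.
echo rclone sync /mnt/user/ crypt: \
    --log-file /mnt/user/work/rclone.log \
    --max-size=40G --transfers=2 --bwlimit=8.8M --checkers=10 \
    --exclude '/work/**'
```

Note the single quotes around the pattern, so the shell doesn't try to expand it before rclone sees it.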

 

Edit: Ugh, even 230 wasn't short enough - I've still got at least 20 files with issues!


I haven't run into the filename length issue, but as a workaround it might be possible to have another mount that goes deeper into the folder structure and then syncs. I'm assuming the full path is the issue. How deep does your folder structure go?


Here's an example of one file that's just failed. I'm still getting over 100 failures despite having modified every file that was flagged on the last run! :(

 

 BlackHat/Defcon/DEF CON 9/DEF CON 9 audio/DEF CON 9 Hacking Conference Presentation By Daniel Burroughs - Applying Information Warfare Theory to Generate a Higher Level of Knowledge from Current IDS - Audio.m4b

 

Another:

 

BlackHat/Defcon/DEF CON 19/DEF CON 19 slides/DEF CON 19 Hacking Conference Presentation By - Kenneth Geers - Strategic Cyber Security An Evaluation of Nation-State Cyber Attack Mitigation Strategies - Slides.m4v

 

These don't show up with a find as short as 225 characters, so I'm a bit frustrated and wondering if there's something in the name of the file itself that's tripping these up, except I see little in common. The error is:

 

u0027 failed to satisfy constraint: Member must have length less than or equal to 280


Yup, Amazon! I've just done a bit of hack-and-slash with a bulk renamer, so we'll see what fails this round. If all of the lengths check out, it will still fail, however, since it will be unable to avoid trying to back up the changing log file. Has anyone gotten excludes working properly? I've got a work directory or two I'd like to skip, and that log file for sure! Deleting the extra crap would sure be nice, and it says that won't occur without a full run free of I/O errors...

 

Edit: 4 more and counting! Some of these I've edited two or three times now, and they have shorter names than others. The crypto must occasionally get a wildcard and screw them up!
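That crypto guess may be close to the mark: rclone's crypt docs describe standard name encryption as padding each path segment to a 16-byte block and then base32-encoding it, so encrypted names come out roughly 1.6-1.9x longer than the originals. That would explain why paths well under 280 characters locally still trip Amazon's limit. A rough back-of-the-envelope check (my arithmetic based on that description; treat it as an estimate, not rclone's exact algorithm):

```shell
# Estimate the encrypted length of one path segment: pad to the next 16-byte
# block (the padding always adds at least one byte), then base32-encode
# (8 output chars per 5 input bytes, rounded up).
seg="DEF CON 9 Hacking Conference Presentation"
padded=$(( ( ${#seg} / 16 + 1 ) * 16 ))
enc_len=$(( ( padded * 8 + 4 ) / 5 ))
echo "$seg: ${#seg} plain chars -> ~$enc_len encrypted chars"
```

At that expansion rate, a find threshold closer to 280/1.9 (around 150 characters) would be needed to catch everything in one pass, which fits the repeated rounds of renaming described above.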


Alright, after having to severely slash some filenames and move the logfile out of the range being backed up, I'm finally able to have target files deleted that are no longer needed. I'd really appreciate some help from anyone who has exclusions working. Next up is scheduling this to run regularly. Hopefully User Scripts handles it; fingers crossed!



 

No experience with exclusions, but somewhere in this thread I posted a daemon.


This is currently working very well for me nightly with User Scripts. I managed to clear the long filenames, and it was a PITA, but it's now done. This runs maybe an hour or three a night and updates Amazon fine. The first time it ran, it deleted a good TB and a half off of Amazon from all of my previous attempts and files moving around. It's now run multiple nights in a row without issue; it starts up at 4:40am, and I'm not sure how to change that, but it's working, lol. For the logfile, I simply keep it at a higher level than I'm backing up: at the /mnt level.

 

I did modify the calling script to move rclone.log to rclone-bak.log before each run, so I have two nights' worth of logs I can look at if something weird happens. Currently sitting at about 27.1TB backed up and about 520K files, give or take a few K :)
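For anyone wanting to copy that, the rotation part can be as simple as the sketch below. The paths and flags are assumptions, not the exact script, and the rclone line is echoed rather than executed so the sketch is safe to run:

```shell
#!/bin/bash
# Keep two nights of logs: last night's rclone.log becomes rclone-bak.log
LOG="${LOG:-/mnt/rclone.log}"
BAK="${LOG%.log}-bak.log"

if [ -f "$LOG" ]; then
    mv -f "$LOG" "$BAK"
fi

# The nightly sync then writes a fresh log (shown, not run, in this sketch)
echo rclone sync /mnt/user/ crypt: --log-file "$LOG" --bwlimit=8.8M --transfers=2
```

Keeping the log above the synced tree (here /mnt rather than /mnt/user) also sidesteps the changing-log-file I/O error discussed earlier in the thread.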


Here is the daemon I wrote, if interested:

http://lime-technology.com/forum/index.php?topic=53365.msg515342#msg515342

There is also a GitHub page for it, but it's the same thing:

https://github.com/bobbintb/backup-bash

 

There are two others as well:

https://github.com/rhummelmose/rclonesyncservice

https://github.com/resipsa2917/rcloned

 

I've only used mine. It works well and I haven't had any issues. It starts a sync whenever the OS detects a filesystem change, and it will not start another sync until the first one completes. I think rclone will have a daemon built in eventually, but it's pretty low priority since there are ways around it.
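For anyone curious what that looks like in outline, a watch-then-sync loop can be sketched roughly like this. It is not the actual script from the links above; it assumes inotify-tools is installed, and it uses flock so a change arriving mid-sync can't start an overlapping one:

```shell
#!/bin/bash
# Rough sketch of an event-driven sync daemon: block until the kernel reports
# a filesystem change, then run at most one sync at a time via an exclusive lock.
watch_and_sync() {
    while inotifywait -r -e modify,create,delete,move /mnt/user; do
        flock -n /var/lock/rclone-sync.lock \
            rclone sync /mnt/user/ crypt: \
            || echo "sync already running or failed; waiting for next change"
    done
}

# watch_and_sync   # uncomment to start the loop (runs until killed)
```

The -n flag makes flock give up immediately instead of queueing, so bursts of file changes during a long sync collapse into a single follow-up run on the next event.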


Would it be possible to get Rclone Browser (https://mmozeiko.github.io/RcloneBrowser/) to work with this? I've been trying to learn how to install an app in a blank docker, but I'm having trouble researching how. Would it be easy for you to make a plugin/docker with Rclone Browser so that it can complement your plugin? Or could you point me to how I would start making the docker myself?


That would be out of scope for this plugin.
It would probably be possible to run Rclone Browser in an RDP docker (examples of these are found in the docker section); however, running rclone in a docker would lose you the ability to mount shares to the filesystem. :)


Yeah, just to elaborate on what Waseh was saying: the two are going to have to remain separate, at least for most people. The reason, as he said, is that running rclone in a docker means losing the ability to mount shares to the file system. Mounting a remote cloud share is one of the main reasons people use rclone, so that's a big issue. While there is also an rclone docker from a different developer, I don't know how well maintained it is, and I don't think a lot of people use it, because not being able to mount shares makes it useless for most people. This isn't a bug or anything that can be fixed; it's just a result of what docker is. On the other side of that, Rclone Browser is a GUI tool and unRAID is headless, so the only way to use Rclone Browser is in a docker.

 

If someone has already made a docker image of Rclone Browser, it should be pretty easy to just use the docker template in unRAID. I doubt anyone has, though, so you'll have to make your own docker image first. I can't help with that, but a quick Google search should.

Edited by bobbintb


Hi guys

Here's a tutorial on how to set up the excellent rclone plugin on unRAID. You will see how to install it and then connect to three different cloud storage providers: Amazon, Dropbox, and Google Drive. You will see how to encrypt and decrypt files in the cloud, how to connect a docker container to rclone, and even how to stream an encrypted media file to Emby or Plex.
You will then see how to turn the rclone mount into a network share. Finally, you will see how to easily sync a folder to the cloud.

 

 


Is there a way to limit transfers to Amazon Drive to certain extensions? For example, I want to back up my photos (free) and not my videos.

 

Thanks!

On 15/03/2017 at 4:00 PM, Kash76 said:

Is there a way to limit transfers to Amazon Drive to certain extensions? For example, I want to back up my photos (free) and not my videos.

 

Thanks!

 

Yes, you certainly can. You can use --include or --exclude.

First, try it with rclone ls to test.

For example:

rclone ls --include "*.jpg" secure:

will list just the JPEGs on that rclone remote. Once you have tested, use the same filter with rclone sync, move, copy, etc.

Have a read here for details: http://rclone.org/filtering/
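If you want several photo extensions at once, the filtering docs also allow {a,b} alternatives in a pattern, so something like the sketch below should work. Quote the pattern so the shell doesn't expand it, and test with --dry-run first ("secure:" and the source path are placeholders; the command is echoed so the sketch is safe to run):

```shell
# Copy only photo extensions, printed rather than executed in this sketch;
# adapt the remote name and source path to your own setup
echo rclone copy /mnt/user/Pictures secure: \
    --include '*.{jpg,jpeg,png,gif}' --dry-run
```

Dropping --dry-run once the listing looks right turns it into the real transfer.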

