[Plugin] rclone


Waseh

Recommended Posts

36 minutes ago, thingie2 said:

I've been having another look, and I hadn't realised I needed the nerd pack plugin in order to install python modules, which is why the installation instructions weren't working.

 

I've at least got further now; Unraid just doesn't recognise the "cloudplow" command, so I've got that to work out now.

Be sure to chmod +x the script.

 

You can either run the cloudplow command from within the cloudplow directory like "./cloudplow"

 

Or

 

You can symlink the script into a directory on your $PATH. Then you can run the script from anywhere. Good luck
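Roughly like this, as a sketch (the /mnt/user/appdata/cloudplow location and the cloudplow.py entry script name are assumptions, adjust to wherever you cloned it):

# make the script executable
chmod +x /mnt/user/appdata/cloudplow/cloudplow.py

# symlink it into a directory that is already on $PATH so it can be run from anywhere
ln -s /mnt/user/appdata/cloudplow/cloudplow.py /usr/local/bin/cloudplow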

Link to comment
12 hours ago, Stupifier said:

Be sure to chmod +x the script.

 

You can either run the cloudplow command from within the cloudplow directory like "./cloudplow"

 

Or

 

You can symlink the script into a directory on your $PATH. Then you can run the script from anywhere. Good luck

I was having a bit more of a look into it, and for some reason the rclone mount is now showing as 1TB (which it should be) rather than 1MB. I'm not aware of doing anything differently, so I'm not sure why it's changed, but it means I'm able to use Syncthing instead.

 

Still being fairly new to Linux in general, I'm not fully confident with what's needed on the command line, but I'm getting there. Until then, the GUI for Syncthing will suit me well!

Link to comment
On 6/24/2020 at 11:11 PM, HALPtech said:

Hi. I'm having trouble getting rclone to actually mount using SpaceInvaderOne's guide. When I view my Dropbox remote from the terminal (rclone lsd Dropbox:) I can see all of the folders in the root of my Dropbox.

 

However, when I try to mount it using the following script, nothing appears in the /mnt/disks/Dropbox folder:

 


mkdir -p /mnt/disks/Dropbox

rclone mount --max-read-ahead 1024k --allow-other Dropbox: /mnt/disks/Dropbox &

 

Why can the terminal see my Dropbox folders and files but not Krusader when I run the mount script?

Hi,

I'm having the exact same problem. Did you find a solution? Thanks!

 

 

EDIT: it seems like it's a permission issue. When I SSH in as root I can see all the mounted files, but they are owned by root:root. I'll try to edit the mount script accordingly.
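For reference, the kind of edit I mean is roughly this (just a sketch; on Unraid, uid 99 / gid 100 correspond to nobody:users, and the umask value is an assumption):

mkdir -p /mnt/disks/Dropbox

# mount with explicit ownership so the files are not owned by root:root
rclone mount --max-read-ahead 1024k --allow-other --uid 99 --gid 100 --umask 002 Dropbox: /mnt/disks/Dropbox &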

 

Edited by spiderben25
Link to comment
11 hours ago, thingie2 said:

I was having a bit more of a look into it, and for some reason the rclone mount is now showing as 1TB (which it should be) rather than 1MB. I'm not aware of doing anything differently, so I'm not sure why it's changed, but it means I'm able to use Syncthing instead.

 

Still being fairly new to Linux in general, I'm not fully confident with what's needed on the command line, but I'm getting there. Until then, the GUI for Syncthing will suit me well!

I think I spoke too soon!

 

The folder is being seen with the correct size, and I can access it fine from everywhere, other than syncthing... It seems to be a permissions issue, but I think it's on the syncthing side, rather than rclone. Time to head over to the syncthing docker support topic & hope someone can help there.

Link to comment
On 3/16/2020 at 6:17 PM, Stupifier said:

You have a case issue. According to your terminal output, you named the rclone remote "OneDrive", not "onedrive". Using the correct case should fix that issue.

Now to the followup question about having it visible on network.

  1. You should always mount rclone remotes to /mnt/disks/remote_name......like you are already doing. Do NOT ever mount rclone remotes to user share locations.
  2. In the Unraid Web UI, go to Settings ---> SMB. Enable it, and in the extras field you need to add what you want to share. Here is an example:
    
    [global]
    force user = nobody
    [google]
    path = /mnt/disks/google
    comment =
    browseable = yes
    # Public
    public = yes
    read only=no
    writeable=yes
    writable=yes
    write ok=yes
    guest ok = yes
    vfs objects =

    You can just do a Google search on SMB settings if you want to learn more. And obviously, there are also the NFS and AFP network sharing protocols.....I only gave the SMB example because I share with Windows PCs.

So I just read through this whole thread, and your post seems to be most relevant to what I'm trying to do. Here is my situation:

 

- I have a share called "Recordings" that is for my security cam videos, and it is set to "No Cache".

- I have Shinobi record security camera footage to "/mnt/user0/Recordings"

- I would like to set up a continuous sync to my Google Drive to make sure my security footage is backed up to the cloud with as little latency as possible, in the event of a smash-and-grab job.

 

Previously, I had a QNAP, and that was dead simple to set up using their NVR app and Hybrid Cloud Sync app. I'm trying to replicate the functionality in Unraid.

 

Here's what I've tried so far:

 

I've already tried the Rclone and Rclone-mount dockers and hit a brick wall. I tried mounting my Gdrive remote folder (/NVR) to "/mnt/user0/Recordings". However, the directories in "/mnt/user" and "/mnt/user0" are all owned by "nobody/users", but directories and files below that level are all owned by different users. In the Recordings folder's case, all files under it are owned by "root/root". When I try to mount my remote Gdrive dir to "/mnt/user0/Recordings", I can either mount it as "nobody/users" or "root/root". In either case, it appears to disconnect it from "/mnt/user/Recordings". So on my PC, where I have "Recordings" mounted as an SMB share, I don't see any new files written into "/mnt/user0/Recordings". And conversely, if I write any new files into the "Recordings" SMB share (or directly into "/mnt/user/Recordings" in the Unraid terminal), those new files do not appear in "/mnt/user0/Recordings".

 

I see that you've recommended never mounting rclone remotes to user share locations, which is exactly what I tried to do. What are the reasons for that? Does it have anything to do with the difficulties I've encountered?

 

Any suggestions on how to best go about doing what I want?

Edited by Phoenix Down
Link to comment

I do not really understand why so many people try to mount their cloud storage. After you have added the cloud through rclone you can sync to it through rclone by its name, for example:

rclone sync /mnt/user/sharename cloudname:backup/sharename

And I really suggest doing that, because mounting a cloud is completely different from accessing it through rclone itself. For example, you cannot preserve the file modification time for WebDAV clouds if you use the mount path as the target instead of the rclone remote name.

 

You can test it yourself. Mount the cloud, sync a subfolder without many files, and use the -vv flag to see all rclone actions. One time you sync to "cloudname:backup/sharename" and the second time you use "/mnt/disks/cloudname/backup/sharename". You will see that rclone returns completely different output, especially if you change and/or overwrite files, or use flags that only work for specific clouds.
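Roughly like this (share and remote names are just placeholders):

# sync straight to the rclone remote
rclone sync -vv /mnt/user/sharename/subfolder cloudname:backup/sharename/subfolder

# sync to the mounted path instead
rclone sync -vv /mnt/user/sharename/subfolder /mnt/disks/cloudname/backup/sharename/subfolder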

Edited by mgutt
  • Thanks 1
Link to comment

@mgutt: The prerequisite for what you just said is that you have local storage space to sync stuff from. A major point of mounting is to have stuff in the cloud so you don't need to deal with local storage.

 

@Phoenix Down: you should not write directly to the mount. Use a pooling solution (e.g. unionfs / mergerfs) to pool local storage + the cloud mount, and then use an upload script to upload stuff from local to cloud (look for @DZMM's set of scripts on the forum).
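A minimal sketch of the pooling idea (all paths here are assumptions, roughly mirroring @DZMM's defaults; his scripts handle this setup for you):

# pool a local folder and the rclone mount into one merged view;
# category.create=ff makes new writes land on the first (local) branch
mergerfs /mnt/user/local:/mnt/disks/gdrive /mnt/user/mount_mergerfs -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff

New files land on the local branch first, and a scheduled upload script (e.g. an rclone move) pushes them to the cloud afterwards.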

Alternatively, if you are just using the cloud strictly as a backup, then use rclone sync as @mgutt said and only access local storage. You shouldn't use a mount in that case.

 

/mnt/user is used by Unraid to aggregate data from the individual devices. Adding non-device stuff there will cause confusion and potential issues.

 

 

  • Thanks 1
Link to comment
39 minutes ago, testdasi said:

A major point of mounting is to have stuff in the cloud so you don't need to deal with local storage.

Ok, understood, but are there really so many people not syncing their cloud data with their NAS? Brave ^^

Edited by mgutt
Link to comment

@mgutt
I saw you quoted me. I just wanted to echo what everyone has already said.

 

  1. rclone mounts were not really designed to be fully functional write spaces. The recommended use for them is just for reads. They work EXCELLENTLY for reads. They work so-so for writes... If you value the integrity/reliability of your data, don't go writing tons to an rclone mount. Sure, you might see it work... but that doesn't mean it will work reliably all the time. It is just a very big no-no among rclone pros.
  2. When you do an rclone mount in Unraid, DO NOT MOUNT IT TO /mnt/user/.... or /mnt/user0/.... or whatever. Please create a directory in /mnt/disks (for example /mnt/disks/rclone) and mount it there. /mnt/user paths are special to Unraid. They are your shares and part of your array... it would be totally whack if you started mounting your rclone remote INTO your Unraid array. Even SpaceInvaderOne follows this rule in his videos.
  3. So now how do you do what you want... the whole NAS sync stuff. Well... as others have alluded to, you can execute scheduled scripts using the User Scripts plugin to perform regular rclone sync/copy commands as needed. Take special care here... it is a BAD idea to rclone sync/copy content that is STILL being written to. For example, if you have a video camera actively recording/writing into a specific video file, of course rclone is going to have problems uploading that file to the cloud. Once writing to that file is COMPLETE, rclone copy/sync can successfully push it up to the cloud.

    There are actually very cleverly designed scripts on GitHub which do a lot of this for you. I would specifically recommend "cloudplow". It is a very mature script... so mature that the developer has moved on to a full redesign, still in active beta, called "crop". And PLEASE please please... read the documentation extremely carefully if you plan to use these scripts. Cloudplow uses systemd, which we don't have on Unraid, but you can still trigger the script regularly with the Unraid User Scripts plugin; a rough sketch follows below the links.
    https://github.com/l3uddz/cloudplow
    https://github.com/l3uddz/crop
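As a rough sketch, a User Scripts entry on a cron schedule could just call it like this (the symlink path is an assumption, and double-check the cloudplow docs for the exact subcommand you want):

#!/bin/bash
# kick off a one-shot cloudplow upload run; the User Scripts cron schedule stands in for systemd timers
/usr/local/bin/cloudplow upload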
  • Thanks 1
Link to comment

@mgutt: sync does work just fine and that's what I've been doing. I'm only considering a mount because of the potentially lower latency (i.e. rclone sync crons can only go as low as once per minute). But that idea may have been thwarted, as it doesn't look like rclone will sync any files that are still being written to (as @Stupifier mentioned in his post). From the rclone sync log:

2020/07/22 10:30:08 ERROR : Attempt 1/3 failed with 2 errors and: can't copy - source file is being updated (size changed from 842 to 1025570)

The QNAP cloud sync app does sync partial files, so I just assumed that rclone does as well.

 

@testdasi: I assume you are referring to @DZMM's guide here?

 

 

That seems like a great solution for people who want to have a directory with files that are both local and remote but transparent to the end user. That actually reminds me of how Unraid handles files in the cache and the array. It's transparent to the user regardless of whether the file is in the cache or in the array, and the mover can move the files around.

 

But as you said, since I'm only using the remote as a sync destination and never read from it, perhaps "rclone sync" is enough.

 

@Stupifier: Thanks for the reply. What you said about not mounting to "/mnt/user" or "/mnt/user0" totally makes sense. However, @DZMM's guide mentioned above specifically instructs you to create various mount points inside "/mnt/user". Why isn't that frowned upon? @testdasi - feel free to chime in here as well.

 

I looked at Cloudplow; it seems like it does the same/similar thing as @DZMM's scripts? I don't use Plex, NZBGet, or unionfs, so this is just my impression from a quick read of its GitHub page.

Edited by Phoenix Down
Link to comment

@Phoenix Down That is a good point regarding what @DZMM does in his script. If you look at his script, he gives the option to change the location to wherever you would like... but ya... his default is inside the Unraid user space. I personally would mount to /mnt/disks/....

That is just what makes sense to me. But yes, I see your point. If I had to guess, those /mnt/user mount locations are NOT configured as Unraid shares and that is why it is OK. I personally would rather have a clearer path distinction between what is an Unraid share and what is NOT, to not confuse myself. That's another reason why I keep things which are not official Unraid shares OUT of the /mnt/user/ directory.

And I've never used QNAP, so I have no idea what's going on there with partial uploads....proprietary stuff....dunno.

Link to comment
18 minutes ago, Stupifier said:

@Phoenix Down That is a good point regarding what @DZMM does in his script. If you look at his script, he gives the option to change the location to wherever you would like... but ya... his default is inside the Unraid user space. I personally would mount to /mnt/disks/....

That is just what makes sense to me. But yes, I see your point. If I had to guess, those /mnt/user mount locations are NOT configured as Unraid shares and that is why it is OK. I personally would rather have a clearer path distinction between what is an Unraid share and what is NOT, to not confuse myself. That's another reason why I keep things which are not official Unraid shares OUT of the /mnt/user/ directory.

And I've never used QNAP, so I have no idea what's going on there with partial uploads....proprietary stuff....dunno.

When you mount an rclone remote to "/mnt/disks/", where are those files physically located on the local machine? Are they on a specific array disk? Or is it essentially just a bunch of symlinks to the files in the remote?

Link to comment
8 minutes ago, Phoenix Down said:

When you mount an rclone remote to "/mnt/disks/", where are those files physically located on the local machine? Are they on a specific array disk? Or is it essentially just a bunch of symlinks to the files in the remote?

I think this is the thought:

  1. Unraid mounts the array disks under /mnt/ (/mnt/disk1, /mnt/disk2, and so on)
  2. The Unassigned Devices plugin mounts extra physical disks to /mnt/disks/
  3. So let's throw virtual mounts into /mnt/disks/ too

Here is a good discussion on it... others have asked... Squid himself mounts to /mnt/disks/ too. It seems the official recommendation is /mnt/disks/, and if you mount directly under /mnt, Unraid will complain and recommend /mnt/disks/.

 


As to where /mnt/disks/ physically resides... I have no clue. Maybe someone explains it in that thread I linked. I didn't read the entire thing.

  • Like 1
Link to comment
3 hours ago, Phoenix Down said:

rclone sync crons can only go as low as once per minute

What is your target? Every 10 seconds? Then add 6 scripts and let them all start every minute with a different delay:

# script 1
rclone sync

# script 2
sleep 10
rclone sync

# script 3
sleep 20
rclone sync

 

and so on ...

 

If you need atomic execution (avoiding two rclone processes for the same folder), add this in each script after the "sleep" and before the "rclone sync":

# make script race condition safe
if [[ -d "/tmp/atomic_rclone_sync" ]] || ! mkdir "/tmp/atomic_rclone_sync"; then
    exit 1
fi
trap 'rmdir "/tmp/atomic_rclone_sync"' EXIT

(As long as "/tmp/atomic_rclone_sync" exists the script is not executed, and it is only deleted after the script has finished.)
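Put together, one of the staggered scripts would look roughly like this (the share and remote names are placeholders):

#!/bin/bash
sleep 20

# make script race condition safe
if [[ -d "/tmp/atomic_rclone_sync" ]] || ! mkdir "/tmp/atomic_rclone_sync"; then
    exit 1
fi
trap 'rmdir "/tmp/atomic_rclone_sync"' EXIT

rclone sync /mnt/user/sharename cloudname:backup/sharename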

 

 

Regarding the "files in use" issue you could play around with the "--min-age" flag. As long the file is written its modification time should be updated I think.

Edited by mgutt
  • Thanks 1
Link to comment
11 minutes ago, mgutt said:

What is your target? Every 10 seconds? Then add 6 scripts and let them all start every minute with a different delay:


# script 1
rclone sync

# script 2
sleep 10
rclone sync

# script 3
sleep 20
rclone sync

 

and so on ...

 

If you need atomic execution (avoiding two rclone processes for the same folder), add this in each script after the "sleep" and before the "rclone sync":


# make script race condition safe
if [[ -d "/tmp/atomic_rclone_sync" ]] || ! mkdir "/tmp/atomic_rclone_sync"; then
    exit 1
fi
trap 'rmdir "/tmp/atomic_rclone_sync"' EXIT

(As long as "/tmp/atomic_rclone_sync" exists the script is not executed, and it is only deleted after the script has finished.)

Thanks for the tip! Since I've now realized that rclone won't sync partial/open files, there is no point in trying to sync so frequently. Now I have the sync schedule aligned with the video file creation (set to be 5 minutes max per video). So my cron schedule is now:

 

*/5 * * * *

 

And my script is:

 

sleep 5

rclone sync ...

 

 

Link to comment
9 hours ago, Phoenix Down said:

Thanks for the tip! Since I've now realized that rclone won't sync partial/open files, there is no point in trying to sync so frequently. 

If I understand this post correctly, rclone does try to sync the partial file, but retries again and again (as long as the file keeps changing) until it reaches the retry limit (the default is 3).

Link to comment
On 7/19/2020 at 9:35 PM, thingie2 said:

I think I spoke too soon!

 

The folder is being seen with the correct size, and I can access it fine from everywhere, other than syncthing... It seems to be a permissions issue, but I think it's on the syncthing side, rather than rclone. Time to head over to the syncthing docker support topic & hope someone can help there.

For anyone else who finds this with similar issues, I found a solution.

 

The issue is which file operations the rclone mount's filesystem supports. Once I realised this and could look into rclone's capabilities a bit more, I found the vfs-cache-mode parameter. I've since set this to writes, and it has resolved my issues.
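For anyone else, that just means adding the flag to the mount command, roughly (the remote name and mount point here are placeholders):

rclone mount --allow-other --vfs-cache-mode writes remote: /mnt/disks/remote &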

Link to comment

Sooooo.... not quite sure what happened, but I realized today my rclone mount using Google Drive wasn't working - it wasn't mounted. Turns out it was complaining that there are files in the mount directory - I have been running sync jobs constantly - but I'm not sure why something is writing to the mount directory. Anyways, any tips for clearing the folder so I can once again mount, as I'm not finding an easy way to delete the files that are there?

EDIT: I can see the files and folders if I ls them in the terminal - they don't appear at all over SMB or in any container.

Edited by mcrommert
Link to comment

Okay, here's where it sits - in the terminal at /mnt/disks/Google everything is mounted correctly and can be seen - when I go to share the folder to a container in its edit page, it shows all the subfolders just like you would expect.

In any of the containers or the SMB share, it sees nothing and has nothing contained in it.

I have force-unmounted the folders and deleted them before remounting using rclone mount - same issue, and it doesn't resolve anything.

 

EDIT: My sync script for rclone also continues to run with no issues

Edited by mcrommert
Link to comment
1 hour ago, mcrommert said:

Okay, here's where it sits - in the terminal at /mnt/disks/Google everything is mounted correctly and can be seen - when I go to share the folder to a container in its edit page, it shows all the subfolders just like you would expect.

In any of the containers or the SMB share, it sees nothing and has nothing contained in it.

I have force-unmounted the folders and deleted them before remounting using rclone mount - same issue, and it doesn't resolve anything.

 

EDIT: My sync script for rclone also continues to run with no issues

I have this exact same issue. It started happening yesterday when I switched my Unraid install to some new hardware. Everything had been working fine up until that point. I haven't found out much more than you, but we are definitely affected by the same issue.

  • Like 1
Link to comment
