[Support] binhex - Rclone


Recommended Posts

Ok, so I changed the variable RCLONE_OPERATION=sync to RCLONE_OPERATION=copy and now files are copied, but changes are not recognised.

 

As a result a changed file is duplicated, where the "old" (un-renamed) file is replaced and the "new" (renamed) file is synced.

[screenshot]

 

 

Any ideas?

Link to comment

A sync operation has to work in one direction at a time; it cannot sync local and remote simultaneously, so one of them has to be the source of truth. For example, take this scenario:

 

local has 3 new files

remote cloud has 5 new files

 

if you set RCLONE_OPERATION=sync and RCLONE_DIRECTION=both then what happens? well, it has to sync in one direction before it can sync in the other, so files will be matched from local to remote first: all 5 files on the remote will be deleted and replaced with the 3 files that are on the local.

 

it will then sync from remote to local. so what happens? the 3 files that now exist on the remote match the local, so no action is taken; the sync is now complete in both directions.

 

in short, a sync operation cannot sync 3 files from the local to the remote and also keep the 5 existing remote files; that is NOT a sync operation, that is a copy operation. it is also not normal to set RCLONE_OPERATION to sync and RCLONE_DIRECTION to both, as there is little point in this, as described in the scenario above.
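The two passes above can be sketched with plain files, using cp/rm to stand in for rclone (directory and file names are made up for illustration; this is not what the container literally runs):

```shell
#!/bin/bash
# Simulate RCLONE_OPERATION=sync with RCLONE_DIRECTION=both using plain
# files: pass 1 syncs local -> remote, pass 2 syncs remote -> local.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/local" "$tmp/remote"

# 3 new files on local, 5 new files on remote
touch "$tmp/local/local1.txt" "$tmp/local/local2.txt" "$tmp/local/local3.txt"
touch "$tmp/remote/remote1.txt" "$tmp/remote/remote2.txt" \
      "$tmp/remote/remote3.txt" "$tmp/remote/remote4.txt" "$tmp/remote/remote5.txt"

# pass 1: make remote identical to local - the 5 remote-only files are deleted
rm "$tmp/remote"/*
cp "$tmp/local"/* "$tmp/remote"/

# pass 2: remote -> local; both sides now match, so there is nothing to do
ls "$tmp/remote"   # only the 3 local files remain
```

After pass 1 the remote-only files are already gone, so pass 2 has nothing left to bring back.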

 

PLEASE be careful with the sync operation, it WILL delete everything on the destination that is not present on the source, and if you are syncing from remote cloud to local server then that means potentially deleting everything on your unraid server that is not present on the remote cloud.

31 minutes ago, KptnKMan said:

As a result a changed file is duplicated, where the "old" (un-renamed) file is replaced and the "new" (renamed) file is synced.

im sorry, i have read this 3 times and i don't know what you are trying to achieve here. where is the 'old' file, local or remote? where is the 'new' file, local or remote? and what is RCLONE_DIRECTION now set to?

 

EDIT - my advice: start small:

 

can you copy a new file from local to remote? yes or no?

can you copy a new file from remote to local? yes or no?

can you change a local file and copy to remote and see the change? yes or no?

can you change a remote file and copy to local and see the change? yes or no?

 

 


Ok, I understand the risks of what you've described, but I think maybe I'm not communicating this well enough. I think I'm using the word/term "sync" loosely, so maybe I can clarify. My bad.

 

I set the variables to RCLONE_OPERATION=copy and RCLONE_DIRECTION=both, and made 2 changes on the remote cloud side; I created a new file "dont_delete_me_please.txt" and renamed a file "rclone_test2.txt" to "rclone_test_2.txt" (added the 2nd underscore to the filename).

 

With that logic, I expected the operation to delete the local copy of "rclone_test2.txt", then copy down "rclone_test_2.txt" and "dont_delete_me_please.txt".

 

But I had the duplicate behaviour as shown in the screenshot.

It duplicated the renamed file, and copied down the new file.

 

But I take it that the RCLONE_OPERATION=copy behaviour will never delete anything, but only copy, which is fair enough (Is that correct?).

 

The original issue however, was that when setting RCLONE_OPERATION=sync, it would only sync in 1 direction.

 

So am I correct in thinking that (Assuming RCLONE_DIRECTION=both) an RCLONE_OPERATION=sync operation will copy everything in 1 direction, then copy everything (including the previous copy) in the other direction (including new duplicates)?

 

Am I missing something?

35 minutes ago, binhex said:

can you copy a new file from local to remote? yes or no?

can you copy a new file from remote to local? yes or no?

can you change a local file and copy to remote and see the change? yes or no?

can you change a remote file and copy to local and see the change? yes or no?

 

Hi,

using the settings RCLONE_OPERATION=copy and RCLONE_DIRECTION=both I can see traffic in both directions like so:

1) new local file, copied to remote, yes. renamed file produces duplicate.

2) new remote file, copied to local, yes. renamed file produces duplicate.

3) change local file (edit txt), changes are reflected on remote, yes.

[screenshot]

4) change remote file (edit txt), changes are NOT reflected on local, no. Changes are overwritten.

[screenshot]

 

These are single changes, made one at a time.

1 minute ago, KptnKMan said:

But I take it that the RCLONE_OPERATION=copy behaviour will never delete anything, but only copy, which is fair enough (Is that correct?).

precisely, a copy will never delete files.
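A minimal sketch of that behaviour, using plain cp in place of rclone copy (file names invented for illustration): a copy adds or updates files on the destination, but destination-only files are left alone.

```shell
#!/bin/bash
# Sketch of copy semantics: files are added/updated on the destination,
# but files that exist only on the destination are never removed.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
touch "$tmp/src/new.txt"       # exists only on the source
touch "$tmp/dst/existing.txt"  # exists only on the destination

cp "$tmp/src"/* "$tmp/dst"/    # the copy: existing.txt survives

ls "$tmp/dst"
```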

 

3 minutes ago, KptnKMan said:

So am I correct in thinking that (Assuming RCLONE_DIRECTION=both) an RCLONE_OPERATION=sync operation will copy everything in 1 direction, then copy everything (including the previous copy) in the other direction (including new duplicates)?

if you set RCLONE_DIRECTION=both and RCLONE_OPERATION=sync then a sync will happen, deleting everything on the remote that is not present on the local (it syncs from local to remote first). it will then sync in the other direction with nothing to do, as all files will already be in a synchronized state.

46 minutes ago, KptnKMan said:

4) change remote file (edit txt), changes are NOT reflected on local, no. Changes are overwritten.

this is expected when RCLONE_DIRECTION=both, as a local to remote copy will happen first, overwriting changes made to the remote file, then a remote to local sync will happen. it's all about what you want to achieve here: if you want to only sync changes from remote to local then set RCLONE_DIRECTION accordingly; same goes for local to remote.
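The ordering can be sketched the same way with plain cp standing in for the rclone operations (file names invented; a deliberate simplification of the real transfer logic):

```shell
#!/bin/bash
# Why a remote-side edit is lost when the local -> remote pass runs first.
set -e
tmp=$(mktemp -d)
mkdir "$tmp/local" "$tmp/remote"
echo "original" > "$tmp/local/file.txt"
echo "edited on remote" > "$tmp/remote/file.txt"

cp "$tmp/local/file.txt" "$tmp/remote/file.txt"   # pass 1: local -> remote
cp "$tmp/remote/file.txt" "$tmp/local/file.txt"   # pass 2: remote -> local

cat "$tmp/remote/file.txt"   # the remote edit has been overwritten
```

(Real rclone compares size and modification time before transferring rather than blindly copying, but for a file that differs on both sides the first pass still decides which side wins.)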


Hi there, on Edge the Web UI link from Unraid pulls up http://IP:5572, which redirects to http://IP:5572/#/dashboard

 

However, the Edge auth pop-up continually requests authentication; each time I try to do anything, a new auth pop-up appears.

 

On Chrome, however, this happened only once; it took me to the Web UI (not the browser's auth box) and I could sign in to the site. No more issues.


I seem to be having issues with the config file. I originally just created it using rclone config, read some more and realized that was incorrect, so reran it with rclone config --config /config/rclone/config/config.conf. I now have a config file showing:

 

[screenshot]

 

But upon starting the web UI I'm getting "This site can't be reached". Looking at the log, it looks like it is not seeing the config file?

 

[screenshot]

 

Only thing I can think of is that I messed up my config path somehow, but I still have it as the default, so I think I should be good?

 

[screenshot]

 

Do you see anything obvious that I'm missing? I'm terrible with this stuff, so it probably is something right in front of me.

On 10/27/2021 at 9:51 PM, casperse said:

Hi binhex

 

I think you are the one to ask ;-)

Over some years I have transferred data one way --> to my Unraid server

FTP, Syncthing, Resilio Sync, and lately LFTP using the Seedsync Docker & an Ubuntu VM (the last one very unstable)

 

Could I use your binhex rclone to connect to my off-site server and do a one-way sync to my unraid server?

I have read all of the posts and they are mostly about Google Drive and big vendors?

 

I was able to install rclone on the remote server doing this:

[screenshot]

 

Would this make it possible to use your docker?

And would the configuration be simpler?

Sorry, I am still trying to get my head around setting this up to just run and work fast ;-)

 

Would this docker work for this problem?

15 minutes ago, detz said:

Is it possible to even use B2 with this setup, as the buckets have to be unique? No matter what I set, it's going to try and upload to b2:/media/.... and I can't make a media bucket as that's already taken in b2. What am I missing here?

have you defined the bucket name? from my readme:

 

Quote

If your cloud storage provider requires a 'bucket' defining (for instance Backblaze B2 does) then please specify this by appending :<bucket name> to the value for RCLONE_REMOTE_NAME e.g. -e RCLONE_REMOTE_NAME=backblaze-encrypt:bucket

 

2 hours ago, binhex said:

have you defined the bucket name?, from my readme:-

 

 

Thanks, I re-read that so many times because I thought I was crazy and missed something, turns out I was. 🙂

 

I did figure out from another one of your posts that I can just make a new mount point in the container and use that instead of media, so I just made one with my bucket name and that works too.

 

Edit: Oh, I didn't see the readme; I was looking at the FAQ and it's not mentioned there: https://github.com/binhex/documentation/blob/master/docker/faq/rclone.md

15 minutes ago, detz said:

Thanks, I re-read that so many times because I thought I was crazy and missed something, turns out I was. 🙂

 

I did figure out from another one of your posts that I can just make a new mount point in the container and use that instead of media, so I just made one with my bucket name and that works too.

 

Edit: Oh, I didn't see the readme; I was looking at the FAQ and it's not mentioned there: https://github.com/binhex/documentation/blob/master/docker/faq/rclone.md

yeah you are right, it really does need to go in the faq as well.

On 11/14/2021 at 5:42 PM, Matt0925 said:

I seem to be having issues with the config file. I originally just created it using rclone config, read some more and realized that was incorrect, so reran it with rclone config --config /config/rclone/config/config.conf. I now have a config file showing:

 

[screenshot]

 

But upon starting the web UI I'm getting "This site can't be reached". Looking at the log, it looks like it is not seeing the config file?

 

[screenshot]

 

Only thing I can think of is that I messed up my config path somehow, but I still have it as the default, so I think I should be good?

 

[screenshot]

 

Do you see anything obvious that I'm missing? I'm terrible with this stuff, so it probably is something right in front of me.

I had a SIMILAR problem. I never found a quick/easy solution, but adding another variable worked for me:

 

[screenshot]

 

After adding that and restarting the docker,  it is able to find the .conf stored in appdata.

 

I hope this helps someone. I also hope this issue is resolved somehow in a future release.

The dev doesn't seem to know what causes it, lol.

 

Edited by BrotherBruh
dev is nitpicky and basically ignored my post because of a word
8 minutes ago, BrotherBruh said:

I assume you are clueless about why this happened to me,

no idea without a log, attach /config/supervisord.log. oh and btw, creating a new variable with key name RCLONE_CONFIG will do absolutely nothing for this image, it needs to be named RCLONE_CONFIG_PATH
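For anyone searching later, a hypothetical trimmed-down run line showing where that variable goes (only the config-related parts of the full command posted further down in this thread; adjust paths to your own setup):

```shell
# RCLONE_CONFIG_PATH (not RCLONE_CONFIG) points at the rclone config file
# inside the container; /config is mapped to appdata on the host.
docker run -d --name='binhex-rclone' \
  -e RCLONE_CONFIG_PATH='/config/rclone/config/rclone.conf' \
  -v '/mnt/user/appdata/binhex-rclone':'/config':'rw' \
  'binhex/arch-rclone'
```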


Happy New Year everyone!

I installed unraid yesterday for the first time, and syncing my OneDrive to the NAS is the only functionality missing that would enable me to leave QNAP QTS. I have followed this thread pretty much post by post, as it seems I had all the problems that came up for others. This is where I am:

I have the web UI up and running and I created a config that I can explore, but I cannot mount one folder because I get the following error message, probably because I have no clue what to enter in Mount point.

Error creating mount Error: Request failed with status code 500

 

This is my config:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='binhex-rclone' --net='bridge' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'RCLONE_CONFIG_PATH'='/config/rclone/config/rclone.conf' -e 'RCLONE_MEDIA_SHARES'='/mnt/user/onedrivetest/temp' -e 'RCLONE_REMOTE_NAME'='o' -e 'RCLONE_SLEEP_PERIOD'='24h' -e 'RCLONE_USER_FLAGS'='' -e 'RCLONE_OPERATION'='copy' -e 'RCLONE_DIRECTION'='remotetolocal' -e 'RCLONE_POST_CHECK'='yes' -e 'ENABLE_WEBUI'='yes' -e 'WEBUI_USER'='admin' -e 'WEBUI_PASS'='admin' -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' -p '53681:53682/tcp' -p '5572:5572/tcp' -v '/mnt/user':'/media':'rw' -v '/mnt/user/appdata/binhex-rclone':'/config':'rw' 'binhex/arch-rclone'
01fd2292a65a29496818cd654b5ce153aa4e0254533b35a684687abc474be17f

The command finished successfully!

 

In addition, the only option I think I have not understood yet is Host Path 2, so that might be the problem as well.

 

I would like to move data from specific folders in OneDrive to specific shares on the NAS. I would like to start with the folder on OneDrive called "Temp" to test.

 

Two additional questions:

  • Can I add multiple different OneDrives/remote names?
  • Do I need to create a config for each OneDrive (I have two to cover) or for each folder I want to copy within that OneDrive?

 

