[Plugin] rclone


Waseh


Hi guys - I posted the below in General Support and was advised to come here since I am using the plugin.

 

 

*****************************************

Hi folks, I've been working on this issue for some time and have a few simple questions that would clear up what seem to me to be some gaps in my knowledge. Please bear with me through the wall of text; each question leads to another depending on the answer, but I assure you I know enough to tell that this is probably something simple I am missing.

 

Quick background on the set up:

1. Brand new OneDrive account

2. Have a share that spans all disks called Media that points to folder /user/Media

3. Want to sync the Media share to the new OneDrive account

4. Can't decide if I should use the existing folder /mnt/user/media as the local mount in the script, or create a new folder called /mnt/?/secure, copy the data from /user/Media into it, and then point the /user/Media share at /mnt/?/secure. I decided to go with the new folder and created /mnt/user/secure; I think this might be the problem

5. I will be editing files on local storage, which should automatically update OneDrive, but I assume sync should also work in reverse

 

I've completed everything up to the point of mounting, but I obviously screwed up somewhere the instructions weren't clear. I think I know where, as per below. Here we go:

 

I've followed SpaceInvader One's Unraid/rclone video to get OneDrive working, but at one point he asks you to create the local mount in /mnt/disks/.

 

QUESTION 1: There is no "/mnt/disks" on my Unraid system, only "/mnt/disk(disk number)", so I decided to use "/mnt/user/secure" as the local mount. Are we supposed to create the /mnt/disks directory ourselves, or should it already be there?

 

This leads me to another question

 

QUESTION 2: What is the "disks" directory supposed to be anyway, why doesn't my instance have it, and why are we writing directly to it? I was under the impression that you really should not write to any disk directly and should always use the /user/ directory to let Unraid work its magic. Is this not the case here? I know I am missing something.
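The only thing I can think of is that the mount scripts are meant to create the directory themselves. Just a guess on my part (the folder name "secure" is from my own setup), something like:

```shell
#!/bin/bash
# -p creates the full path if it is missing and does nothing if it
# already exists, so it should be safe to run on every array start.
mkdir -p /mnt/disks/secure
```

If that's right, then /mnt/disks simply doesn't exist until a script or plugin creates it.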

 

Moving forward. Current script below; I already ran it.

 

Script:

#Local Mount Point
mntpoint="/mnt/user/secure" <<<<<<<<<<<<<< Is this wrong to put under /user as per above?

 

#Remote Share
remoteshare="secure:"

 

mkdir -p $mntpoint
rclone mount --max-read-ahead 1024k --allow-other $remoteshare $mntpoint &

 

ISSUE: After the script ran, I uploaded a 1 MB doc to OneDrive. I then navigated to the Shares GUI, which shows the "secure" folder as unprotected with nothing in it, even though I just added a 1 MB document to the corresponding folder in OneDrive. I would expect to see it in the local /secure folder; why is it not there? Is the sync not instant? Again, I am obviously missing something.

***********************************

Edited by sannitig

OK, forget it. I'm obviously completely lost. Maybe rclone just can't be used the way I want to use it: to write to an Unraid share and have that share backed up to the cloud, while also having the local data protected by parity.

 

I get a "some or all files are unprotected" warning after using the script to create the mount.

 

This seems like a common and simple use case tbh


HEEYYYY OHHHHH!!! WENT FOR A SMOKE AND GOT IT WORKING!!!

 

So I figured out why the free space on my local share was dropping to only 1TB: I believe it is because OneDrive only has 1TB of space!!! Duh!

 

Now that it is working, I have two intelligent questions:

1. Should I be concerned about the "some or all files are unprotected" warning on the local share? I do not have a cache drive, and the share was created by the user script rather than through the GUI as usual. Even if you create it via the GUI and then run the script with the share name, you still trigger the yellow "some or all files are unprotected" warning.

 

2. Writing to this share is SO slow now. Too slow, to be honest. I believe (correct me if I'm wrong) that it is slow because each file is also being written to the OneDrive account. If that is the case, is there a way to have the syncing happen once a day rather than instantly? I will be writing 5GB at a time, and it took 20 seconds to write 60MB to this share! The initial write, once I know everything is good, will be 500GB; that would probably take days, which is unacceptable.
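For example, I'm imagining something like this in a scheduled User Script instead of a live mount (just a sketch using the remote and path from my script above; correct me if this is the wrong approach):

```shell
#!/bin/bash
# Daily one-way push of the local share to the OneDrive remote.
# "secure:" and /mnt/user/secure match the names in my mount script;
# local writes would stay at disk speed because nothing is uploaded
# until this runs.
rclone sync /mnt/user/secure secure: \
    --transfers 4 \
    --log-level INFO \
    --log-file /var/log/rclone-secure.log
```

Scheduled with a cron entry like `0 3 * * *`. My understanding is that `sync` makes the remote match the local side, so files deleted locally would also be deleted from OneDrive.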

Edited by sannitig

Hi, I posted this in the binhex plex forums.

 

I created two, yes, two Google Drive remotes with the rclone plugin.

 

In the Plex Docker settings I added a new path for the new Google Drive mount.

 

So here's the script that I have used with success until yesterday:
 

#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/gdrive

#This section mounts the various cloud storage into the folders that were created above.

rclone mount --allow-other --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --timeout 1h --umask 002 --rc --tpslimit 8 --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off gdrive: /mnt/disks/gdrive &

gdrive worked great. I could browse all subdirectories and Plex would "see" and play any content.

 

I edited the script to reflect the newly created drive:


 

#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/gdrive
mkdir -p /mnt/disks/gdrivenew

#This section mounts the various cloud storage into the folders that were created above.

rclone mount --allow-other --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --timeout 1h --umask 002 --rc --tpslimit 8 --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off gdrive: /mnt/disks/gdrive &
rclone mount --allow-other --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --timeout 1h --umask 002 --rc --tpslimit 8 --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off gdrivenew: /mnt/disks/gdrivenew &

 

Now when I "browse" the directory with terminal commands
 

cd /mnt/disks/gdrive
ls

I only see one directory in there, even though there are 12 in that Google Drive.

 

when I run:
 

cd /mnt/disks/gdrivenew
ls

I see all the directories of that Google Drive.

 

Why does the old drive, which used to work perfectly, now show only one directory out of 12? The new one works perfectly.

 

I hope this makes sense.

 

Any ideas?

 

Thank you!

9 hours ago, tvmainia said:

gdrive worked great. I could browse all subdirectories and Plex would "see" and play any content.

Are you on rclone 1.56?

If so, go to the plugin page, select the beta build, and install it.

 

Release 1.56 has a bug that prevents rclone mount from working properly.

14 hours ago, ich777 said:

Are you on rclone 1.56?

If so, go to the plugin page, select the beta build, and install it.

 

Release 1.56 has a bug that prevents rclone mount from working properly.

I AM indeed on 1.56.

 

When I go to the plugin page, I see:
Stable
Installed version: 1.56.0
Latest version:     1.56.0

 

When I select "Beta" from the pull-down menu, I see:
Beta
Installed version: 1.56.0
Latest version:     Error fetching version number

 

For the heck of it, I tried updating it, but it errors out:
"Update failed - Please try again"

 

Any ideas on how to get the beta to install?

 

Thank you

9 hours ago, tvmainia said:

When I select "Beta" from the pull-down menu, I see:
Beta
Installed version: 1.56.0
Latest version:     Error fetching version number

Any ideas on how to get the beta to install?

There seems to be a problem at rclone's end right now, with no version number being returned for the beta version.

It will probably be fixed soon.


Hey @Waseh

 

Just wanted to summarize our conversation after the issues brought up over here:
  https://forums.unraid.net/topic/112745-stop-useless-backups/

 

It is unfortunate that rclone writes tokens to the config file on the flash drive so often, but the My Servers Flash Backup really amplifies those writes by updating the local git repo every time the config file changes. 

 

Thank you for updating the rclone plugin to prevent the config file from being backed up by the My Servers Flash Backup routine.

 

The downside is that the Flash Backup will no longer back up the config file at all. That is not necessarily a bad thing, since the flash backups are not yet encrypted and this config file contains passwords.


I have successfully set up rclone with Google Drive. It syncs when I copy a file from my PC to the Google Drive local mount and vice versa.

 

However, if I edit a file within the Google Drive local mount, for example a text file, the changes don't stick. After I save the changes, the file reverts to the original and my changes are not reflected.

 

If possible, how can I edit a file within the Google Drive mount?
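In case it matters, I mounted without any cache flags. I suspect a write cache might be needed so files can be opened for random-access writing, which text editors seem to require. Something like this, maybe (just a guess on my part; the remote name and mount point are from my setup):

```shell
#!/bin/bash
# Mount with a local write cache so files can be rewritten in place
# instead of only supporting sequential writes.
mkdir -p /mnt/disks/gdrive
rclone mount --allow-other \
    --vfs-cache-mode writes \
    gdrive: /mnt/disks/gdrive &
```

Is that the right direction?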

 

Thank you.

10 hours ago, IMTheNachoMan said:

First, thank you for making this app!

 

There are 37 pages in this thread, so I apologize if this was already covered and I missed it.

 

Rclone now has a web UI capability. Any plans to add that to the Unraid UI for rclone? Like a quick way to start/stop it and change settings (like the port and password)?

I actually considered this when I first saw it a couple of years ago, but then, and seemingly still, the feature is classified as experimental.
When it gets more mature (or at least loses the experimental tag) I'd love to look into adding it to the plugin in a more seamless way.

Edited by Waseh
4 hours ago, Waseh said:

I actually considered this when I first saw it a couple of years ago, but then, and seemingly still, the feature is classified as experimental.
When it gets more mature (or at least loses the experimental tag) I'd love to look into adding it to the plugin in a more seamless way.

 

Fair. Thank you. 

 

I was trying to see how to develop a plugin, to find out if I could do it myself, but I couldn't figure out how the actual code works. I think it uses PHP. I'll see if I can figure it out. :/


Hi, has anyone got any working scripts etc. for backing up Teams (I guess OneDrive would do) to local storage? It's a not-so-well-known fact that Azure doesn't keep any backups, other than to protect itself, so I was thinking of pulling this down to a ZFS array so I can use znapzend across it like I do with everything else.

 

Seems like rclone could be a great solution.
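I'm imagining something along these lines (just a sketch; `teams:` would be whatever I name the remote in rclone config, and the local path is a placeholder for my ZFS dataset):

```shell
#!/bin/bash
# Nightly pull of a Teams/OneDrive site into a local ZFS dataset,
# which could then be snapshotted with znapzend like anything else.
rclone sync teams: /mnt/zfspool/teams-backup \
    --create-empty-src-dirs \
    --log-level INFO \
    --log-file /var/log/rclone-teams.log
```

Happy to hear if anyone has refined this further.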

 

Thanks,

 

Marshalleq


Hello,

 

I updated to 6.10.0-rc2 yesterday and now the folders inside my mergerfs rclone folder are read-only. I tried Tools/NewPerms and that fixed it for a few minutes, but then all folders went back to read-only.

 

I also updated to latest rclone beta but that didn't help either.

 

\\SERVER\mount_mergerfs\gdrive_media_vfs\ <-- I can write files here

\\SERVER\mount_mergerfs\gdrive_media_vfs\Stuff1\ <-- read-only

\\SERVER\mount_mergerfs\gdrive_media_vfs\Stuff2\ <-- read-only

\\SERVER\mount_mergerfs\gdrive_media_vfs\Stuff3\More_Stuff1\ <-- I can write files here

\\SERVER\mount_mergerfs\gdrive_media_vfs\Stuff4\More_Stuff2\ <-- I can write files here

 

Any help is appreciated :)

 

edit-

 

Downgraded to 6.10.0-rc1, ran Tools/NewPerms again, and that seems to have fixed it :)

 

edit-

server-diagnostics-20211103-2004.zip

Edited by X672

Here is what I am using:

 

rclone mount --max-read-ahead 1024k --umask=0 --vfs-cache-mode writes --allow-other dropbox: /mnt/disks/dropbox

 

With 6.10.0-rc2 it appears that the permissions are incorrect on the mounted folders. When the folders are shared (at least with a Windows machine), you can read the files but you cannot write. Adding --umask=0 solved the issue for me.

Edited by bwarlick

I'm running Unraid 6.9.2 and rclone 1.57.0.
 

Upon array startup I have rclone set up to mount a Dropbox drive to a share that I have created.

 

Mount script:

rclone mount --max-read-ahead 1024k --allow-other dropbox: /mnt/user/dropbox &

 

It seems to work great. I can access the share from anywhere via the UNC path \\server\dropbox\.

 

Today I noticed that my system log has been getting an error every second.

The Error:

Dec 20 03:48:40 Jarvis emhttpd: error: share_luks_status, 5995: Operation not supported (95): getxattr: /mnt/user/dropbox

 

Obviously I am doing something wrong. I have only been using Unraid for a few weeks, so I guess I am missing something. I have attached my diagnostics.

 

If anyone can help, I would really appreciate it.

jarvis-diagnostics-20211221-1502.zip
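The only thing I can think of so far is that the earlier scripts in this thread mount under /mnt/disks instead of /mnt/user, so maybe moving the mount point out of the user-share tree would stop emhttpd from running its share checks against the FUSE mount. Something like this, perhaps (just a guess, keeping the flags from my current script):

```shell
#!/bin/bash
# Mount the Dropbox remote outside /mnt/user so it is not treated
# as a user share by emhttpd.
mkdir -p /mnt/disks/dropbox
rclone mount --max-read-ahead 1024k --allow-other \
    dropbox: /mnt/disks/dropbox &
```

Though I'd then presumably need another way to share that folder over the network.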

