[Plugin] rclone


Waseh


On 7/19/2020 at 12:17 PM, spiderben25 said:

Hi,

I'm having the exact same problem. Did you find a solution? Thanks!

 

 

EDIT: seems like it's a permission issue. When I SSH in as root I can see all the mounted files, but they are owned by root:root. I'll try to edit the mount script accordingly.
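
Something along these lines in the mount command should hand ownership to Unraid's default nobody:users (99:100) instead of root (just a sketch; the remote name and mount point are placeholders):

rclone mount gdrive: /mnt/disks/gdrive --allow-other --uid 99 --gid 100 --umask 002 &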

 

Hey, sorry I'm seeing this so late - were you able to resolve this?

Link to comment

So I've done the full install, and Unraid mounts the gdrive, but I can't see anything in it via Plex or anywhere else in Unraid, unless I use a terminal, in which case I can see the files listed from the Unraid instance. But no Docker container can see it. Any help would be greatly appreciated. --allow-other is in the script (provided from the GitHub), and all the spots that need my naming scheme are filled in. Let me know if any logs would be helpful.
TIA!

Link to comment
On 7/27/2020 at 11:02 AM, rzeeman711 said:

Thanks for sharing this here. Rclone is working for me again. For anyone else who comes here looking for a solution, you just need to reboot Unraid and the rclone plugin will be updated with the fix.

How did you update your plugin? Mine still shows 2019.11.01 as the latest version of the beta plugin.

Link to comment
8 minutes ago, rzeeman711 said:

There is no new version; a fix was applied to that version. Restarting Unraid pulls the fix. If you restarted, you should be good.

Mine still didn't fix anything. Plex still can't see anything inside the folder, but I can go into the terminal and see everything, despite having --allow-other set up. Also, which version of the plugin are you using? The Waseh version isn't fixed for me.

Link to comment
12 minutes ago, rzeeman711 said:

There is no new version; a fix was applied to that version. Restarting Unraid pulls the fix. If you restarted, you should be good.

 

3 minutes ago, Millerboy3 said:

Mine still didn't fix anything. Plex still can't see anything inside the folder, but I can go into the terminal and see everything, despite having --allow-other set up. Also, which version of the plugin are you using? The Waseh version isn't fixed for me.


Nvm. I watched the install this time and it looks like it's updated. However... now I'm getting a critical error in my mount script. -_- Attempting to reboot now to see if that takes care of it.

Link to comment
1 minute ago, Millerboy3 said:

Mine still didn't fix anything. Plex still can't see anything inside the folder, but I can go into the terminal and see everything, despite having --allow-other set up. Also, which version of the plugin are you using? The Waseh version isn't fixed for me.

I am using the same version, 2019.11.01 by Waseh. I had the same problem as you, and it was fixed upon restart after mcrommert's comment. I'm not sure why your setup is different, but keep working at it.

Link to comment
1 hour ago, rzeeman711 said:

I am using the same version, 2019.11.01 by Waseh. I had the same problem as you, and it was fixed upon restart after mcrommert's comment. I'm not sure why your setup is different, but keep working at it.

Yeah, I'm still pushing at it. I'll hopefully figure it out soon; this will make my life a heck of a lot easier.

Got it working. Thanks for the fix. That was a hassle. Apparently I should have left my initial scripts alone; they worked without a hitch. Odd.

Edited by Millerboy3
Link to comment
18 hours ago, Millerboy3 said:

Mine still didn't fix anything. Plex still can't see anything inside the folder, but I can go into the terminal and see everything, despite having --allow-other set up. Also, which version of the plugin are you using? The Waseh version isn't fixed for me.

I'm having the same issue. I just rebooted, and despite seeing my folders in the terminal, I can't view them anywhere else (SMB, Krusader, etc.).

 

This is frustrating.

Link to comment
  • 2 weeks later...

So I thought I had this running. Turns out it wasn't, so I'm back to square one. I moved and cleared all data in the mount locations and tried again, but I keep getting "failed drive not empty" etc. So I changed the info, and it worked, but it didn't mount my gdrive share at all. Sort of lost at this point. I followed the GitHub guide to a T as far as I know and just can't figure out what I'm doing wrong. Should I just attach my config files (sans the creds portion) and see if someone can look them over?

Config Files

For whatever reason, this also causes my Unraid server to hang on unmounting.

I rebooted the server one last time to remove some dirs, clear it, and try again, and it finally worked. No changes made. Mind-boggling. -_-

 

Edited by Millerboy3
Link to comment
Millerboy3 said:

I rebooted the server one last time to remove some dirs, clear it, and try again, and it finally worked. No changes made. Mind-boggling.
 
Standard IT...."Did you turn it off and on again?"
Haha
Link to comment
1 hour ago, Dexm57 said:

I went through the process as best I could. I'm able to access the file structure and see file names, but I can't open any of them or copy from the GDrive. Any idea where I went wrong?

Ya gotta give more info than that for anyone to help you.

 

  1. What process?
  2. How exactly are you trying to copy from Gdrive? What command?

 

Link to comment
  • 2 weeks later...

Hi, I recently set up rclone with Google Drive as a backup destination using SpaceInvaderOne's guide. While archiving some files, I noticed that my files were being uploaded at around 20 MB/s despite having a gigabit FiOS connection. Based on some Googling, I'm thinking increasing my chunk size might improve speeds.

 

But how do I go about increasing the chunk size? I've attached my rclone mount script if that's of any help.

Also, how does this affect the items I have already uploaded (if it affects them at all)?

Link to comment
32 minutes ago, rragu said:

Hi, I recently set up rclone with Google Drive as a backup destination using SpaceInvaderOne's guide. While archiving some files, I noticed that my files were being uploaded at around 20 MB/s despite having a gigabit FiOS connection. Based on some Googling, I'm thinking increasing my chunk size might improve speeds.

 

But how do I go about increasing the chunk size? I've attached my rclone mount script if that's of any help.

Also, how does this affect the items I have already uploaded (if it affects them at all)?

https://rclone.org/commands/rclone_mount/

https://rclone.org/flags/
CTRL+F "chunk"

And FWIW, it is not the best idea to write to an rclone Google Drive mount. It's just a widely known tip... those rclone mounts are geared more towards reads. If you want to write something to a Google Drive remote, do it manually using the "rclone copy" command; the flags link I provided includes a flag to designate chunk size.

Also, even though you have a gigabit FiOS connection... you still may not saturate it using rclone/Google. That might just be the speed you get with Google. It's different for everyone.
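
For example, a manual upload with a larger chunk size might look something like this (paths and remote name are placeholders; --drive-chunk-size is the Google Drive-specific flag, and bigger chunks use more RAM per transfer):

rclone copy /mnt/user/backups gdrive:backups --drive-chunk-size 64M -P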

Link to comment
10 minutes ago, Stupifier said:

https://rclone.org/commands/rclone_mount/

https://rclone.org/flags/
CTRL+F "chunk"

And FWIW, it is not the best idea to write to an rclone Google Drive mount. It's just a widely known tip... those rclone mounts are geared more towards reads. If you want to write something to a Google Drive remote, do it manually using the "rclone copy" command; the flags link I provided includes a flag to designate chunk size.

Also, even though you have a gigabit FiOS connection... you still may not saturate it using rclone/Google. That might just be the speed you get with Google. It's different for everyone.

Thanks! I'll look into the resources you posted.

 

As for not writing to the rclone Google Drive mount: (1) it's a slightly more widely known tip now 😅; (2) while I'll switch to using "rclone copy", is there any particular negative effect to transferring data to Google Drive the way I've been doing it (e.g. data loss/corruption), or is it just lower performance?

Link to comment
5 minutes ago, rragu said:

Thanks! I'll look into the resources you posted.

 

As for not writing to the rclone Google Drive mount: (1) it's a slightly more widely known tip now 😅; (2) while I'll switch to using "rclone copy", is there any particular negative effect to transferring data to Google Drive the way I've been doing it (e.g. data loss/corruption), or is it just lower performance?

The biggest issue with writing to the rclone mount is just flat-out reliability. People pretty much ALWAYS complain about it. Either it gives errors, or it's slow, or it doesn't copy everything you told it to. Pretty much, it isn't something you'd want to use/trust. I know it's convenient... sorry.

It is such a well-known thing that there are a ton of very popular scripts around GitHub that monitor directories and perform regular rclone sync/copy jobs for you in the background on a schedule (like every 2 minutes, or whatever you set). One such script is called "Cloudplow": very well documented and mature, and easy to find on GitHub.

 

Now, rclone mount is absolutely remarkable as a read source. It is excellent for reading.
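
Roughly, the idea those scripts automate is just a scheduled upload job; a bare-bones sketch (placeholder paths, with --min-age so files still being written get skipped):

rclone move /mnt/user/uploads gdrive:media --min-age 15m --transfers 4 --log-file /mnt/user/rclone-upload.log

Run something like that from cron or Unraid's User Scripts plugin every few minutes and you get the same effect without ever writing to the mount.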

Link to comment
2 hours ago, Stupifier said:

The biggest issue with writing to the rclone mount is just flat-out reliability. People pretty much ALWAYS complain about it. Either it gives errors, or it's slow, or it doesn't copy everything you told it to. Pretty much, it isn't something you'd want to use/trust. I know it's convenient... sorry.

It is such a well-known thing that there are a ton of very popular scripts around GitHub that monitor directories and perform regular rclone sync/copy jobs for you in the background on a schedule (like every 2 minutes, or whatever you set). One such script is called "Cloudplow": very well documented and mature, and easy to find on GitHub.

 

Now, rclone mount is absolutely remarkable as a read source. It is excellent for reading.

Just tried out "rclone copy"....the difference is night and day

 

Test files: 4 files (12.3 GB total; between 2.3 and 3.6 GB each)

Average transfer speed using rclone mount: 19.4 MB/s

Average transfer speed using "rclone copy": 60.9 MB/s

Average transfer speed using "rclone copy" with chunk size 256M: 78.1 MB/s

 

The only drawback is heightened CPU/RAM usage, but I'm sure I can manage that with a script like you mentioned.

 

Thanks very much for all your help!

Link to comment
2 hours ago, rragu said:

Just tried out "rclone copy"....the difference is night and day

 

1.) Check rclone's documentation for your destination. Every destination has its own special parameters and behaviour, and Google Drive is no exception:

https://rclone.org/drive/

 

2.) Do not copy/sync to the destination's mount path, like /mnt/disks/gdrive. Use only the rclone remote, like gdrive:subfolder. rclone does not work properly if the target is a local path. Example: if you use the mount path and your destination is a WebDAV server, it does not preserve the file modification time.
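
To illustrate with placeholder paths, the difference is only the target:

rclone copy /mnt/user/Software /mnt/disks/gdrive/Software   (local mount path, avoid)

rclone copy /mnt/user/Software gdrive:Software   (rclone remote, use this)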

 

3.) If it does not preserve the file modification time, it must use checksums to verify whether the destination file is the same before it can skip (or overwrite) it. This means it downloads and calculates checksums (high CPU usage) while uploading at the same time (rclone uses 8 parallel --checkers for this).

 

4.) There are minor performance differences between sync and copy as they behave a little differently, but this should not influence the transfer speed of files:

sync = deletes files from the destination that are not present on the source

copy = ignores files that already exist on the destination
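
If you are unsure which one you want, a dry run shows what sync would change (including deletions) before anything actually happens (placeholder paths again):

rclone sync /mnt/user/Software gdrive:Software --dry-run -v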

 

Conclusion:

Check whether your transfer preserves the timestamps. If yes, disable the checksum check with --ignore-checksum (if you are OK with that) and test different --checkers and --transfers values (4 is the default for --transfers) until you reach the best performance. For example, I use "--checkers 2 --transfers 1" for my Nextcloud destination, as there was no benefit to uploading multiple files: two parallel uploads did not raise the total transfer speed compared to one.

My command as an example:

rclone sync /mnt/user/Software owncube:software --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -vv --stats 10s

Link to comment
