[Plugin] rclone


Waseh

Recommended Posts

Don't know if this is possible, but I am encrypting my google drive now and getting rid of my unencrypted files. I still want to keep a local copy on my unraid; right now I use rclone sync to download from the unencrypted google drive to my unraid.

 

my question now is: how do I download my encrypted files to unraid, but unencrypted, as I don't need encryption on my unraid?

Link to comment
You can use rclone mount on an encrypted remote. When you do this, it will create a file system for the remote in unencrypted format. You will be able to browse, view, and download your files from this rclone mount filesystem in their unencrypted form.

If that didn't make sense to you, I'd suggest reading more of the official rclone documentation and experimenting.
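If it helps, here is a minimal sketch (the remote name gcrypt and the local paths are assumptions; substitute whatever your crypt remote is called in your rclone.conf):

```shell
# Copying FROM the crypt remote decrypts on the fly, so the local
# copy arrives unencrypted (remote and paths here are examples):
rclone sync gcrypt: /mnt/user/localbackup --progress

# Or mount the crypt remote and browse/copy the decrypted files:
mkdir -p /mnt/disks/gcrypt
rclone mount --allow-other gcrypt: /mnt/disks/gcrypt &
```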

It seems the great majority of questions here lately aren't exactly questions about this plugin.....but more about basic usage of rclone itself. Glad to help when I can, but I really wonder whether this thread is the place to be discussing these things :/
  • Like 1
Link to comment



Hi, another question:
 
is it possible to have 2 rclone mounts? For example, one encrypted and one non-encrypted, to different folders?
 
I get an error when I try to mount the second drive


Provide both of your rclone mount commands and the reported error message.
Also, if you are doing rclone mount within a terminal, the mount will exist only as long as the terminal stays active
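For example (the mountpoint and remote name are just placeholders), detaching the mount from the terminal session might look like:

```shell
# nohup keeps the mount alive after the terminal closes;
# running the same line from a startup script works the same way
mkdir -p /mnt/disks/gdrive
nohup rclone mount --allow-other gdrive: /mnt/disks/gdrive &
```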
Link to comment
15 hours ago, Stupifier said:

Provide both of your rclone mount commands and the reported error message.
Also, if you are doing rclone mount within a terminal, the mount will exist only as long as the terminal stays active

 

# Local mountpoint
mntpoint="/mnt/disks/gdrive"     # It's recommended to mount your remote share in /mnt/disks/subfolder -
                                 # this is the only way to make it accessible to dockers
mntpoint2="/mnt/disks/gdrive2"

# Remote share
remoteshare="gcrypt2:/Plex/"     # If you want to share the root of your remote share you have to
                                 # define it as "remote:" eg. "acd:" or "gdrive:"
remoteshare2="gdrive:/Plex/"

#---------------------------------------------------------------------------------------------------------------------

mkdir -p $mntpoint
mkdir -p $mntpoint2

rclone mount --rc --allow-other --buffer-size 16M --dir-cache-time 2m --drive-chunk-size 64M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off --user-agent="backup" $remoteshare $mntpoint &

rclone mount --rc --allow-other --buffer-size 16M --dir-cache-time 2m --drive-chunk-size 64M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off --user-agent="backup" $remoteshare2 $mntpoint2 &
 

I get this error:

2019/12/27 09:05:22 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use

also is it possible to mount to a share dir instead? like /mnt/user/gdrive ? so i can see it on windows explorer?

Edited by HarryRosen
Link to comment
1 hour ago, HarryRosen said:

2019/12/27 09:05:22 Failed to start remote control: start server failed: listen tcp 127.0.0.1:5572: bind: address already in use

also is it possible to mount to a share dir instead? like /mnt/user/gdrive ? so i can see it on windows explorer?

Your error is because you are using the remote control flag (--rc) in both of your mount commands. If you don't know what that is, just remove it and you will no longer see that error. Research "rclone remote control"....most people don't need it or use it. If you DO in fact need the remote control flag, then you must assign a unique port to your second mount command. As it stands, your second mount command is trying to use the same port as your first mount command (the default port, 5572).

https://rclone.org/rc/
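If you do keep --rc on both mounts, a sketch of the fix (the port numbers are arbitrary examples) is to give each mount its own --rc-addr:

```shell
# The default rc port is 5572; only one process can bind it,
# so the second mount gets a different port:
rclone mount --rc --rc-addr 127.0.0.1:5572 gcrypt2:/Plex/ /mnt/disks/gdrive &
rclone mount --rc --rc-addr 127.0.0.1:5573 gdrive:/Plex/ /mnt/disks/gdrive2 &
```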

 

Now, onto your second question. Your mount does NOT need to be in a share in order for it to be seen by Windows Explorer. In your Unraid GUI dashboard, go to Settings ---> SMB Shares. Here is an example of what to put in the Samba Extra Configuration field:

[gdrive]
path = /mnt/disks/gdrive
comment =
browseable = yes
# Public
public = yes
read only=no
writeable=yes
writable=yes
write ok=yes
guest ok = yes
vfs objects =

 

All of that may NOT be entirely necessary, but you get the point.....Also, as it says on that page, your array must be stopped in order to apply these changes. Once you have made and applied them, start up your array again. The mount should then be available (obviously, be sure your rclone mount command is active). I typically map them as network drives in Windows.

 

Fair warning though......rclone mounts ARE NOT good targets to write into frequently. People ALWAYS want to try to write stuff straight into their rclone mounts. It is unreliable at best. You should only expect to READ from an rclone mount.
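A common workaround (the paths and remote name here are examples, not a prescription): write new files to local storage first, then push them to the remote on a schedule with rclone move:

```shell
# Stage writes locally, then move them up to the remote; the
# mount stays read-only from the applications' point of view:
rclone move /mnt/user/staging gcrypt: --delete-empty-src-dirs
```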

Edited by Stupifier
Link to comment
51 minutes ago, Stupifier said:


Made the changes and all is good, thank you again

Link to comment

Guys, please help me out. This has been a frustrating week. I went from creating VMs for Arq Backup, to Duplicacy, and finally decided to use rclone, and I'm still having issues with this.

 

I am running minio with a letsencrypt docker on a backup unraid server

When I run rclone it's ridiculously slow, at like 400K/s, and then errors out. I'm not sure what's happening. Both servers are on my local network for now, but I have the location set to the https:// endpoint in rclone. These are the errors that I am getting. Would really appreciate your help.

 

MultipartUpload: upload multipart failed
upload id: 26d9ce71-9b32-4ff1-8670-c20a00d05cae
caused by: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 15E44E9A5F3FEDAE, host id: 
2019/12/27 13:43:14 ERROR : Encrypted drive 'secure:': not deleting files as there were IO errors
2019/12/27 13:43:14 ERROR : Encrypted drive 'secure:': not deleting directories as there were IO errors
2019/12/27 13:43:14 ERROR : Attempt 2/3 failed with 3 errors and: MultipartUpload: upload multipart failed
upload id: 26d9ce71-9b32-4ff1-8670-c20a00d05cae
caused by: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 15E44E9A5F3FEDAE, host id: 
2019/12/27 13:43:14 INFO  : Encrypted drive 'secure:': Waiting for checks to finish
2019/12/27 13:43:14 INFO  : Encrypted drive 'secure:': Waiting for transfers to finish
2019/12/27 13:44:12 INFO  : 
Transferred:         240M / 4.507 GBytes, 5%, 455.269 kBytes/s, ETA 2h44m1s
Errors:                 0
Checks:                 3 / 3, 100%
Transferred:            0 / 1, 0%
Elapsed time:     8m59.8s

Edited by maxse
Link to comment

okay so I tried just entering the local ip address as the endpoint and the speeds were fast!

Then I went back to https:// and even tried entering the duckdns address without the https prefix, and the speed was pretty much non-existent at like 100k/s. Do you guys think it's because both servers are on the same network, and it will speed up once I move the backup unraid offsite? Or is this some other kind of issue? I am not specifying the port with the duckdns address, since I am using letsencrypt to forward that to the minio docker.

 

I have no issues accessing the minio docker in my web browser by going to the https:// link

 

*EDIT* So I just installed rclone on my macbook and enabled VPN, I am still getting 200k/s speed when using the duckdns address. Could this be an issue with letsencrypt and how it processes rclone?

Edited by maxse
Link to comment

Will I have to set up rclone again? I have set up tdrive and really cannot be bothered resetting it all at the moment


Sent from my iPhone using Tapatalk
Uninstalling rclone should not delete your rclone.conf file. But just in case, back it up. You can find the location of your rclone.conf file by running this command in the terminal:
"rclone config file"
Link to comment
