Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

[NOW RESOLVED]

 

I recently moved hardware (the drives stayed the same - they just moved to a new motherboard and CPU). Before the move everything ran smoothly, but now it keeps having issues with the mount. Having said that, I believe rclone also received an update around the same time, so I'm not sure what the issue could be.
 

 

 

Script location: /tmp/user.scripts/tmpScripts/Rclone_mount/script
Note that closing this window will abort the execution of this script
27.09.2020 21:05:38 INFO: Creating local folders.
mkdir: cannot stat '/mnt/user/mount_rclone/gdrive_media_vfs': Transport endpoint is not connected
27.09.2020 21:05:38 INFO: *** Starting mount of remote gdrive_media_vfs
27.09.2020 21:05:38 INFO: Checking if this script is already running.
27.09.2020 21:05:38 INFO: Script not running - proceeding.
27.09.2020 21:05:38 INFO: *** Checking if online
27.09.2020 21:05:39 PASSED: *** Internet online
27.09.2020 21:05:39 INFO: Mount not running. Will now mount gdrive_media_vfs remote.
27.09.2020 21:05:39 INFO: Recreating mountcheck file for gdrive_media_vfs remote.
2020/09/27 21:05:39 DEBUG : rclone: Version "v1.53.1" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "copy" "mountcheck" "gdrive_media_vfs:" "-vv" "--no-traverse"]
2020/09/27 21:05:39 DEBUG : Creating backend with remote "mountcheck"
2020/09/27 21:05:39 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/09/27 21:05:39 DEBUG : fs cache: adding new entry for parent of "mountcheck", "/"
2020/09/27 21:05:39 DEBUG : Creating backend with remote "gdrive_media_vfs:"
2020/09/27 21:05:39 DEBUG : Creating backend with remote "gdrive:crypt"
2020/09/27 21:05:41 DEBUG : mountcheck: Modification times differ by -12h2m28.852390437s: 2020-09-27 21:05:39.806390437 +0100 BST, 2020-09-27 08:03:10.954 +0000 UTC
2020/09/27 21:05:43 INFO : mountcheck: Copied (replaced existing)
2020/09/27 21:05:43 INFO :
Transferred: 32 / 32 Bytes, 100%, 13 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 3.9s

2020/09/27 21:05:43 DEBUG : 4 go routines active
27.09.2020 21:05:43 INFO: *** Creating mount for remote gdrive_media_vfs
27.09.2020 21:05:43 INFO: sleeping for 5 seconds
2020/09/27 21:05:44 Fatal error: Can not open: /mnt/user/mount_rclone/gdrive_media_vfs: open /mnt/user/mount_rclone/gdrive_media_vfs: transport endpoint is not connected
27.09.2020 21:05:48 INFO: continuing...
27.09.2020 21:05:48 CRITICAL: gdrive_media_vfs mount failed - please check for problems. Stopping dockers

If I reboot the server, the mounts come up for a short while before being terminated, and they won't mount again until I restart it.

 

Any help would be greatly appreciated in figuring out what has gone wrong or what I have set up incorrectly.
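(For reference: "Transport endpoint is not connected" usually means a stale FUSE mount is still registered at the mount point, so the folder can't be used until it is cleared. A rough sketch of clearing it by hand before re-running the mount script - the path is taken from the log above, so adjust for your setup:)

fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs   # lazily unmount the stale rclone mount
ls /mnt/user/mount_rclone/gdrive_media_vfs               # should now list an empty folder instead of erroring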

 

Not sure what I had done wrong but now it seems to work again after being down for a couple of days.

tower-diagnostics-20200927-2104.zip

Edited by GreenGoblin
Resolved itself.
Link to comment
21 hours ago, KeyBoardDabbler said:

Looks like you got it right - changing LocalFilesShare to /mnt/user/local has cleared the error from the logs. I will test the uploading tomorrow as I have work early in the morning.

 

Makes me feel a little better that it wasn't something I changed by mistake, after trying to figure it out for the last week. Any idea why this had been working and now all of a sudden has stopped?

Also, I thought it was best practice to download to an unassigned device to prevent unnecessary reads/writes on the array.

Dunno - have you upgraded unRAID?  If not, maybe upgrade unRAID and make your download drive a pool drive rather than UD.

Link to comment

Hi guys 

 

I need help please.

 

I'm trying to switch from unionfs to mergerfs.

 

But I'm lost with the config files, etc.

 

Currently I just need an upload folder for the upload script and a gdrive mount.

 

So what exactly should I strip from the config?

 

 

Is this OK?

 

RcloneRemoteName="crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
LocalFilesShare="/mnt/user/mount_rclone_upload" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="ignore" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="plex" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page

 

Do we really need this?

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount 

 

 

thx

Edited by francrouge
Link to comment
On 9/28/2020 at 8:45 PM, DZMM said:

Dunno - have you upgraded unRAID?  If not, maybe upgrade unRAID and make your download drive a pool drive rather than UD.

I am running on beta 29 - possibly something broke in a recent update, around beta 25 maybe. Should I report this as a bug?

 

Everything is working now pointed at a user share, but nzbget does not like the change. I have noticed a big difference in my speeds...

 

I was hoping to eventually move (sonarr, nzbget, etc.) off my Unraid server and onto a NUC that I have spare. Do you know if this would be an option with the mergerfs mount, or would I run into the same issues? Basically my "LocalFilesShare" would point to the NUC running on the network.

something like LocalFilesShare="//192.168.1.10/seedbox/downloads"??

Edited by KeyBoardDabbler
Link to comment
On 9/28/2020 at 8:41 PM, francrouge said:

MergerfsMountShare="ignore" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable

You need to specify where you want the mergerfs mount to be.
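For example, using the path from the config comment:

MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash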

 

On 9/28/2020 at 8:41 PM, francrouge said:

Do we really need this?

MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount 

 

You don't have to create your folders this way - I was just trying to make life easier for new users. I think leaving this empty does create problems, though, so just list two or more of your current folders (I think there's a bug if you only list one folder).
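A minimal example, assuming "movies" and "tv" are folders you already use:

MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount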

Link to comment
59 minutes ago, KeyBoardDabbler said:

I am running on beta 29 - possibly something broke in a recent update, around beta 25 maybe. Should I report this as a bug?

 

Everything is working now pointed at a user share, but nzbget does not like the change. I have noticed a big difference in my speeds...

If it was working before, then my suspicion is that beta 29 could be the culprit. I've just had to roll back from beta 29 as I've had 3 machine lockouts in 24 hours.

 

Re speeds - rather than using UDs, why not make your download drive a new pool drive with a cache/pool-only 'downloads' share? For me the biggest benefit of the 6.9 betas is having more options for drives with shares that don't need to touch the array (even though I don't have a parity drive) and avoiding potential R/W slave issues, without having to resort to UD. E.g. here's my current structure, where only my cache has shares that are moved to the array:

 

[Screenshot: Unraid Main tab showing the drive/pool and share layout]

Link to comment
1 hour ago, KeyBoardDabbler said:

I was hoping to eventually move (sonarr, nzbget, etc.) off my Unraid server and onto a NUC that I have spare. Do you know if this would be an option with the mergerfs mount, or would I run into the same issues? Basically my "LocalFilesShare" would point to the NUC running on the network.

something like LocalFilesShare="//192.168.1.10/seedbox/downloads"??

I don't see why it would cause a problem.  To be safe, I'd probably add the SMB share using UD.
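A rough sketch of what that could look like - the mount point is an assumption, as Unassigned Devices typically puts remote SMB shares under /mnt/remotes on newer releases (/mnt/disks on older ones):

LocalFilesShare="/mnt/remotes/192.168.1.10_seedbox/downloads" # hypothetical UD mount point for the NUC share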

Link to comment
On 9/24/2020 at 10:14 AM, DZMM said:

If you just want to sync, then there's no point mounting as you've already got a local copy.  I would just use the upload script but set it to sync not move:

 


RcloneCommand="sync" # choose your rclone command e.g. move, copy, sync

 

EDIT: I think I solved it - I found the video from SpaceInvaderOne which used a much simpler script, and ended up with this.
Gonna try syncing the share and doing a reboot to see if it's persistent and whether new files are uploaded/downloaded correctly.
 

 

#!/bin/bash
#----------------------------------------------------------------------------
#This section makes the folder for the sync target so docker containers and network shares can have access
#(the video creates several folders under /mnt/disks for multiple remotes - only OneDrive is needed here)

mkdir -p /mnt/user/NAS/OneDrive

#This section syncs the cloud storage into the folder that was created above.

rclone sync OneDrive: /mnt/user/NAS/OneDrive

 

 

 

----------------------------------------------------------- 

 

 

 

Sorry for the late reply, didn't have time to focus on my private stuff cause of work =/ 

I now tried using the upload script: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_upload 

Changed the required settings to my paths, and to sync. 

 

# REQUIRED SETTINGS
RcloneCommand="sync" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="OneDrive" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="OneDrive" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/NAS/OneDrive" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/NAS/OneDrive" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone


Executing this script gives me an error that rclone is not installed:

Script location: /tmp/user.scripts/tmpScripts/rclone_Sync/script
Note that closing this window will abort the execution of this script
04.10.2020 08:58:15 INFO: *** Rclone move selected. Files will be moved from /mnt/user/NAS/OneDrive/OneDrive for OneDrive ***
04.10.2020 08:58:15 INFO: *** Starting rclone_upload script for OneDrive ***
04.10.2020 08:58:15 INFO: Script not running - proceeding.
04.10.2020 08:58:15 INFO: Checking if rclone installed successfully.
04.10.2020 08:58:15 INFO: rclone not installed - will try again later.

Executing the command "rclone listremotes" over SSH on my unRAID server works as I would expect:

 

root@Unraid:/mnt/user/NAS# rclone listremotes
GoogleDrive:
OneDrive:


Any pointers?
I don't have any other rclone user scripts running at the moment, and the paths above are empty - just trying to verify functionality first, as I don't want to mess up my OneDrive :)

 

Edited by martikainen
Link to comment
2 hours ago, martikainen said:

EDIT: I think I solved it - I found the video from SpaceInvaderOne which used a much simpler script...

Executing this script gives me an error that rclone is not installed:

04.10.2020 08:58:15 INFO: Checking if rclone installed successfully.
04.10.2020 08:58:15 INFO: rclone not installed - will try again later.

Any pointers?

Ahhh, the logic in my upload script isn't quite right.  It looks for the mountcheck file in the mount location - if you don't mount, then there is no mountcheck file!
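(A possible workaround until the script logic changes - an untested sketch that creates the mountcheck file where the upload script looks for it, using the RcloneMountShare path from the post above:)

touch /mnt/user/NAS/OneDrive/mountcheck   # create the file the upload script checks for
rclone copy /mnt/user/NAS/OneDrive/mountcheck OneDrive: --no-traverse   # optionally mirror it to the remote, as the mount script does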

Link to comment
4 hours ago, DZMM said:

Ahhh, the logic in my upload script isn't quite right.  It looks for the mountcheck file in the mount location - if you don't mount, then there is no mountcheck file!

I've been trying out the sync command now. Have I understood correctly that I'll need to run the sync command on a schedule to download/upload files? It can't do that automatically when a file is placed either locally or in my OneDrive?

I chose to run the script "in background", thinking it would always be active, but no file changes are being made.

So to make it always check for new files I should use a "Custom" schedule and type in */2 * * * *, which would run the script every second minute. What happens if there's a huge file that hasn't finished in 2 minutes - will it most likely crash the cron schedule, or just wait for the next run? :)
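(Cron itself won't crash if a run overlaps the next scheduled start - it simply launches another instance. A hedged sketch of guarding against overlapping runs with flock, assuming the User Scripts path shown earlier:)

*/2 * * * * flock -n /tmp/rclone_sync.lock /tmp/user.scripts/tmpScripts/rclone_Sync/script   # -n skips a run if the previous one is still going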
 

Link to comment

#!/bin/bash
#----------------------------------------------------------------------------
#first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
#there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
#you only need as many as what you need to mount for dockers or a network share

mkdir -p /mnt/disks/tdrive
mkdir -p /mnt/disks/gdrive

#This section mounts the various cloud storage into the folders that were created above.

rclone mount --max-read-ahead 1024k --allow-other tdrive: /mnt/disks/tdrive &
rclone mount --max-read-ahead 1024k --allow-other gdrive: /mnt/disks/gdrive &
This is my tdrive and gdrive mount script.
I am only going to point Plex at my tdrive for me and a few friends to use; uploading is handled by a cloud server. Do I need to use mergerfs, or is it OK to carry on like this? My drive is
/mnt/disks/tdrive/movies


Sent from my iPhone using Tapatalk

Link to comment

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFo

How should I change this so I can mount my gdrive and tdrive? They are both unencrypted.
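(An assumption based on how the script is laid out - one RcloneRemoteName per script instance - so you would likely run two copies of the mount script, each pointed at its own remote, e.g.:)

# copy 1 (hypothetical values)
RcloneRemoteName="gdrive" # unencrypted remote, WITHOUT ':'
RcloneMountShare="/mnt/user/mount_rclone"

# copy 2 (hypothetical values)
RcloneRemoteName="tdrive"
RcloneMountShare="/mnt/user/mount_rclone"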


Sent from my iPhone using Tapatalk

Link to comment

Possibly off topic, but has anyone moved to Google Workspace yet? I just got the email, and it looks like it's an extra $5 a month for unlimited with only one user. I'm guessing everyone on a business plan at the moment will have to upgrade at some stage? Or maybe we'll be grandfathered in?

Edited by tsmebro
Link to comment

Any clue what's going on here? I brought my server back up and got this error - no changes to my scripts from when it was running...

 

13.10.2020 10:53:56 INFO: Creating local folders.
13.10.2020 10:53:56 INFO: *** Starting mount of remote gdrive_media_vfs
13.10.2020 10:53:56 INFO: Checking if this script is already running.
13.10.2020 10:53:56 INFO: Script not running - proceeding.
13.10.2020 10:53:56 INFO: *** Checking if online
13.10.2020 10:53:57 PASSED: *** Internet online
13.10.2020 10:53:57 INFO: Success gdrive_media_vfs remote is already mounted.
13.10.2020 10:53:57 INFO: Mergerfs already installed, proceeding to create mergerfs mount
13.10.2020 10:53:57 INFO: Creating gdrive_media_vfs mergerfs mount.
* ERROR: unable to parse 'branches' - ignore/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs
* ERROR: mountpoint not set
13.10.2020 10:53:57 INFO: Checking if gdrive_media_vfs mergerfs mount created.
13.10.2020 10:53:57 CRITICAL: gdrive_media_vfs mergerfs mount failed. Stopping dockers.
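(The branches string begins with "ignore/", which suggests LocalFilesShare is set to 'ignore' while MergerfsMountShare is still set, so mergerfs gets an invalid branch list. A likely fix - the same change that cleared the error for KeyBoardDabbler earlier in the thread - is to point LocalFilesShare at a real path, e.g.:)

LocalFilesShare="/mnt/user/local" # location of the local files you want merged with the remote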

Link to comment

Hey guys, I'm pretty new to Unraid. I've just set up my server from scratch with rclone as the guide says, and now I can't stop my array.

Any ideas?

 

Oct 13 13:45:49 Tower emhttpd: shcmd (109): sync
Oct 13 13:45:49 Tower emhttpd: shcmd (110): umount /mnt/user0
Oct 13 13:45:49 Tower emhttpd: shcmd (111): rmdir /mnt/user0
Oct 13 13:45:49 Tower emhttpd: shcmd (112): umount /mnt/user
Oct 13 13:45:49 Tower root: umount: /mnt/user: target is busy.
Oct 13 13:45:49 Tower emhttpd: shcmd (112): exit status: 32
Oct 13 13:45:49 Tower emhttpd: shcmd (113): rmdir /mnt/user
Oct 13 13:45:49 Tower root: rmdir: failed to remove '/mnt/user': Device or resource busy
Oct 13 13:45:49 Tower emhttpd: shcmd (113): exit status: 1
Oct 13 13:45:49 Tower emhttpd: shcmd (115): /usr/local/sbin/update_cron
Oct 13 13:45:49 Tower emhttpd: Retry unmounting user share(s)...
Oct 13 13:45:54 Tower emhttpd: shcmd (116): umount /mnt/user
Oct 13 13:45:54 Tower root: umount: /mnt/user: target is busy.
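(The array can't stop while something is still holding /mnt/user open - usually the rclone/mergerfs mounts if they weren't unmounted first. A rough sketch of tracking that down and clearing it, with mount paths assumed from the guide's defaults:)

fuser -vm /mnt/user                                 # list the processes keeping /mnt/user busy
fusermount -uz /mnt/user/mount_mergerfs/gdrive_vfs  # lazily unmount the mergerfs mount (assumed path)
fusermount -uz /mnt/user/mount_rclone/gdrive_vfs    # lazily unmount the rclone mount (assumed path)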

 

Link to comment
On 10/6/2020 at 9:07 PM, DZMM said:

Looks like existing users/legacy accounts will be ok..

 

I got an email yesterday with my migration options, but it's voluntary at the moment, so I'd advise everyone hangs tight for now.

 

On Reddit some users have said the Enterprise unlimited price is $20/month, although mine was listed as "please contact sales".

Link to comment
On 10/13/2020 at 12:31 AM, tsmebro said:

Possibly off topic, but has anyone moved to Google Workspace yet? I just got the email, and it looks like it's an extra $5 a month for unlimited with only one user. I'm guessing everyone on a business plan at the moment will have to upgrade at some stage? Or maybe we'll be grandfathered in?

Can you post a copy of your email please, if you can, as mine didn't have that detail?

Link to comment
