DZMM

Guide: How To Use Rclone To Mount Cloud Drives And Play Files


There have been a number of scattered discussions around the forum on how to use rclone to mount cloud media and play it locally via Plex, Emby etc.  After discussions with @Kaizac, @slimshizn and a few others, we thought it’d be useful to start a thread where we can all share and improve our setups.

 

Why do this? Well, if set up correctly, Plex can play cloud files regardless of size e.g. I play 4K media with no issues, with start times of under 5 seconds i.e. comparable to spinning up a local disk.  With unlimited cloud space available for the cost of a domain name and around $10/pm, this becomes a very interesting proposition as it reduces local storage requirements, noise etc.

 

At the moment I have about 80% of my library in the cloud and I struggle to tell if a file is local or in the cloud when playback starts.

 

To kick the thread off, I’ll share my current setup using gdrive.  I’ll try and keep this first post updated.

 

Update: I've moved my scripts to github to make it easier to keep them updated https://github.com/BinsonBuzz/unraid_rclone_mount

 

Changelog

 

  • 6/11/18 – Initial setup (updated to include rclone rc refresh)
  • 7/11/18 - updated mount script to fix rc issues
  • 10/11/18 - added creation of extra user directories ( /mnt/user/appdata/other/rclone & /mnt/user/rclone_upload/google_vfs) to mount script.  Also fixed typo for filepath
  • 11/11/18 - latest scripts added to https://github.com/BinsonBuzz/unraid_rclone_mount for easier editing
  • 15/12/18 - added teamdrive support to allow faster upload speeds

 

My Setup

 

Plugins needed:

  • Rclone beta – installs rclone and allows the creation of remotes and mounts
  • User Scripts – controls how mounts get created

 

Optional Plugins:

  • Nerd Tools - used to install unionfs, which allows a 2nd mount to be created that merges the rclone mount with local files e.g. new TV episodes that haven’t been uploaded yet, so that dockers like Sonarr, Radarr etc can see that you’ve already got the files and don’t try to add them to your library again.  In the future hopefully this will be replaced with rclone’s new union remote, allowing for an all-in-one solution

 

  1. Rclone remote setup

 

Install the rclone beta plugin and then, via the command line, run rclone config to create 2 remotes:

 

  • gdrive: - a drive remote that connects to your gdrive account.  I recommend creating your own client_id
  • gdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:

 

Optional: to allow an extra 750GB/day of upload, create 2 additional remotes to support a Team Drive:

 

  • tdrive: - a teamdrive remote.  Note: you need to create the token with a different gmail/google account from the one above (this second account creates and shares the Team Drive) - any google account will do.  I recommend creating a 2nd client_id using this account
  • tdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to the team drive:
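
Once the remotes exist, you can sanity-check them from the command line before mounting anything (remote names as above):

rclone lsd gdrive:            # lists the top-level folders on the unencrypted remote
rclone lsd tdrive:            # same for the teamdrive
rclone lsd gdrive_media_vfs:  # decrypted view of gdrive:crypt, once something has been uploaded
rclone about gdrive:          # shows how much space you've used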

 

I use an rclone vfs mount as opposed to an rclone cache mount as it is optimised for streaming, has faster media start times, and limits API calls to google to avoid bans.

 

Once done, your rclone config should look something like this:

[gdrive]
type = drive
client_id = ID1.apps.googleusercontent.com
client_secret = secret1
scope = drive
root_folder_id = 
service_account_file = 
token = {"access_token":"token1"}

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[tdrive]
type = drive
scope = drive
team_drive = xxxxxxxxxxxx
token = {"access_token":"token2"}
client_id = ID2
client_secret = secret2

[tdrive_media_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

 

2. Create mountcheck files

 

This blank file is used in the following scripts to verify if the mounts have been created properly.  Run these commands:

 

touch mountcheck
rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse
rclone copy mountcheck tdrive_media_vfs: -vv --no-traverse

 

3. Mount script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

Create a new script in user scripts to create the rclone mount, create the unionfs mount and start the dockers that need the mounts.  I run this script on a 10-minute cron schedule (*/10 * * * *) so that it automatically remounts if there’s a problem.

 

The script:

  • Checks if an instance is already running
  • Update: Mounts rclone gdrive and tdrive remotes
  • Update: Mounts unionfs creating a 3-way union between rclone gdrive remote, tdrive remote and local files stored in /mnt/user/rclone_upload
  • Starts dockers that need the unionfs mount e.g. radarr
  • New: uses rclone rc to populate the directory cache

 

I've tried to annotate the script to make editing easy.  Once the script has run you should have a new folder created at /mnt/user/mount_unionfs.  Inside this folder create your media folders i.e. /mnt/user/mount_unionfs/google_vfs/movies and /mnt/user/mount_unionfs/google_vfs/tv_shows.  These are the folders to add to Plex, Radarr, Sonarr etc.
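
For orientation, here's a stripped-down sketch of what such a mount script can look like (gdrive only - the full script on GitHub also mounts the teamdrive and does proper error handling; the check-file name and docker names below are just placeholders):

#!/bin/bash
# sketch only - see the GitHub repo for the full, maintained script

mkdir -p /mnt/user/appdata/other/rclone /mnt/user/rclone_upload/google_vfs \
         /mnt/user/mount_rclone/google_vfs /mnt/user/mount_unionfs/google_vfs

# 1. bail out if a previous run is still going (check-file name is a placeholder)
if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "mount script already running" && exit
fi
touch /mnt/user/appdata/other/rclone/rclone_mount_running

# 2. mount the crypt remote if the mountcheck file isn't visible yet
if [[ ! -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    rclone mount --rc --allow-other --buffer-size 1G --dir-cache-time 72h \
        --drive-chunk-size 32M --fast-list --log-level INFO \
        --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
        gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
    sleep 10
fi

# 3. merge local uploads (RW) with the cloud mount (RO) into the folder the dockers use
if [[ ! -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    unionfs -o cow,allow_other \
        /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
        /mnt/user/mount_unionfs/google_vfs
fi

# 4. pre-populate the directory cache and start the dockers that need the mount
rclone rc --timeout=1h vfs/refresh recursive=true
docker start plex radarr sonarr

rm /mnt/user/appdata/other/rclone/rclone_mount_running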

 

How it works: new files are written to the local RW part of the union mount (/mnt/user/rclone_upload), but the dockers can't tell whether a file is local or in the cloud because they only look at /mnt/user/mount_unionfs.

 

A later script moves files from /mnt/user/rclone_upload to the cloud; to dockers the files are still in /mnt/user/mount_unionfs, so nothing has changed for them.

 

Update: delete the teamdrive section if you don't need it

 

4. Rclone upload script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

I run this every hour to move files from my local drive /mnt/user/rclone_upload to the cloud.  I have set --bwlimit to 9500K as I find that, even though this theoretically means I could transfer more than google's 750GB/day limit, lower limits don't actually get me up to that mark.  Experiment with your setup if you've got enough upstream to upload 750GB/day.

 

I've also added --min-age 30m to again stop any premature uploads.

 

The script includes some exclusions to stop partial files etc getting uploaded.
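
At its core this is a single rclone move; a sketch along these lines (the exclusions, transfer/checker counts and chunk size here are illustrative - check the GitHub version for the exact flags):

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv \
    --drive-chunk-size 512M --checkers 3 --transfers 3 --fast-list \
    --exclude ".unionfs/**" --exclude "*fuse_hidden*" --exclude "*_HIDDEN" \
    --exclude ".recycle**" --exclude "*.backup~*" --exclude "*.partial~*" \
    --bwlimit 9500k --min-age 30m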

 

Update: I have a 2nd upload script on github to move files to the teamdrive for an extra 750GB/day of quota.  My script below only runs against user0 as I use my second script to upload from my cache.  If you're only using one script, just change user0 to user.

 

Optional:  I 'cycle' through my drives one at a time to stop multiple drives spinning up at the same time, and I also check my cache drive often, to try and stop files that are going to be uploaded anyway from being moved to the array first.

 

5. Unionfs cleanup script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

The 'problem' with unionfs is that when it needs to delete a file from the cloud e.g. you have a better quality version of a file, it doesn't actually delete it - it 'hides' it from the mount so it appears deleted, but the file still exists.  So if, in the future, you create a new mount or access the cloud drive by other means, the files will still be there, potentially creating a very messy library.

 

This script cleans up the cloud files and actually deletes them - I run this a few times a day.
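
A sketch of the idea, assuming the RW branch from the mount script sketch above (unionfs records a deletion as a *_HIDDEN~ marker under a hidden .unionfs folder on the RW branch):

#!/bin/bash
# for every unionfs 'hide' marker, delete the real file from the rclone mount (i.e. the cloud)
find /mnt/user/rclone_upload/google_vfs/.unionfs -name '*_HIDDEN~' | while read -r line; do
    oldPath=${line#/mnt/user/rclone_upload/google_vfs/.unionfs}
    newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
    rm "$newPath"    # remove the file from the cloud via the rclone mount
    rm "$line"       # remove the hide marker itself
done
find /mnt/user/rclone_upload/google_vfs/.unionfs -mindepth 1 -type d -empty -delete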

 

Update: The script now cleans the teamdrive.  Delete the two lines with newPath2 if you don't use a teamdrive

6. Unmount script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

I use this at array start to make sure all the 'check' files have been removed properly in case of an unclean shutdown, to ensure the next mount goes smoothly.  
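
A sketch of the idea - a couple of lazy fusermount calls plus removal of the 'check' file (teamdrive line omitted here; the check-file name is a placeholder matching the mount script sketch above):

#!/bin/bash
fusermount -uz /mnt/user/mount_unionfs/google_vfs    # unmount the union first
fusermount -uz /mnt/user/mount_rclone/google_vfs     # then the rclone mount
rm -f /mnt/user/appdata/other/rclone/rclone_mount_running   # clear the 'running' check file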

 

Update: delete the teamdrive fusermount line if you don't need it

 

In the next post I'll explain my rclone mount command in a bit more detail, to hopefully get the discussion going!

 

 

Edited by DZMM
added support for teamdrive

Key elements of my rclone mount script:

rclone mount --rc --rc-addr=172.30.12.2:5572 --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs

  • --buffer-size: determines the amount of memory that will be used to buffer data in advance.  I think this is per stream

 

  • --dir-cache-time: sets how long a directory should be considered up to date and not refreshed from the backend.  Changes made through the mount appear immediately or invalidate the cache, so if you only upload via rclone you can set this to a very high number.  If you make changes directly on the remote they won't be picked up until the cache expires

 

  • --drive-chunk-size: for files uploaded via the mount.  I rarely do this, but I think I should set this higher for my 300/300 connection

 

  • --fast-list: Improves speed but only in tandem with rclone rc --timeout=1h vfs/refresh recursive=true

 

  • --vfs-read-chunk-size: this is the key variable.  This controls how much data is requested in the first chunk of playback - too big and your start times will be too slow, too small and you might get stuttering at the start of playback.  128M seems to work for most but try 64M and 32M

 

  • --vfs-read-chunk-size-limit: each successive vfs-read-chunk-size doubles in size until this limit is hit e.g. for me 128M, 256M, 512M, 1G etc.  I've set the limit to off so there's no cap on how much is requested

 

Read more on vfs-read-chunk-size: https://forum.rclone.org/t/new-feature-vfs-read-chunk-size/5683

Edited by DZMM

Unionfs works 'ok' but it's a bit clunky, as per the scripts above.  The rclone team are working on their own union remote which will hopefully include hardlink support, unlike unionfs.  It may also remove the need for a separate rclone move script by automating transfers from the local drive to the cloud.

 

https://forum.rclone.org/t/advantage-of-new-union-remote/7049/1

Edited by DZMM

9 hours ago, DZMM said:

--rc --rc-addr=172.30.12.2:5572

I see you added this along with fast list. What is the IP? Is that Plex? Going to try it out myself.

Edit: Found your reasoning in the rclone forums.

Edited by slimshizn

41 minutes ago, slimshizn said:

I see you added this along with fast list. What is the IP? Is that Plex? Going to try it out myself.

Edit: Found your reasoning in the rclone forums.

It's my unRAID IP address


Have you tried it without the address bit and just --rc?  My firewall setup is a bit complicated, so I'm not sure if other users need to add the address


Seems it worked, here's what I got.
 

Quote

2018/11/06 17:56:55 Failed to rc: connection failed: Post http://localhost:5572/vfs/refresh: dial tcp 127.0.0.1:5572: connect: connection refused
2018/11/06 17:56:55 NOTICE: Serving remote control on http://127.0.0.1:5572/

Any need to be able to access that?

Edit: I use the cloud storage purely for backup of everything, and have a copy I just tested both locally and on the cloud. Zero difference.

Edited by slimshizn

15 minutes ago, slimshizn said:

Seems it worked, here's what I got.
 

Any need to be able to access that?
 

yep it's working.  Eventually it will say:

 

{
	"result": {
		"": "OK"
	}
}

What it's doing is loading/pre-populating your local directory cache with all your cloud library folders i.e. you'll get a better browsing experience once it's finished e.g. plex will do library scans faster

 

I haven't used rclone rc yet or looked into it - I think it allows commands that can't be done via command line.

Edited by DZMM


It never ended up saying result OK, but it seems to be working fine and viewing the share seems quicker than usual which is nice.

7 hours ago, slimshizn said:

It never ended up saying result OK, but it seems to be working fine and viewing the share seems quicker than usual which is nice.

I just updated the mount script - give it another whirl as it ran quite fast for me.  Try a full Plex scan and you'll see the speed difference

Edited by DZMM


Unbelievably cool stuff. If it works, we don't even really need any large local storage anymore.

 

What provider do you use? If I look at gdrive it says 45€ (3 user minimum) for unlimited storage...

40 minutes ago, nuhll said:

Unbelievably cool stuff. If it works, we don't even really need any large local storage anymore.

 

What provider do you use? If I look at gdrive it says 45€ (3 user minimum) for unlimited storage...

sorry, it's $10/pm full price - ignore the 3/5 user min for unlimited as they don't enforce it.  I have one account:

root@Highlander:~# rclone about gdrive:
Used:    79.437T
Trashed: 1.158T
Other:   2.756G

There are usually 20% etc coupons for the first year if you shop around.

 

13 minutes ago, DZMM said:

sorry, it's $10/pm full price - ignore the 3/5 user min for unlimited as they don't enforce it.  I have one account:


root@Highlander:~# rclone about gdrive:
Used:    79.437T
Trashed: 1.158T
Other:   2.756G

There are usually 20% etc coupons for the first year if you shop around.

 

WTF. That's crazy. But what happens when they enforce it sometime... :/

 

So I could get unlimited for 15€/month at the moment.

 

How come you only pay $10? For 10€ it only shows me 3 TB max.


Edited by nuhll

11 minutes ago, nuhll said:

WTF. That's crazy. But what happens when they enforce it sometime... :/ So I could get unlimited for 15€/month at the moment. How come you only pay $10? For 10€ it only shows me 3 TB max.

Not sure where you're based but this is the UK price - and I got 20% off for the first 12 months:

 

https://gsuite.google.co.uk/intl/en_uk/pricing.html

 

[Screenshot: G Suite pricing plans]

 

A lot of people have been running like this for a while, so enforcement of the limit doesn't seem an immediate threat.  If they do start enforcing the 5 user minimum, I guess users could combine accounts in blocks of five and pay for one seat each - people do this already just in case.

Edited by DZMM


One potential drawback is that for each stream you have to be able to support the bitrate e.g. 20Mbps, so if you don't have decent bandwidth this isn't a goer - although if you don't, it wouldn't be anyway, as you need to be able to upload all your content!

 

I have 300/300 which is good enough for about 4-5x 4K or about 20-30x 1080P streams, although my usage is nowhere near this.


I've got around 50 or 16Mbit/s depending on which line it goes over... does it support multithreading? Anyway, Plex should regulate quality depending on line speed, correct?

Edited by nuhll

1 hour ago, nuhll said:

How much RAM does rclone use for you?

 

Up to 1GB per stream:

 

--buffer-size 1G

Lower if you don't have the RAM.  Some users use as little as 100MB.

 

59 minutes ago, nuhll said:

I've got around 50 or 16Mbit/s depending on which line it goes over... does it support multithreading? Anyway, Plex should regulate quality depending on line speed, correct?

Is that up or down?  Because you need to transfer the whole file from gdrive to your local Plex over the duration of the show or movie (playback starts streaming straightaway), you need to be able to sustain the average bitrate.  In its simplest form, a 60min 10GB file has an average bitrate of around 22Mbps (10GB ≈ 80,000 megabits; 80,000 ÷ 3,600 seconds ≈ 22Mbps), so that's how much bandwidth you need on average to play it (the film won't be a constant 22Mbps - some parts will be higher and lower).  With a fast connection, rclone will grab it quicker depending on your chunk settings - so you'll see high usage for a few minutes then bursty traffic afterwards.

 

Remote access works the same way from your plex server to wherever - after you've got the file from gdrive.

 

If you don't have enough bandwidth downstream, some users have paid for cheap dedicated servers/VPS with big pipes to host Plex there so they can support lots of users without hitting their local connection.  I think @slimshizn does this


That's download. But I'm getting an upgrade within 1-2 months at the latest - probably at least 100Mbit.

Upload is slow though, only 10Mbit x 2.

 

Let's say I download a movie: while it's uploading to gdrive, it's still accessible locally and only gets deleted when the upload is finished?

 

When I start a movie, I can watch it before it's completely downloaded, correct?

 

I only need to support 1-2 users max... ^^

Edited by nuhll


The only thing I'm missing is remote encryption of the files - that would be a huge thing.

36 minutes ago, nuhll said:

Let's say I download a movie: while it's uploading to gdrive, it's still accessible locally and only gets deleted when the upload is finished?

yes

36 minutes ago, nuhll said:

Upload is slow though, only 10Mbit x 2.

you're no worse off with this setup than before for Plex remote access.  Uploading to gdrive will be slow, but the files stay local until the upload is complete

 

37 minutes ago, nuhll said:

When I start a movie, I can watch it before it's completely downloaded, correct?

Yes, it streams the movie while rclone downloads it in chunks in the background.  With my

--vfs-read-chunk-size 128M

It downloads a 128M chunk first and then starts playing - that's how it starts in seconds.  Then it keeps doubling the size of the next chunk it requests in the background - 256M, 512M, 1G etc.

 

16 minutes ago, nuhll said:

The only thing I'm missing is remote encryption of the files - that would be a huge thing.

It is encrypted - you are mounting the remote gdrive_media_vfs, which encrypts the files when they are actually stored on gdrive.  When you set up the gdrive_media_vfs remote, choose crypt.

 

A good how-to here: https://hoarding.me/rclone/.  Where it mentions two encrypted remotes, I just use one gdrive_media_vfs and then create sub-folders inside the mount for my different types of media:

Quote

 

We’re going to encrypt everything before we upload it so this adds another layer to the process. How this works is you create remotes to your cloud storage, then we create an encrypted remote on top of the normal remote. These encrypted remotes, one for TV and one for movies are the ones we’ll be using for uploading. We’ll then be creating two more remotes afterwards to decrypt the plexdrive mounts. So 5 in total.

 

To run it, rclone config. Select N for a new remote, and just name it ‘gd’ then select 7 for GD. This is the underlying remote we’ll use for our crypts. Follow this link to create a client ID and secret, and use them for the next two prompts in the rclone config. After this, select N, and then copy the link provided and use it in your browser. Verify your google account and paste the code returned, then Y for ‘yes this is ok’ and you have your first remote!

 

Next we’re going to setup two encrypted remotes. Login to GD and create two folders, tv-gd and m-gd.

 

Run rclone config again, N for new remote, then set the name as tv-gd, and 5 for a crypt. Next enter gd:/tv-gd, and 2 for standard filenames. Create or generate password and an optional salt, make sure you keep these somewhere safe, as they’re required to access the decrypted data. Select Y for ‘yes this is ok’. Then you can do the same for the second one, using the name m-gd, and the remote gd:/m-gd. There’s our two encrypted remotes setup

 

 

 

 

26 minutes ago, DZMM said:

yes

you're no worse off with this setup than before for Plex remote access.  Uploading to gdrive will be slow, but the files stay local until the upload is complete

 

Yes, it streams the movie while rclone downloads it in chunks in the background.  With my


--vfs-read-chunk-size 128M

It downloads a 128M chunk first and then starts playing - that's how it starts in seconds.  Then it keeps doubling the size of the next chunk it requests in the background - 256M, 512M, 1G etc.

 

It is encrypted - you are mounting the remote gdrive_media_vfs, which encrypts the files when they are actually stored on gdrive.  When you set up the gdrive_media_vfs remote, choose crypt.

 

A good how-to here: https://hoarding.me/rclone/.  Where it mentions two encrypted remotes, I just use one gdrive_media_vfs and then create sub-folders inside the mount for my different types of media:

 

 

 

So you only use one "remote" - good idea, I guess.

 

That's really awesome; if I find some time, I'll implement it.


I guess since I have slower internet I'll lower the chunk size so it starts faster; the only drawback would be reduced speed (but I have slower internet anyway, and maybe more CPU usage, which should be no problem for 1 or max 2 users).

 

Also, I would change your script so it only uploads files older than e.g. 1 year, so I don't waste time uploading "bad movies".

 

I wonder if it would be possible to only upload files to gdrive when 2 (local) IPs are not reachable, so it doesn't interfere with other network activity.

Edited by nuhll

7 minutes ago, nuhll said:

So you only use one "remote" - good idea, I guess.

 

That's really awesome; if I find some time, I'll implement it.


I guess since I have slower internet I'll lower the chunk size so it starts faster; the only drawback would be reduced speed (but I have slower internet anyway, and maybe more CPU usage, which should be no problem for 1 or max 2 users).

Experiment - too low and you'll get buffering/stuttering at the start.  If 128M is too big (start times too slow), try 64M and maybe then even 32M.  I don't think you'll need to go lower than 64M.

 

It's light on CPU as it's hardly any different to playing a file off your local drive

Edited by DZMM
