Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


3/1/20 UPDATE: TO MIGRATE FROM UNIONFS TO MERGERFS READ THIS POST.  New users continue reading

13/3/20 Update: For a clean version of the 'How To' please use the github site https://github.com/BinsonBuzz/unraid_rclone_mount

 

17/11/21 Update: Poll to see how much people are storing

 

I've added a Paypal.me upon request if anyone wants to buy me a beer.

 

There have been a number of scattered discussions around the forum on how to use rclone to mount cloud media and play it locally via Plex, Emby etc.  After discussions with @Kaizac, @slimshizn and a few others, we thought it’d be useful to start a thread where we can all share and improve our setups.

 

Why do this? Well, if set up correctly Plex can play cloud files regardless of size e.g. I play 4K media with no issues, with start times of under 5 seconds i.e. comparable to spinning up a local disk.  With access to unlimited cloud space available for the cost of a domain name and around $10/pm, this becomes a very interesting proposition as it reduces local storage requirements, noise etc.

 

At the moment I have about 80% of my library in the cloud and I struggle to tell if a file is local or in the cloud when playback starts.

 

To kick the thread off, I’ll share my current setup using gdrive.  I’ll try and keep this initial post updated.

 

Update: I've moved my scripts to github to make it easier to keep them updated https://github.com/BinsonBuzz/unraid_rclone_mount

 

Changelog

 

  • 6/11/18 – Initial setup (updated to include rclone rc refresh)
  • 7/11/18 - updated mount script to fix rc issues
  • 10/11/18 - added creation of extra user directories (/mnt/user/appdata/other/rclone & /mnt/user/rclone_upload/google_vfs) to mount script.  Also fixed a filepath typo
  • 11/11/18 - latest scripts added to https://github.com/BinsonBuzz/unraid_rclone_mount for easier editing
  • 3/1/20 - switched from unionfs to mergerfs
  • 4/2/20 - updated the scripts to make them easier to use and control.  Thanks to @senpaibox for the inspiration

 

My Setup

 

Plugins needed

  • Rclone – installs rclone and allows the creation of remotes and mounts.  The new scripts require rclone v1.51+
  • User Scripts – controls how mounts get created

 

How It Works

  1. Rclone is used to access files on your Google Drive and to mount them in a folder on your server e.g. mount a gdrive remote called gdrive_vfs: at /mnt/user/mount_rclone/gdrive_vfs
  2. Mergerfs is used to merge files from your rclone mount (/mnt/user/mount_rclone/gdrive_vfs) with local files that exist on your server and haven't been uploaded yet (e.g. /mnt/user/local/gdrive_vfs) in a new mount /mnt/user/mount_unionfs/gdrive_vfs
  3. This mergerfs mount allows files to be played by dockers such as Plex, or added to by dockers like radarr etc without the dockers even being aware that some files are local and some are remote. It just doesn't matter
  4. The use of a rclone vfs remote allows fast playback, with files streaming within seconds
  5. New files added to the mergerfs share are actually written to the local share, where they will stay until the upload script processes them
  6. An upload script is used to upload files in the background from the local folder to the remote. This activity is masked by mergerfs i.e. to plex, radarr etc files haven't 'moved'
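
Putting the three paths side by side (using the example names above):

/mnt/user/local/gdrive_vfs           <- new files land here until the upload script moves them
/mnt/user/mount_rclone/gdrive_vfs    <- the rclone mount of your cloud remote
/mnt/user/mount_unionfs/gdrive_vfs   <- the mergerfs merge of the two; point your dockers here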

 

Getting Started

 

Install the rclone plugin and, via the command line, run rclone config and create 2 remotes:

  • gdrive: - a drive remote that connects to your gdrive account. Recommend creating your own client_id
  • gdrive_media_vfs: - a crypt remote that is mounted locally and decrypts the encrypted files uploaded to gdrive:

 

 

It is advisable to create your own client_id to avoid API bans.
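
For illustration only, the finished config (viewable with rclone config show) ends up looking roughly like this - the client_id/secret, token and passwords are placeholders for the values you generate, and the crypt folder name (crypt) is just an example:

[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
token = {"access_token":"...","token_type":"Bearer","expiry":"..."}

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***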

 

Mount Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

  • Create a new script using the user scripts plugin and paste in the rclone_mount script
  • Edit the config lines at the start of the script to choose your remote name, paths etc
  • Choose a suitable cron job. I run this script every 10 minutes (*/10 * * * *) so that it automatically remounts if there’s a problem.

 

The script:

  • Checks if an instance is already running and, if a cron job is set, automatically remounts if the mount drops
  • Mounts your rclone gdrive remote
  • Installs mergerfs and creates a mergerfs mount
  • Starts dockers that need the mergerfs mount e.g. plex, radarr
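
In outline, the guts of the script look something like this stripped-down sketch - it uses the gdrive_vfs names from the How It Works section, and the lock-file name, mergerfs options and docker names are illustrative; use the full github script for real:

#!/bin/bash
# stripped-down sketch only - the github script adds proper checks and config options

# bail out if a previous run is still going
if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
	echo "mount script already running" && exit
fi
touch /mnt/user/appdata/other/rclone/rclone_mount_running

mkdir -p /mnt/user/local/gdrive_vfs /mnt/user/mount_rclone/gdrive_vfs /mnt/user/mount_unionfs/gdrive_vfs

# mount the rclone remote if it isn't already up
if ! mountpoint -q /mnt/user/mount_rclone/gdrive_vfs; then
	rclone mount --allow-other --dir-cache-time 720h \
		--vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
		--vfs-cache-mode writes \
		gdrive_vfs: /mnt/user/mount_rclone/gdrive_vfs &
	sleep 5
fi

# merge local + cloud into the share the dockers use
if ! mountpoint -q /mnt/user/mount_unionfs/gdrive_vfs; then
	mergerfs /mnt/user/local/gdrive_vfs:/mnt/user/mount_rclone/gdrive_vfs \
		/mnt/user/mount_unionfs/gdrive_vfs \
		-o rw,use_ino,func.getattr=newest,category.create=ff,dropcacheonclose=true
fi

# start the dockers that need the merged mount
docker start plex radarr

rm /mnt/user/appdata/other/rclone/rclone_mount_running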

 

Upload Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

  • Create a new script using the user scripts plugin and paste in the rclone_upload script
  • Edit the config lines at the start of the script to choose your remote name, paths etc - USE THE SAME PATHS AS THE MOUNT SCRIPT
  • Choose a suitable cron job e.g. hourly

 

Features:

  • Checks if rclone is installed correctly
  • Sets bandwidth limits (--bwlimit)
  • Google caps uploads at 750GB/day. I have added bandwidth scheduling to the script so you can e.g. set an overnight job to upload the daily quota at 30MB/s, have it trickle up over the day at a constant 10MB/s, or set variable speeds over the day
  • The script now stops once the 750GB/day limit is hit (rclone v1.51+ required) so there is more flexibility over upload strategies
  • I've also added --min-age 10m to stop any premature uploads, and exclusions to stop partial files etc getting uploaded.
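
Stripped of those checks, the upload boils down to a single rclone move - something like this sketch, again using the gdrive_vfs names from above (the bwlimit timetable and exclusions are examples to adapt; --drive-stop-on-upload-limit is what stops it at the 750GB cap):

# sketch: move local files up to the remote, obeying the schedule and daily quota
rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: \
	--min-age 10m \
	--bwlimit "23:00,30M 07:00,10M" \
	--exclude "*.partial~" --exclude "*_HIDDEN*" \
	--drive-stop-on-upload-limit \
	--delete-empty-src-dirs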

 

Cleanup Script - see https://github.com/BinsonBuzz/unraid_rclone_mount for latest script

 

  • Create a new script using the user scripts plugin and set it to run at array start (recommended) or array stop
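
All the cleanup script really does is delete leftover lock/check files so the mount script can start fresh after an unclean shutdown - roughly this, assuming the lock-file names from the sketches above:

#!/bin/bash
# remove stale lock files left by an unclean shutdown
rm -f /mnt/user/appdata/other/rclone/rclone_mount_running
rm -f /mnt/user/appdata/other/rclone/rclone_upload_running
# lazily unmount anything left hanging (errors ignored if nothing is mounted)
fusermount -uz /mnt/user/mount_rclone/gdrive_vfs 2>/dev/null
fusermount -uz /mnt/user/mount_unionfs/gdrive_vfs 2>/dev/null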

 

In the next post I'll explain my rclone mount command in a bit more detail, to hopefully get the discussion going!


Key elements of my rclone mount command:

rclone mount \
		--allow-other \
		--buffer-size 256M \
		--dir-cache-time 720h \
		--drive-chunk-size 512M \
		--log-level INFO \
		--vfs-read-chunk-size 128M \
		--vfs-read-chunk-size-limit off \
		--vfs-cache-mode writes \
		--bind=$RCloneMountIP $RcloneRemoteName: \
		$RcloneMountLocation &

 

  • --buffer-size: determines the amount of memory that will be used to buffer data in advance.  I think this is per stream

 

  • --dir-cache-time: sets how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache, so if you upload via rclone you can set this to a very high number. If you make changes direct to the remote they won't be picked up until the cache expires

 

  • --drive-chunk-size: for files uploaded via the mount NOT via the upload script i.e. if you add files directly to /mnt/user/mount_rclone/yourremote.  I rarely do this and it's not a great idea

 

  • --vfs-read-chunk-size: this is the key variable.  This controls how much data is requested in the first chunk of playback - too big and your start times will be too slow, too small and you might get stuttering at the start of playback.  128M seems to work for most but try 64M and 32M

 

  • --vfs-read-chunk-size-limit: each successive vfs-read-chunk-size doubles in size until this limit is hit e.g. for me 128M, 256M, 512M, 1G etc.  I've set the limit to off so there's no cap on how much is requested, and rclone downloads the biggest chunks it can for my connection

 

Read more on vfs-read-chunk-size: https://forum.rclone.org/t/new-feature-vfs-read-chunk-size/5683


Unionfs works 'ok' but it's a bit clunky, as the scripts above show.  The rclone team is working on its own union remote, which will hopefully include hardlink support, unlike unionfs.  It may also remove the need for a separate rclone move script by automating transfers from the local drive to the cloud

 

https://forum.rclone.org/t/advantage-of-new-union-remote/7049/1


Seems it worked, here's what I got.
 

Quote

2018/11/06 17:56:55 Failed to rc: connection failed: Post http://localhost:5572/vfs/refresh: dial tcp 127.0.0.1:5572: connect: connection refused
2018/11/06 17:56:55 NOTICE: Serving remote control on http://127.0.0.1:5572/

Any need to be able to access that?

Edit: I use the cloud storage purely for backup of everything, and have a copy I just tested both locally and on the cloud. Zero difference.

15 minutes ago, slimshizn said:

Seems it worked, here's what I got.
 

Any need to be able to access that?
 

yep it's working.  Eventually it will say:

 

{
	"result": {
		"": "OK"
	}
}

What it's doing is loading/pre-populating your local directory cache with all your cloud library folders, i.e. you'll get a better browsing experience once it's finished, e.g. Plex will do library scans faster

 

I haven't used rclone rc yet or looked into it - I think it allows commands that can't be done via command line.
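
For reference, the pre-population described above goes over rclone's remote control API - a minimal example, assuming the mount was started with rc enabled (--rc):

rclone rc vfs/refresh recursive=true _async=true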

7 hours ago, slimshizn said:

It never ended up saying result OK, but it seems to be working fine and viewing the share seems quicker than usual which is nice.

I just updated the mount script - give it another whirl as it ran quite fast for me.  Try a full Plex scan and you'll see the speed difference

40 minutes ago, nuhll said:

Unbelievably cool stuff. If this works, we don't even really need any large local storage anymore.

 

What provider do you use? If I look at gdrive it says €45 (3 user min) for unlimited storage...

sorry, it's $10/pm full price - ignore the 3/5 user min for unlimited as they don't enforce it.  I have one account:

root@Highlander:~# rclone about gdrive:
Used:    79.437T
Trashed: 1.158T
Other:   2.756G

There are usually 20% etc coupons for the first year if you shop around.

 

13 minutes ago, DZMM said:

sorry, it's $10/pm full price - ignore the 3/5 user min for unlimited as they don't enforce it.  I have one account:



 

WTF. That's crazy. But what happens when they enforce it sometime... :/

 

So I could get unlimited for €15/month atm.

 

How come you only pay $10? For €10 it shows me a 3TB max.


11 minutes ago, nuhll said:

WTF. That's crazy. But what happens when they enforce it sometime... :/  So I could get unlimited for €15/month atm. How come you only pay $10? For €10 it shows me a 3TB max.

Not sure where you're based but this is the UK price - and I got 20% off for the first 12 months:

 

https://gsuite.google.co.uk/intl/en_uk/pricing.html

 

[Screenshot: G Suite pricing plans]

 

A lot of people have been running like this for a while, so limit enforcement doesn't seem an immediate threat.  If they do start enforcing the 5 user min, I guess users could combine accounts in blocks of five and pay for one seat each - people do this already just in case.


One potential drawback is that for each stream you have to be able to support the bitrate, e.g. 20Mbps, so if you don't have decent bandwidth this isn't a goer - although if you don't, it wouldn't be anyway, as you need to be able to upload all your content!

 

I have 300/300 which is good enough for about 4-5x 4K or about 20-30x 1080P streams, although my usage is nowhere near this.

1 hour ago, nuhll said:

How much RAM does rclone use for you?

 

Up to 1GB per stream:

 

--buffer-size 1G

Lower if you don't have the RAM.  Some users use as little as 100MB.

 

59 minutes ago, nuhll said:

I've got around 50 or 16Mbit/s depending on which line it goes over... does it support multithreading? Anyway, Plex should regulate quality depending on line speed, correct?

Is that up or down?  Because you have to transfer the whole file from gdrive to your local Plex over the duration of the show or movie (playback starts streaming straight away), you need to be able to sustain the average bitrate - in its simplest form, a 60min 10GB file has an average bitrate of ~22Mbps, so you need that much bandwidth on average to play it (the film won't be a constant 22Mbps - some parts will be higher and some lower).  With a fast connection, rclone will grab it quicker depending on your chunk settings - so you'll see high usage for a few mins then bursty traffic afterwards.

 

Remote access works the same way from your plex server to wherever - after you've got the file from gdrive.

 

If you don't have enough bandwidth downstream, some users have paid for cheap dedicated servers/VPS with big pipes to host Plex there so they can support lots of users without hitting their local connection.  I think @slimshizn does this


That's download. But I'm getting an upgrade soon, within 1-2 months - probably at least 100Mbit/s.

Upload is slow though, only 10Mbit/s x 2.

 

Let's say I download a movie: while it's uploading to gdrive, it's still accessible locally and only gets deleted when the upload is finished?

 

When I start a movie, I can watch it before it's completely downloaded, correct?

 

I only need to support 1-2 users max... ^^

36 minutes ago, nuhll said:

Let's say I download a movie: while it's uploading to gdrive, it's still accessible locally and only gets deleted when the upload is finished?

yes

36 minutes ago, nuhll said:

Upload is slow though, only 10Mbit/s x 2.

you're no worse off with this setup than before for Plex remote access.  Uploading to gdrive will be slow, but the files stay local until the upload completes

 

37 minutes ago, nuhll said:

When I start a movie, I can watch it before it's completely downloaded, correct?

yes, it streams the movie while rclone downloads it in chunks in the background.  With my

--vfs-read-chunk-size 128M

It downloads a 128M chunk first then starts playing - that's how it starts in seconds. Then it keeps doubling the size of the next chunk it requests in the background - 256M, 512M, 1G etc.

 

16 minutes ago, nuhll said:

The only thing I'm missing is remote encryption of the files - that would be a huge thing.

It is encrypted - you are mounting the remote gdrive_media_vfs, which encrypts the files when they are actually stored on gdrive.  When you set up the gdrive_media_vfs remote, choose crypt.

 

A good how-to here: https://hoarding.me/rclone/.  Where it mentions two encrypted remotes, I just use one gdrive_media_vfs and then create sub-folders inside the mount for my different types of media:

Quote

 

We’re going to encrypt everything before we upload it so this adds another layer to the process. How this works is you create remotes to your cloud storage, then we create an encrypted remote on top of the normal remote. These encrypted remotes, one for TV and one for movies are the ones we’ll be using for uploading. We’ll then be creating two more remotes afterwards to decrypt the plexdrive mounts. So 5 in total.

 

To run it, rclone config. Select N for a new remote, and just name it ‘gd’ then select 7 for GD. This is the underlying remote we’ll use for our crypts. Follow this link to create a client ID and secret, and use them for the next two prompts in the rclone config. After this, select N, and then copy the link provided and use it in your browser. Verify your google account and paste the code returned, then Y for ‘yes this is ok’ and you have your first remote!

 

Next we’re going to setup two encrypted remotes. Login to GD and create two folders, tv-gd and m-gd.

 

Run rclone config again, N for new remote, then set the name as tv-gd, and 5 for a crypt. Next enter gd:/tv-gd, and 2 for standard filenames. Create or generate password and an optional salt, make sure you keep these somewhere safe, as they’re required to access the decrypted data. Select Y for ‘yes this is ok’. Then you can do the same for the second one, using the name m-gd, and the remote gd:/m-gd. There’s our two encrypted remotes setup

 

 

 

 

26 minutes ago, DZMM said:

It is encrypted - you are mounting the remote gdrive_media_vfs which encrypts the files when they are actually stored on gdrive. [...] Where it mentions two encrypted remotes, I just use one gdrive_media_vfs and then create sub-folders inside the mount for my different types of media

 

 

 

So you only use one "remote" - good idea, I guess.

 

That's really awesome; if I find some time, I'll implement it.

I guess since I have slower internet I'll lower the chunk size so it starts faster; the only drawback would be reduced speed (but I have slower internet anyway) and maybe more CPU usage, which should be no problem for 1 or max 2 users.

 

Also I would change your script so it only uploads files older than e.g. 1 year, so I don't waste time uploading "bad movies".

 

I wonder if it would be possible to only upload files to gdrive when 2 (local) IPs are not reachable, so it doesn't interfere with other network activity.

7 minutes ago, nuhll said:

I guess since I have slower internet I'll lower the chunk size so it starts faster [...]

Experiment - too low and you'll get buffering/stuttering at the start.  If 128M is too big, try 64M and maybe then even 32M.  I think you won't need lower than 64M.

 

It's light on CPU as it's hardly any different to playing a file off your local drive

