Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


58 minutes ago, DZMM said:

But

 

- the checker files are created and removed by a script. Just because they aren't there doesn't mean rclone isn't running

- sleeping for a combined 20s isn't going to fix hangs

 

That's partly one of the issues I was having - I'm not sure of the best way to check whether an upload is currently running, or whether the mount script is currently running (and attempting to mount a drive, i.e. rclone_mount).
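
The closest I've got is just asking the OS whether the processes are alive - rough sketch only, the script/file names here are illustrative, not taken from the actual scripts:

  # Sketch only - names are illustrative, not from the actual scripts.
  # Check for a live rclone process instead of trusting checker files:
  if pgrep -x rclone >/dev/null; then
    echo "an rclone process is running"
  fi
  # And match the upload script by name on the command line:
  if pgrep -f rclone_upload >/dev/null; then
    echo "the upload script appears to be running"
  fi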


Hi mate. I've read this post and taken a look at the GitHub scripts as well. I've used most of your settings for my rclone mount on my seedbox. I'm only doing a read-only mount, so I'm not using mergerfs etc. Do your mount settings affect the Plex library loading times when I go to the home page where there are sections like Recommended, On Deck, etc.?

 

  --use-mmap \
  --allow-non-empty \
  --dir-cache-time 168h \
  --timeout 1h \
  --umask 002 \
  --tpslimit 10 \
  --tpslimit-burst 10 \
  --poll-interval=1m \
  --vfs-cache-mode writes \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit 512M \
  --buffer-size 256M

These are my settings. I've also read on the rclone forums that the buffer size should be smaller than vfs-read-chunk-size; in your settings it's twice as large, but it seems to work for me. Is this okay? I'm using these settings to play high-bitrate files. Should I also use them for a mount with lower-bitrate files, or is that overkill?


I don't know, I just started, so that's why I'm asking people who have more experience with this.

If Google stops the unlimited service because of people encrypting, would there then be a longer grace period to get the stuff back locally, or will they just freeze people's files?

Is this a likely scenario?

Edited by Bjur
Just now, Bjur said:

I don't know, I just started, so that's why I'm asking people who have more experience with this.

If Google stops the unlimited service because of people encrypting, would there then be a longer grace period to get the stuff back locally, or will they just freeze people's files?

Is this a likely scenario?

More likely, they would enforce the 5-user requirement to actually get unlimited storage, and after that they might raise prices. In both scenarios it's a personal decision whether it's still worth it. And I think they will give a grace period if things do drastically change.

 

I'm using my drive both for work-related storage and for personal files. Don't forget there are many universities and data-driven companies that store TBs of data each day; we're pretty much a drop in the bucket for Google. Same with mobile providers: I have an unlimited plan, extra expensive, but most months I don't even use 1 GB (especially now, being constantly at home), and then on other days I rake in 30 GB because I'm streaming on holiday or working without wifi.

 

I did start with cleaning up my media, though. I was storing media I will never watch, but because it got downloaded by my automations it crept in. It creates too much of that Netflix effect: scrolling indefinitely and never watching an actual movie or show.


@Bjur

I alluded to encrypted data and the issues it creates with de-duping in an earlier post. If Google Drive abuse through unlimited encrypted storage and breaking various upload/download limits gets out of hand, Google will likely take action to prevent it, most likely by enforcing the 5-user requirement for unlimited. That won't completely prevent abuse, but it will stop a lot of people from utilizing the service.

 

In the past Google *may have* blocked clients it thought were rclone, but that is no longer an issue.

If you care about not using maximum space on Google's servers, then storing files unencrypted allows Google to de-dupe them. That being said, it's understandable why people choose to encrypt, even though a court would likely need a subpoena to look at any data - and correct me if I'm wrong, but there has never been a case of that with standard Google Drive users (i.e. not resellers of streaming services).

 

One huge plus for unencrypted data is the use of .strm links. Every video file uploaded to Google gets converted to different qualities (similar to how YouTube works). This means Google uses its massive transcoding power to convert your original 4K file into 1080p, 720p, and 480p versions. You can then use a media server such as Jellyfin to push the .strm links to shared users. The user is piped the video in whatever quality they desire, directly from Google to their computer. They don't have to connect to you as the middleman, so there is zero overhead for users streaming/transcoding from your library. This means you could theoretically run a server capable of streaming and server-side-transcoding 4K video from something like a Raspberry Pi.
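
For anyone who hasn't seen one: a .strm file is just a one-line text file pointing at the stream source, and the media server plays whatever it references. A minimal sketch - the path and URL format below are illustrative only, not the exact URL Google serves:

  # Sketch only - path and URL format are illustrative.
  # A .strm file is a one-line text file; Jellyfin plays whatever it points at.
  echo 'https://drive.google.com/uc?id=<FILE_ID>&export=download' \
    > '/mnt/user/strm/Movie (2020).strm'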

Edited by watchmeexplode5
For clarity and fixed tone
1 hour ago, watchmeexplode5 said:

If Google Drive abuse through unlimited encrypted storage and breaking various upload/download limits gets out of hand

What abuse are you referring to? I don't see where anyone is breaking any of Google's T&Cs.

 

1 hour ago, watchmeexplode5 said:

In the past they have limited rclone's abilities and could do so again in the future.

If you're referring to the user agent change, that was temporary and was reverted.

 

1 hour ago, watchmeexplode5 said:

If you care about not abusing the system, the best practice is not to encrypt common files (movies/tv).

I'll ignore the point about abuse, but again, has this best-practice advice come from Google?

 

How many people store a 'large' amount of media on Google I don't know, and until Google says it's a problem I'm not going to change my behaviour and stop using the very generous unlimited storage for my media, backups and personal files. When you consider the billions of people who use Google, I imagine this is a drop in the ocean compared to the total amount of data stored. Over time the cost/GB (and remember, Google's cost/GB isn't the same as ours) will continue to fall, and cloud storage will become the norm for everyone.

 

Can you please start a new thread if you want to continue discussing this, as this is a support thread for sharing ideas on improving Plex and rclone support.

Edited by DZMM

@DZMM

Sorry about going off topic, and I don't mean to offend anybody. By best practice I meant reducing individual users' impact on Google's end. Technically, the majority of G Suite users using unlimited space are breaking the TOS, which states that 5 paid users are required for unlimited Drive use; Google just has never enforced this portion of the TOS. Enforcing it is the most likely option they would take to curb single-user abuse. Finally, I was referring to Google blocking user agents identified as rclone. That may have been a bug on Google's end, but if I recall correctly the admin initially worried it could be a purposeful block.

 

No offense meant to anybody. If you pay for unlimited data you are completely within your rights to use it however you want.

K, no more off topic stuff for me. Just wanted to answer your questions. 


@DZMM

Question regarding rclone on full (new) library scans: do you have any experience with avoiding a 403 - downloadQuotaExceeded? I think it's about 10 TB/day per user.

I think I hit it because of multiple test instances doing initial scans of my Drive library. Would changing the chunk size reduce the odds of hitting this limit? Do file probes in the scan count the full file size against my quota, or only the data actually downloaded?

15 minutes ago, watchmeexplode5 said:

@DZMM

Question regarding rclone on full (new) library scans: do you have any experience with avoiding a 403 - downloadQuotaExceeded? I think it's about 10 TB/day per user.

I think I hit it because of multiple test instances doing initial scans of my Drive library. Would changing the chunk size reduce the odds of hitting this limit? Do file probes in the scan count the full file size against my quota, or only the data actually downloaded?

Not sure - I've done several full 'Scan Library Files' runs, including when I've changed my directory structure, and haven't had any problems.


@DZMM Sounds good - glad it's not a widespread thing, just me. I wonder if I've got something doing deep scans of full files rather than a quick touch-and-go scan.

 

If anybody has a similar issue, a band-aid fix is to share the drive with another "dummy" user and temporarily mount under that user's credentials. That will get things up and running again until the ban expires.
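
In practice that's just a second remote configured with the dummy user's token, mounted in place of the main one - a rough sketch, with the remote name and paths illustrative:

  # Sketch only - remote name and paths are illustrative.
  # 'gdrive_dummy' is a second rclone remote authorised as the dummy user
  # the drive was shared with; mount it while the main user's quota resets.
  rclone mount gdrive_dummy: /mnt/user/mount_rclone/gdrive \
    --allow-other \
    --dir-cache-time 168h \
    --vfs-read-chunk-size 128M &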


First, big thanks to @DZMM for posting this here. You could've easily kept it to yourself. Sharing is caring!

 

I've been reading bits and pieces of this topic, and admittedly have not gone through all 68 pages, so please forgive me if these have been covered. My media tool will be Emby. 

 

My primary purpose for this would be to use it as sort of a backup for my physical media in the house. Ideally though, when requesting a show or movie, priority would be given to the uploaded data instead of my local drive spinning up. So it's almost like my UnRaid array would be the backup for the cloud data. I saw one of the replies mention setting the option to copy instead of move. I'm guessing that answers part of this requirement, but is there a way to make the uploaded data preferential? Is it just a matter of NOT using MergerFS and just using the rclone vfs mount?

 

The trickier part is, I'd like to upload the data as disk shares. However, my Emby server is set to find media via the user shares (Videos, TV Series, etc). Is this too complicated a scenario? The reason for this is, if I have a local disk failure, I'd want to re-download the data that's been pushed up to Gdrive. It's a lot easier to just grab everything from Disk 3 than to rebuild from the directory structure.

 

I juggle my data around a lot between disks. Will this end up with multiple versions on GDrive, or will it just be recognized as a move?

 

Finally, @DZMM, where the hell is the donate button? People who get this thing going should send some kind of fiat or coin your way. This has tremendous implications.

 

Edit:

The readme mentions use of Docker containers... if my Emby server is on a VM that's on the same network, does that make a difference (meaning, can I access the mounts that this script uses)?

Edited by axeman
On 5/9/2020 at 3:25 AM, blizz said:

Hi mate. I've read this post and taken a look at the GitHub scripts as well. I've used most of your settings for my rclone mount on my seedbox. I'm only doing a read-only mount, so I'm not using mergerfs etc. Do your mount settings affect the Plex library loading times when I go to the home page where there are sections like Recommended, On Deck, etc.?

 


  --use-mmap \
  --allow-non-empty \
  --dir-cache-time 168h \
  --timeout 1h \
  --umask 002 \
  --tpslimit 10 \
  --tpslimit-burst 10 \
  --poll-interval=1m \
  --vfs-cache-mode writes \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit 512M \
  --buffer-size 256M

These are my settings. I've also read on the rclone forums that the buffer size should be smaller than vfs-read-chunk-size; in your settings it's twice as large, but it seems to work for me. Is this okay? I'm using these settings to play high-bitrate files. Should I also use them for a mount with lower-bitrate files, or is that overkill?

Hi mate @DZMM. Any thoughts on this? 

9 hours ago, axeman said:

I've been reading bits and pieces of this topic, and admittedly have not gone through all 68 pages, so please forgive me if these have been covered. My media tool will be Emby. 

Best place to start is the GitHub page, where the instructions are kept fairly up to date - @watchmeexplode5 has helped with that as well.

 

9 hours ago, axeman said:

My primary purpose for this would be to use it as sort of a backup for my physical media in the house. Ideally though, when requesting a show or movie, priority would be given to the uploaded data instead of my local drive spinning up.

I think you can achieve that with MergerFS. Read up on 'ff' (first found, I think) and just put your backup location first.
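
Something along these lines - sketch only, the paths and options are illustrative rather than lifted from my scripts:

  # Sketch only - paths and options are illustrative.
  # With the 'ff' (first found) search policy, mergerfs serves a file from
  # the first branch that has it, so listing the rclone mount first makes
  # the cloud copy preferential and leaves local disks spun down.
  mergerfs /mnt/user/mount_rclone/gdrive:/mnt/user/local/gdrive \
    /mnt/user/mount_mergerfs/gdrive \
    -o rw,use_ino,func.getattr=newest,category.search=ff,cache.files=partial,dropcacheonclose=true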

 

9 hours ago, axeman said:

The trickier part is, I'd like to upload the data as disk shares. However, my Emby server is set to find media via the user shares (Videos, TV Series, etc). Is this too complicated a scenario? The reason for this is, if I have a local disk failure, I'd want to re-download the data that's been pushed up to Gdrive. It's a lot easier to just grab everything from Disk 3 than to rebuild from the directory structure.

 

I juggle my data around a lot between disks. Will this end up with multiple versions on GDrive, or will it just be recognized as a move?

You can choose disk shares as your upload source. I don't think you'll get multiple copies, but I don't know if you'll waste bandwidth repeatedly uploading the same file. Why not just use a user share, the Unraid way, so it doesn't matter what disk it's on?
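
For illustration, uploading from a disk share is just a different source path - sketch only, not the actual upload script:

  # Sketch only - paths and flags are illustrative, not the upload script.
  # Pointing rclone at a disk share instead of a user share:
  rclone move /mnt/disk3/media gdrive_media_vfs:media \
    --min-age 15m \
    --transfers 3 \
    --delete-empty-src-dirs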

 

9 hours ago, axeman said:

Finally, @DZMM, where the hell is the donate button? People who get this thing going should send some kind of fiat or coin your way. This has tremendous implications.

Yeah, I guess it does save a lot of cash ;-). I've got over 0.5 PB stored between media and backups, which is at least £10k in drives, and that's before you factor in replacements, power, hassle, racks, etc. I was toying with putting up a donation or Patreon link or something on GitHub, as I read once that they exist.

 

I've just created a paypal.me link if anyone wants to add to my beer fund for when there's a bar I can visit!

 

Edited by DZMM
On 5/8/2020 at 8:25 PM, blizz said:

Do your mount settings affect the Plex library loading times when I go to the home page where there are sections like Recommended, On Deck, etc.?

rclone/Plex indexes the mount, so nothing is streamed when just browsing Plex.

 

3 hours ago, blizz said:

These are my settings. I've also read on the rclone forums that the buffer size should be smaller than vfs-read-chunk-size; in your settings it's twice as large, but it seems to work for me. Is this okay? I'm using these settings to play high-bitrate files. Should I also use them for a mount with lower-bitrate files, or is that overkill?

I remember reading those instructions and they did confuse me, so I'm not sure how accurate they are now. If you're happy with your playback experience I wouldn't rock the boat, i.e. if you're getting less than 5s spin-ups I wouldn't waste time experimenting with settings, as you'll probably only save a second or two at best.

9 hours ago, axeman said:

Edit:

The readme mentions use of Docker containers... if my Emby server is on a VM that's on the same network, does that make a difference (meaning, can I access the mounts that this script uses)?

Yes - if the share is shared on the network, e.g. I control my shares via the file manager on my Windows 10 VM.

3 hours ago, DZMM said:

Best place to start is the GitHub page, where the instructions are kept fairly up to date - @watchmeexplode5 has helped with that as well.

Yeah, I went through the readme... and even glanced through the code, and I'm still not sure how it all works. I want to get a good understanding before jumping in.

 

3 hours ago, DZMM said:

I think you can achieve that with MergerFS. Read up on 'ff' (first found, I think) and just put your backup location first.

Thanks! That would be crazy if I can save some wear and tear on my whole array.

 

3 hours ago, DZMM said:

You can choose disk shares as your upload source. I don't think you'll get multiple copies, but I don't know if you'll waste bandwidth repeatedly uploading the same file. Why not just use a user share, the Unraid way, so it doesn't matter what disk it's on?

So in my use case, I want this to be sort of a second (exact) copy of my array. This way, when I have a disk failure, I can just redownload all the data for that disk. I know that's really not the goal of this project, but I think having a backup of my array, in the "cloud" AND being able to stream off that is some crazy cool stuff. 

 

3 hours ago, DZMM said:

Yeah, I guess it does save a lot of cash ;-). I've got over 0.5 PB stored between media and backups, which is at least £10k in drives, and that's before you factor in replacements, power, hassle, racks, etc. I was toying with putting up a donation or Patreon link or something on GitHub, as I read once that they exist.

Yes... sort of; at least in my case, I'd still maintain my local UnRaid array. Either way, if/when I get this working, I'll be sending something your way.

 

3 hours ago, DZMM said:

I've just created a paypal.me link if anyone wants to add to my beer fund for when there's a bar I can visit!

Great - you should post that on GitHub... It probably won't make you a millionaire, but hey, a little bit is better than nothing.

1 hour ago, axeman said:

So in my use case, I want this to be sort of a second (exact) copy of my array. This way, when I have a disk failure, I can just redownload all the data for that disk. I know that's really not the goal of this project, but I think having a backup of my array, in the "cloud" AND being able to stream off that is some crazy cool stuff. 

If you're backing up a share, it'll be hard to restore a drive unless you make it easy, e.g. D1 has /media/movies, D2 has /media/tv_shows, and so on, so it's obvious which folder from the crypt needs downloading. To be honest, I think what you'll end up doing is just putting it all in the cloud. That's what most people have done, even if they only dipped a toe in to start with.
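
With that layout a restore is then one copy per disk folder - sketch only, the remote name and paths are illustrative:

  # Sketch only - remote name and paths are illustrative.
  # With a 1:1 disk-to-folder layout, restoring disk 1 is a single copy:
  rclone copy gdrive_media_vfs:movies /mnt/disk1/media/movies \
    --transfers 4 --progress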

 

1 hour ago, axeman said:

Great - you should post that on GitHub... It probably won't make you a millionaire, but hey, a little bit is better than nothing.

I did. I'm kind of curious to see if I get a few beers. I remember when I first put some ads on a blog I used to run about 15 years ago, and how shocked I was initially at how much spare change I was making; it eventually became a full-time gig for about 5 years!

7 hours ago, DZMM said:

If you're backing up a share, it'll be hard to restore a drive unless you make it easy, e.g. D1 has /media/movies, D2 has /media/tv_shows, and so on, so it's obvious which folder from the crypt needs downloading. To be honest, I think what you'll end up doing is just putting it all in the cloud. That's what most people have done, even if they only dipped a toe in to start with.

 

I did. I'm kind of curious to see if I get a few beers. I remember when I first put some ads on a blog I used to run about 15 years ago, and how shocked I was initially at how much spare change I was making; it eventually became a full-time gig for about 5 years!

Okay - I think I'm overcomplicating it. I'll probably just keep it in copy mode, use the UnRaid shares, and then decide what to do from there. Worst case, if a local drive fails, I can decide to either manually restore it or just let the cloud hold that drive's data and shrink the array.

 

Perhaps, like you say, eventually I'll be all cloud and only have a small array for irreplaceable data.

 

My fear is that Google, being the only bastion of hope, puts a limit or restriction of some sort in place (like Amazon did), and I'd have to build an array back up before the data gets deleted.

43 minutes ago, axeman said:

 

 

My fear is that Google, being the only bastion of hope, puts a limit or restriction of some sort in place (like Amazon did), and I'd have to build an array back up before the data gets deleted.

I've been going for 2 years and others for much longer... I think if it were a problem for Google they would have shut it down by now. In my view, the longer it goes on, the less likely they are to shut it down. Our storage is a drop in the ocean compared to some corporates, research universities, etc.

4 minutes ago, DZMM said:

I've been going for 2 years and others for much longer... I think if it were a problem for Google they would have shut it down by now. In my view, the longer it goes on, the less likely they are to shut it down. Our storage is a drop in the ocean compared to some corporates, research universities, etc.

Yeah - I agree we are certainly nothing in the grand scheme of things. I'm just worried about not having a fallback plan - just like how Wink is now extorting its users.

 

I am going to get started, at least at first in copy mode. I will have actual, real, script-related questions when I start down the path. Once again, thank you for your time and willingness to share.

20 hours ago, DZMM said:

rclone/Plex indexes the mount, so nothing is streamed when just browsing Plex.

 

I remember reading those instructions and they did confuse me, so I'm not sure how accurate they are now. If you're happy with your playback experience I wouldn't rock the boat, i.e. if you're getting less than 5s spin-ups I wouldn't waste time experimenting with settings, as you'll probably only save a second or two at best.

Ok great! Yeah, they work great, so I guess I won't fix what isn't broken. Thank you for the reply.

 

So far I've just been using a simple rclone mount and pointing Plex directly at it. This has the problem that I have to do a full scan to pick up any changes made to the remote; the partial scan in the Plex settings doesn't work for an rclone remote mount, right? Does mergerfs solve my problem, given that I only have read access to the remote gdrive and won't/can't upload from local to gdrive? In other words, can I use mergerfs to get automatic updates when changes are made to the remote and have them show up in Plex without a full scan?

1 hour ago, blizz said:

the partial scan in the Plex settings doesn't work for an rclone remote mount, right?

Yes it does, if done correctly. If you're making changes to the mount outside of rclone, that will be the source of your problem. Or your settings aren't right - the best advice I can give is to use my scripts.
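
The key bit is polling: the mount picks up remote-side changes on its own, so new files appear without a remount and Plex can detect them without a full rescan. A rough sketch of the relevant flags - remote name and paths are illustrative:

  # Sketch only - remote and paths are illustrative.
  # --poll-interval makes rclone notice remote-side changes and refresh
  # the dir cache, so new files appear in the mount between full scans.
  rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive \
    --allow-other \
    --dir-cache-time 168h \
    --poll-interval 1m &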

