DZMM

Guide: How To Use Rclone To Mount Cloud Drives And Play Files

935 posts in this topic


10 hours ago, testdasi said:

 

Something is making too many API calls, causing yours to be blocked.  You need to check what your various dockers are doing. From previous posts, it looks like Bazarr and Emby/Plex subtitle searches may be the main contributors.


I've run Plex and Sonarr refreshes a few times today already, plus calculated how much space I'm using (a lot of API calls to count things), and I'm nowhere close to the limit. Even against the per-100-seconds limit of 1,000, I only get to 20% on the worst day. So your dockers must be doing something very drastic to cause an API ban. You might want to give that docker its own client_id.

Once banned, there's nothing you can do but wait until your quota is reset. The reset usually happens at midnight US Pacific (where Google HQ is).

(You can see when it resets and how many API calls you have made from your API dashboard - https://console.developers.google.com/apis/dashboard - then click on Quota.)

 

That is assuming you have set up your own API + OAuth client_id and shared the team drive with the appropriate account.
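For reference, a remote using your own client_id looks roughly like this in rclone.conf. This is a minimal sketch: the remote name, placeholder values, and the team_drive line are illustrative, not taken from this guide.

```
[gdrive_media_vfs]
type = drive
client_id = 123456789-xxxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxxxxxxxxxx
scope = drive
team_drive = 0ABCdefGHIjklMNop
# the token entry is filled in by `rclone config` after the OAuth flow
```

With this in place, API calls from that remote count against your own project's quota rather than rclone's shared default.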

Hi, I have my own client_id, but I think it may be the way I've organised my files.

 

For movies, for example, I only have one main folder. TV series each have their own folder.

 

So maybe I need to create one folder per movie?

 

What's your setting in Plex for scanning?

 

[screenshot: Plex library scan settings]


I think it's a perfect storm kind of situation.

  • My French is terrible but I'm guessing you set it up to scan on partial change? When you are uploading files to gdrive, every little change will cause a scan of the folder.
  • You have all the movies in the same folder, which probably means Plex will rescan the entire folder, including files that were not changed (it doesn't know what has changed, just that something has, so it has to scan to find out).

You might want to disable automatic scanning and do it manually while you reorganise your library.

 

 

27 minutes ago, testdasi said:

I think it's a perfect storm kind of situation.

  • My French is terrible but I'm guessing you set it up to scan on partial change? When you are uploading files to gdrive, every little change will cause a scan of the folder.
  • You have all the movies in the same folder, which probably means Plex will rescan the entire folder, including files that were not changed (it doesn't know what has changed, just that something has, so it has to scan to find out).

You might want to disable automatic scanning and do it manually while you reorganise your library.

 

 

Yes, it's in French, sorry.

 

So how can I rescan manually?

 

Should I make one folder per movie inside the main one?

 

Movies

  • folder movie 1
  • folder movie 2

Thanks a lot


You untick that partial scan option.

On the main library page, hover your mouse over Libraries; there should be a button that, when clicked, shows various options, among which is rescanning the Plex library (or something like that). Click on that to rescan manually.

 

It is good practice to have each movie in its own folder under a main Movies folder

e.g. 

Movies\Movie 1

Movies\Movie 2

etc.

 

I don't use Radarr, but I think it does all the organisation for you, so that's a faster option if you know how to use it. I organise things manually, but I've been doing that for years.
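The reshuffle can also be scripted in a few lines. A rough sketch of the folder-per-movie move, using a temp directory as a stand-in for a real Movies folder (test on a copy before pointing it at real files):

```shell
#!/bin/bash
# Give every loose movie file its own folder named after the file
# (extension stripped), i.e. Movies/Movie 1.mkv -> Movies/Movie 1/Movie 1.mkv
MOVIES=$(mktemp -d)                      # stand-in for your Movies folder
touch "$MOVIES/Movie 1.mkv" "$MOVIES/Movie 2.mkv"

for f in "$MOVIES"/*.mkv; do
  d="${f%.*}"                            # folder name = filename minus extension
  mkdir -p "$d"
  mv "$f" "$d/"
done

ls "$MOVIES"
```

Swapping the temp directory for your real Movies path would do the actual move; the quoting matters because movie names usually contain spaces.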


Great, thanks, I will try that.

 

Sent from my Pixel 2 XL using Tapatalk

 

 

I found a little piece of software that automatically creates a folder for each file you have.

 

File2folder seems to be working really well, because I have around 7,000 movies.

1 hour ago, francrouge said:

Great, thanks, I will try that.

Sent from my Pixel 2 XL using Tapatalk

I found a little piece of software that automatically creates a folder for each file you have.

File2folder seems to be working really well, because I have around 7,000 movies.

Please note that it may not resolve your issue until you find out exactly what caused the high API calls. You might want to follow the other users in this topic who have multiple client_ids for different purposes. Then if one client_id is banned because of an accidental overload, you can switch to another.
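In rclone.conf, that just means two drive remotes pointing at the same storage but carrying different client_ids. All the names and values below are illustrative:

```
[gdrive_plex]
type = drive
client_id = plex-project-id.apps.googleusercontent.com
client_secret = secret-for-plex-project
scope = drive

[gdrive_bazarr]
type = drive
client_id = bazarr-project-id.apps.googleusercontent.com
client_secret = secret-for-bazarr-project
scope = drive
```

Each client_id has its own quota, so a Bazarr overload only gets `gdrive_bazarr` banned while Plex carries on through `gdrive_plex`.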

Mm, I didn't know that. I will check, thanks again.

Sent from my Pixel 2 XL using Tapatalk

2 hours ago, francrouge said:

Should I make one folder per movie inside the main one?

Movies

  • folder movie 1
  • folder movie 2

Thanks a lot

Yes!!  If you organise your files in one big folder, when Plex is told there is a change it will scan all the files in that folder.  Having a folder per movie is more efficient, as Plex will only scan that folder for changes.

Posted (edited)

Has anyone seen this behaviour?

 

I know that if Sonarr tries to do something to a file on a unionfs mount (and because the upload location is RW and the rclone mount is RO), it will make a copy of the file from the rclone mount to the upload location with whatever change it tried to make (e.g. a rename or a date change).

The problem is there is one particular episode (and only that one!) for which the copy and the original are identical in every way (e.g. the filename, dates, the data itself, etc.). So I have no idea what Sonarr is trying to change.

So you say just upload it to update the file? I did! And once it's done, Sonarr does the exact same thing again.

I think this behaviour has always been there; I didn't notice it previously because, before I moved to the gdrive model, writes went straight to storage.

The fact that it's a Doctor Who episode makes it even spookier. I'm considering just sodding it and deleting it.

Edited by testdasi

4 minutes ago, testdasi said:

Has anyone seen this behaviour?

 

I know that if Sonarr tries to do something to a file on a unionfs mount (and because the upload location is RW and the rclone mount is RO), it will make a copy of the file from the rclone mount to the upload location with whatever change it tried to make (e.g. a rename or a date change).

The problem is there is one particular episode (and only that one!) for which the copy and the original are identical in every way (e.g. the filename, dates, the data itself, etc.). So I have no idea what Sonarr is trying to change.

So you say just upload it to update the file? I did! And once it's done, Sonarr does the exact same thing again.

I think this behaviour has always been there; I didn't notice it previously because, before I moved to the gdrive model, writes went straight to storage.

The fact that it's a Doctor Who episode makes it even spookier. I'm considering just sodding it and deleting it.

Hmm, not sure what's going on there.  Maybe set the episode to unmonitored in Sonarr?

On 6/12/2019 at 4:35 PM, DZMM said:

This thread has got a lot more action than I, @Kaizac or @slimshizn probably ever hoped for.  There's been a lot of activity getting people up and running, so I'm wondering how some of the people in the background are getting on?

 

How intensively are other people using rclone?  Have you moved all your media?  How are you finding the playback experience?

 

Personally, I don't have any Plex content on my unRAID server anymore, except for my photos, and I've now got over 300TB of Plex content stored on gdrive, as well as another big crypt with my backups (personal files, VM images etc.).  I don't even notice the impact of streaming anymore, and when I do have any skips, I think they are actually down to local wi-fi issues rather than rclone.

 

 

It has been quite busy here; 29 pages now is something I'd never have imagined. Currently I'm using roughly 200TB stored on gdrive and 50TB at home, which will probably be cleared once I switch to a new remote server and shut down for the summer for upgrades, more power-efficient parts and such.

3 hours ago, slimshizn said:

It has been quite busy here; 29 pages now is something I'd never have imagined. Currently I'm using roughly 200TB stored on gdrive and 50TB at home, which will probably be cleared once I switch to a new remote server and shut down for the summer for upgrades, more power-efficient parts and such.

Damn, how can you have 200TB? Is it all media? (Just curious.)


Love your work @DZMM!

 

Got this working with relative ease.  Have you had any luck with Rclone Union yet?

46 minutes ago, sauso said:

Love your work @DZMM!

 

Got this working with relative ease.  Have you had any luck with Rclone Union yet?

It got pushed back, but it looks like the necessary changes to rclone union, allowing unionfs to be dropped, will be in the next release - 1.49.

 

https://github.com/ncw/rclone/milestone/36
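For anyone reading this later: in subsequent rclone releases the union backend takes a space-separated list of upstreams, so a merged remote replacing the unionfs mount can be defined along these lines in rclone.conf (the remote names here are illustrative, not from this guide):

```
[union_media]
type = union
upstreams = /mnt/user/rclone_upload/google_vfs gdrive_media_vfs:
```

Mounting `union_media:` then behaves much like the unionfs merge, with new writes placed according to the union's create policy.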

Posted (edited)
20 hours ago, francrouge said:

Damn, how can you have 200TB? Is it all media? (Just curious.)

Most of the files are Linux ISOs, yes. 😉

Edited by slimshizn


Thanks for the guide, I seem to have got everything working. When I move a file straight to the rclone_upload folder for upload, it moves instantly, but when I move a file into mount_unionfs, it takes a couple of minutes for a 10GB movie. Why is this, and how can I speed it up? Any ideas? :)


 

8 hours ago, guyturner797 said:

Thanks for the guide, I seem to have got everything working. When I move a file straight to the rclone_upload folder for upload, it moves instantly, but when I move a file into mount_unionfs, it takes a couple of minutes for a 10GB movie. Why is this, and how can I speed it up? Any ideas? :)

 

What is your unionfs line in your mount script?

That sounds like you set your mount_rclone as the RW location, so when you copy a file over, it tries to upload to gdrive live during the transfer.

rclone_upload is instant because it's local until you run the upload script.

Posted (edited)
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

I'm pretty sure I've set it up correctly? I was expecting it to be instant, but when using Krusader it was slow to copy to the unionfs folder.

Edited by guyturner797

2 hours ago, guyturner797 said:

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

I'm pretty sure I've set it up correctly? I was expecting it to be instant, but when using Krusader it was slow to copy to the unionfs folder.

Oops, sorry, I missed the "instant" keyword. It is not instant because Unraid will try to copy the file to the unionfs location and then delete the original. Unionfs isn't smart enough to realise the RW location is on the same media as the original.

On 6/29/2019 at 6:55 PM, Untamedgorilla said:

Piggybacking on @guyturner797's question, is it safe to just copy straight to rclone_upload and bypass unionfs to transfer items?

 

Yes - if you copy a file that already exists in mount_rclone, I think unionfs overwrites it.
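If you do bypass unionfs like this, the file needs to land at the same relative path under the upload folder that it would have in the merged mount. A tiny sketch, with a temp directory standing in for the real rclone_upload branch (the paths are illustrative):

```shell
#!/bin/bash
# The merged mount overlays rclone_upload on top of mount_rclone, so a file
# dropped into the upload branch appears in mount_unionfs at the same
# relative path (and shadows any copy already on the rclone mount).
UPLOAD=$(mktemp -d)            # stand-in for /mnt/user/rclone_upload/google_vfs
REL="Movies/Movie 1/Movie 1.mkv"
mkdir -p "$UPLOAD/$(dirname "$REL")"
touch "$UPLOAD/$REL"           # pretend this is the movie being copied in
echo "placed at $REL"
```

Getting the relative path wrong would leave the file appearing in the wrong place in the merged view, so mirroring the folder structure exactly is the one thing to be careful about.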


Hi guys,

Quick question: how do you manage your downloads with Sonarr or RSS feeds, etc.?

Do you keep a copy of the file offline as well as on Google?

Thanks

 

2 minutes ago, francrouge said:

Hi guys,

Quick question: how do you manage your downloads with Sonarr or RSS feeds, etc.?

Do you keep a copy of the file offline as well as on Google?

Thanks

 

Not sure if I understand you properly. You point Sonarr to your unionfs folder so it doesn't matter where you store your file.

Posted (edited)
8 minutes ago, Kaizac said:

Not sure if I understand you properly. You point Sonarr to your unionfs folder so it doesn't matter where you store your file.

I mean, when I download files with Sonarr in Unraid, from there do I need to upload them with the upload script?

Is there another way?

I'm trying to figure out how to download + post-process + upload files automatically via RSS feed, that's why.

Edited by francrouge

2 minutes ago, francrouge said:

I mean, when I download files with Sonarr in Unraid, from there do I need to upload them with the upload script?

Is there another way?

You could use mount_rclone as your RW folder and it will download directly to your Gdrive. However, this will be slowed by your upload speed, and writing directly to the mount will probably also cause problems. Rclone copy/move/etc. are intended to avoid those issues by doing file checks.
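The usual division of labour is: Sonarr finishes post-processing into the local rclone_upload folder, and a scheduled upload script sweeps it to the remote. A sketch of such a sweep - the remote name and the exact flags here are assumptions following this guide's naming, not the guide's actual script:

```
# --min-age skips files a download client may still be writing;
# --exclude ".unionfs/**" avoids uploading unionfs whiteout files.
rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: \
  --min-age 30m --exclude ".unionfs/**" --delete-empty-src-dirs -v
```

Run on a schedule (e.g. via the User Scripts plugin), this keeps the local folder as a staging area while everything ends up on gdrive, so nothing new is needed for the RSS-feed case.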
