Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


1 hour ago, francrouge said:

Hi, I noticed that Plex doesn't auto-scan the unionfs mount by itself.

Is that normal? Do you have to do it manually?

Thanks

Sent from my Pixel 2 XL using Tapatalk
 

Yes, it is. You have to tell Plex there's something new via 'Connect' in Radarr or Sonarr. Otherwise, Plex only does a periodic scan at whatever frequency you've set.

6 hours ago, Kaizac said:

Did you setup Traktarr within a VM or is it a plugin/docker?

I have a remote server that does all my downloading; it's just Ubuntu. Everything is set up with Dockers, but Traktarr is just a script that talks to Sonarr. The other Docker that I linked I haven't set up yet, but I thought it might be easier to do within Unraid since it is a Docker; maybe that's just me not understanding how to use Unraid.

7 minutes ago, francrouge said:

OK, but if I need to do the initial scan, the BIG scan, what do I do?

Thanks

Run 'Scan Library Files' in Plex.

In the library settings, also disable 'create video thumbnails', as Plex will download each whole uploaded file to generate them.

Many people say to disable 'Perform extensive media analysis' in Extras because it can cause API bans, but I've not had any problems, and because I run a local server I don't want to waste upload bandwidth if the Plex transcoding engine doesn't have all the data it needs.

17 minutes ago, DZMM said:

Run 'Scan Library Files' in Plex.

In the library settings, also disable 'create video thumbnails', as Plex will download each whole uploaded file to generate them.

Many people say to disable 'Perform extensive media analysis' in Extras because it can cause API bans, but I've not had any problems, and because I run a local server I don't want to waste upload bandwidth if the Plex transcoding engine doesn't have all the data it needs.

OK, so with that it should be able to scan my 3,000 movies?

Thanks a lot

On 12/21/2018 at 11:22 PM, Kaizac said:

Sorry, I didn't see your question to me; it got lost in the many posts :).

 

I'm currently still using only one Team Drive, and I'm using 5-6 APIs to upload 24/7, which is still working with the 8000k bwlimit. So for me there is no need to create another TD.

@Kaizac Thanks - I've moved all my files to one team drive, with 4 uploads running for over a day now with no problems, each upload using a unique Google account and different client credentials. My backlog will be gone today, which is great.

 

For anyone else wanting to try this, this is how my rclone config looks. Because TEAM_DRIVE_ID, PASSWORD_1 and PASSWORD_2 are the same in every pair, all uploads go to the same team drive; I then mount just one of the vfs remotes, team_drive1_vfs, and add it to my unionfs mount (see the sketch after the config):

 

[team_drive1]
type = drive
client_id = client_id_google_1
client_secret = client_secret_google_1
scope = drive
team_drive = TEAM_DRIVE_ID
token = token1

[team_drive1_vfs]
type = crypt
remote = team_drive1:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD_1
password2 = PASSWORD_2

[team_drive2]
type = drive
client_id = client_id_google_2
client_secret = client_secret_google_2
scope = drive
team_drive = TEAM_DRIVE_ID
token = token2

[team_drive2_vfs]
type = crypt
remote = team_drive2:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD_1
password2 = PASSWORD_2

[team_drive3]
type = drive
client_id = client_id_google_3
client_secret = client_secret_google_3
scope = drive
team_drive = TEAM_DRIVE_ID
token = token3

[team_drive3_vfs]
type = crypt
remote = team_drive3:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD_1
password2 = PASSWORD_2

[team_drive4]
type = drive
client_id = client_id_google_4
client_secret = client_secret_google_4
scope = drive
team_drive = TEAM_DRIVE_ID
token = token4

[team_drive4_vfs]
type = crypt
remote = team_drive4:crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD_1
password2 = PASSWORD_2
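
To show how the pieces then fit together, this is roughly what the mount and the parallel uploads look like - the paths, folder layout and bwlimit here are illustrative, not my exact script:

# mount a single vfs remote for reading - all four remotes see the same files
rclone mount --allow-other team_drive1_vfs: /mnt/user/mount_rclone/google_vfs &

# one upload job per remote, each using its own Google account and client_id;
# each job watches its own folder so two jobs never grab the same file
rclone move /mnt/user/rclone_upload/1 team_drive1_vfs: --bwlimit 8000k &
rclone move /mnt/user/rclone_upload/2 team_drive2_vfs: --bwlimit 8000k &
rclone move /mnt/user/rclone_upload/3 team_drive3_vfs: --bwlimit 8000k &
rclone move /mnt/user/rclone_upload/4 team_drive4_vfs: --bwlimit 8000k &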

 

32 minutes ago, DZMM said:

@Kaizac Thanks - I've moved all my files to one team drive, with 4 uploads running for over a day now with no problems, each upload using a unique Google account and different client credentials. My backlog will be gone today, which is great.

For anyone else wanting to try this, this is how my rclone config looks. Because TEAM_DRIVE_ID, PASSWORD_1 and PASSWORD_2 are the same in every pair, all uploads go to the same team drive; I then mount just one of the vfs remotes, team_drive1_vfs, and add it to my unionfs mount.

 

 

Glad you got it working as well! Really nice that we can clear out your backlog this easily. The same technique can be used for your backups: just create a new Tdrive for backups and use the same APIs as for your other Team Drive, or create new ones.

 

Regarding removing --rc: what exactly are the problems you've run into that made you remove it? For me it always seems to succeed with a timeout of 5 minutes.

18 minutes ago, francrouge said:

Yes, I can see it with Krusader.

Sometimes I get a very rare error if I've been changing the mounts a lot, which you have in your initial setup. In Plex, go to your library settings and browse (not add) to where one of your movies is, e.g. mount_unionfs/movies/Avengers (2012).

 

If you can't browse to the folder in Plex but you can in Windows/Krusader, then restart the Plex Docker. For some reason, if you unmount/remount unionfs while Plex, Radarr, Sonarr etc. are running, they sometimes can't see the folders. That's why in my script the Dockers are started only after the mounts are running.
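
The start-up order is basically this (paths and container names are placeholders, not my exact script):

# 1. mount the cloud remote in the background
rclone mount --allow-other team_drive1_vfs: /mnt/user/mount_rclone/google_vfs &

# 2. wait until the mount actually answers before touching it
until mountpoint -q /mnt/user/mount_rclone/google_vfs; do
    sleep 5
done

# 3. overlay local and cloud files into one tree
unionfs -o cow,allow_other /mnt/user/local/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

# 4. only now start the containers that read from the union
docker start plex radarr sonarr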


@DZMM I just noticed my other mounts were not working while the main one (which has --rc) does. When I run the mount command on its own it works. Was that also what you were experiencing, and why you removed --rc?

 

EDIT: it seems it was glitching out so that not all of the mount commands came through. So I put a sleep between every mount, rebooted, and it works now. I'm still getting a docker daemon error on startup though; I've already put in a sleep of 60 seconds, but that doesn't seem to solve it. You don't have this issue?
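
Something along these lines - the mount commands are abbreviated and the sleep length is just what worked here:

rclone mount team_drive1_vfs: /mnt/user/mount_rclone/tdrive1 &
sleep 10
rclone mount team_drive2_vfs: /mnt/user/mount_rclone/tdrive2 &
sleep 10
rclone mount team_drive3_vfs: /mnt/user/mount_rclone/tdrive3 &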

 

Strangely enough, it seems I just got API banned, and both APIs are not working for streaming. So maybe the ban happens at the TD level and not at the API/user level.

Edited by Kaizac
2 hours ago, Kaizac said:

You only need to add mount_unionfs/google_vfs to your Docker template. I add mount_unionfs/ as an R/W slave myself so it has access to all my folders. Then in Plex you create a library based on mount_unionfs/google_vfs/Movies.
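
In docker run terms that mapping comes out like this (the container-side path is whatever you choose in the template; this one is just an example):

-v '/mnt/user/mount_unionfs/':'/mount_unionfs':'rw,slave'

The Plex library then points at /mount_unionfs/google_vfs/Movies inside the container.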

I did what you told me and Plex seems to like that.

Plex has scanned 2,100 movies so far, so it's good and still going.

Thanks for the help

