Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


10 hours ago, JohnJay829 said:

I would like for some files to just be on my unRaid server and some on both gdrive and unRaid

To achieve this, add the local folder you want to appear in the mergerfs mount but not be uploaded to gdrive as LocalFilesShare2="path to folder you don't want uploaded".

 

If you change your mind, just move the files to LocalFilesShare - nothing will 'move' in the mergerfs mount but files will get uploaded.
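For example, a minimal sketch of the relevant mount-script settings, assuming a hypothetical share /mnt/user/keep_local for the files you never want uploaded:

LocalFilesShare="/mnt/user/local" # files here appear in the mergerfs mount AND get uploaded by the upload script
LocalFilesShare2="/mnt/user/keep_local" # hypothetical share: appears in the mergerfs mount but is never uploaded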


This is my config below now:

 

[googleFi]
type = drive
client_id = xxx
client_secret = xxx
scope = drive
service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tfi1.json
team_drive = xxx
server_side_across_configs = true

[googleFi_crypt]
type = crypt
remote = googleFi:GCryptFI
filename_encryption = standard
directory_name_encryption = true
password = xxx
password2 = xxx

 

I have created 600 service accounts (SAs) and added them to a Group.

However, the guide states:

source Team Drive (tdsrc) and destination Team Drive (tddst)

What is that?

 

Can't I just add the group email to the Team Drive and give all the SAs access that way?

 

Where do I see if my service accounts are added correctly to Team Drive?

1 hour ago, Bjur said:

What is that?

 

Can't I just add group email to Team Drive and get all the SA over that way?

 

Where do I see if my service accounts are added correctly to Team Drive?

Ignore the tdsrc and tddst part. They are specific to AutoRclone and not to the buzz script.

Yes, add the email to the Team Drive (on the Gdrive GUI).

Check the Gdrive GUI, go to the Team Drive and see the access permission list.
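You can also check from the command line by doing a test listing with one of your SA files (a sketch reusing the sa_tfi1.json path from the config above; if the SA has been added correctly, the Team Drive contents will list, otherwise rclone returns an error):

rclone lsd googleFi: --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts/sa_tfi1.json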

On 4/29/2020 at 12:03 PM, testdasi said:

Ignore the tdsrc and tddst part. They are specific to the AutoReclone and not to the buzz script.

Yes, add the email to the Team Drive (on the Gdrive GUI).

Check the Gdrive GUI, go to the Team Drive and see the access permission list.

So I got it almost working 100%.

My setup is I have my media files located locally in /mnt/user/Videos

I have the following Rclone config:

 

My mount script settings are:

RcloneRemoteName="googleFi_crypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="nzbget plex sonarr radarr ombi" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

 

My upload settings are:

RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="googleFi_crypt" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="googleFi_crypt" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/local" # location of the local files without trailing slash you want to rclone to use
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone mount is located without trailing slash  e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

 

Questions are:

1. If I want my local folder /mnt/user/Videos uploaded to GCryptFI and still kept locally initially, should I copy all the data to /mnt/user/mount_mergerfs/googleFi_crypt, or how should I handle that? I still want to keep a local backup.

 

2. If I want to mount another crypt, googleSh_crypt, can I mount it with the same mount script, or do I need to create another?

 

3. I read that people are using separate upload remotes linked to their crypt, so the upload remote has a separate client ID to avoid an API ban. Now that I'm using service accounts, should I worry about that?

I have created a remote googleUpFi, but since googleFi_crypt is already linked to another remote, how do I get googleFi_crypt to be linked to googleUpFi?

 

Thanks for the help. I'm almost there, just want to get the last bits right. :)

 

16 hours ago, Bjur said:

1. If want to have my local folder /mnt/user/Videos uploaded to GCryptFi and still keep locally initially. Should I copy all data to /mnt/user/mount_merger_fs/googleFI_Crypt or how should I handle that? I still want to keep a local backup.

RcloneCommand="sync" (better) or RcloneCommand="copy"

 

16 hours ago, Bjur said:

2. If I want to mount another googleSh_Crypt can I mount it in the same mountshare script, or do I need to create another?

 

Another script

 

16 hours ago, Bjur said:

3. I read that people are using separate upload remotes and link to crypt. So upload remote has separate client ID to avoid API ban. Now that I'm using service accounts should I worry about that?

You could get API bans for other reasons, so it's a good idea to isolate your remotes.
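For example, a sketch of what the extra remotes could look like in rclone config, reusing your names - the important bits are a different client_id on the upload drive remote, and identical passwords on both crypts so they read and write the same files:

[googleUpFi]
type = drive
# use a different client_id/client_secret from googleFi so any ban only hits this remote
client_id = yyy
client_secret = yyy
scope = drive
# point at the same Team Drive as googleFi
team_drive = xxx

[googleUpFi_crypt]
type = crypt
remote = googleUpFi:GCryptFI
filename_encryption = standard
directory_name_encryption = true
# same passwords as googleFi_crypt so both crypts decrypt the same files
password = xxx
password2 = xxx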

 

 


Thanks for the answer and this guide.

 

1. I read earlier in the thread that you mentioned your plugin wasn't supposed to copy, only move. Will this be a safe approach, or should I manually copy my existing Videos folder to the mergerfs folder?

 

3. I've read this quoted comment below from you many times in regards to the upload remote, but I still don't get it, sorry.

No tutorial needed. If you've set up gdrive_media_vfs to be gdrive:crypt, then just create another remote with another name pointing to the same location, i.e. gdrive:crypt, with the same passwords. The only difference is to create a different client_ID so that only that one gets the ban, if any.

To be honest I've only had an API ban once I think and that was when I didn't know what I was doing yonks ago.

 

When I have my two crypts defined:

googleFi_crypt, which links to googleFi

googleSh_crypt, which links to googleSh

 

So my folders containing Fi-videos go to googleFi_crypt, and folders containing Sh-videos go to googleSh_crypt.

When I have a download client and create a remote for it, like gdrive_media_vfs to follow your naming, how do I link it to the 2 existing crypts, which are already linked to the other folders?

Where/how do I point the new remote to the same crypt folder?

 

Thanks again for the help, much appreciated. My service accounts work, so I'm almost there and want to get the last part right.

12 hours ago, Bjur said:

1. I read earlier in the thread that you mentioned your plugin wasn't suppose to copy but only move. Will this be a safe approach or should I manually copy my existing videos folder to merger_fs folder?

The latest version of the script supports move, copy and sync.

 

12 hours ago, Bjur said:

So my folders containing Fi-videos goes to googleFI_Crypt and folders containing Sh-videos goes to googleSh_Crypt.

When I have a download client and creates a remote for that like gdrive_media_vfs to follow your naming how do I link it to the 2 existing crypts, which already are linked to the other folders?

Where/how do I point the new remote to the same crypt folder?

I'm not really understanding what you want to do. If you want to point an encrypted remote to the same folder on gdrive, just do that in rclone config, i.e.

 

[remote1]
type = crypt
remote = gdrive:crypt
# both crypts must use the same password and password2 to read the same files
password = xxx
password2 = xxx

[remote2]
type = crypt
remote = gdrive:crypt
password = xxx
password2 = xxx

 


Has anyone had issues with Stopping/Rebooting/Shutting down the Array when rclone mount is active?

I've been testing this today: stopping the array after a fresh reboot (before I run rclone_mount) works fine. As soon as I run rclone_mount and try to stop the array, it doesn't work.

Syslogs

 

May  2 17:34:15 XXX-XXX emhttpd: shcmd (440): rmdir /mnt/user
May  2 17:34:15 XXX-XXX root: rmdir: failed to remove '/mnt/user': Device or resource busy
May  2 17:34:15 XXX-XXX emhttpd: shcmd (440): exit status: 1
May  2 17:34:15 XXX-XXX emhttpd: shcmd (442): /usr/local/sbin/update_cron
May  2 17:34:15 XXX-XXX emhttpd: Retry unmounting user share(s)...
May  2 17:34:20 XXX-XXX emhttpd: shcmd (443): umount /mnt/user
May  2 17:34:20 XXX-XXX root: umount: /mnt/user: target is busy.
May  2 17:34:20 XXX-XXX emhttpd: shcmd (443): exit status: 32
May  2 17:34:20 XXX-XXX emhttpd: shcmd (444): rmdir /mnt/user
May  2 17:34:20 XXX-XXX root: rmdir: failed to remove '/mnt/user': Device or resource busy
May  2 17:34:20 XXX-XXX emhttpd: shcmd (444): exit status: 1
May  2 17:34:20 XXX-XXX emhttpd: shcmd (446): /usr/local/sbin/update_cron
May  2 17:34:20 XXX-XXX emhttpd: Retry unmounting user share(s)...

 

lsof gives nothing back

root@XXX-XXX:~# lsof /mnt/*
root@XXX-XXX:~# lsof /mnt/user/

The only way I can get the array to stop is by manually running

 

fusermount -uz /mnt/user/

Can someone shed some light please?

Thanks

22 hours ago, DZMM said:

The latest version of the script supports move, copy and sync.

 

I'm not really understanding what you want to do. If you want to point an encrypted remote to the same folder on gdrive, just do that in rclone config. [...]

 

In regards to the remotes:

I have two types of videos and have created 2 remotes and 2 crypts.

1. googleFi_crypt (movies) links to googleFi, containing one type of videos I want to stream.

2. googleSh_crypt (shows) links to googleSh, containing another type of videos I want to stream.

I want to upload my videos to these 2 crypts.

Besides that, I have a DL client whose downloads I will be uploading. From what I can read, I should create another remote in rclone config for this; I would call that remote googleUP. That remote is not encrypted, but I would like the content from the DL client (Sab/Sonarr) to be uploaded to my 2 already-created crypts. The upload remote should be there so that if an API ban comes, it won't affect my 2 streaming shares.

But the 2 crypts already created are linked to the original remotes, which I set up in rclone config. So what should I add to my rclone config file?

Hope it's clearer now.

 


@Bjur,

 

Personally, and to save a lot of headaches, I would structure it differently. Keep it simple and it should avoid a lot of problems.

 

Something along these lines is what sonarr and radarr love:
 

gcrypt/
├── _Movies
│   └── Movie Name
│       └── Movie Name.mkv
└── _Tv
    └── Tv Show Name
        └── Season XX
            └── TV Show SXXEXX.mkv

With this you can freely move files into whatever directory you want at /mnt/user/mergerfs/{movie, tv, whatever - even make a new folder} and also point any downloaders to send files to these locations as well.

 

They will be temporarily copied to /mnt/user/local/[remotename]/{Folder or File}. 

When you run the upload script, they will be moved/copied/synced from the local portion to your teamdrive.

 

Everything you place in the crypt folder will be encrypted by rclone "automagically". So it will look and act normal to you, but if you go to your drive in something like a web browser, it will be all sorts of gibberish characters.
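If you want to see that for yourself, a quick sketch (using the remote names from the config below):

# listing through the crypt remote shows the readable names
rclone lsd gcrypt:
# listing the same folder through the bare drive remote shows the encrypted names
rclone lsd gdrive:crypt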

 

 

Set up your config something like this:

[gdrive]
type = drive
client_id = xxxxxxxx
client_secret = xxxxxxxxxxxxx
scope = drive
token = {"access_token":"xxxxxxxxxxxxxxxxxx"}
server_side_across_configs = true

[gcrypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxxxxxxx

 

No need for an upload remote. The upload credentials will be handled by the service accounts you created, so there isn't any positive use case for you to make a separate upload remote.

 

The option for an additional upload remote is just there in case somebody needs it. When you use service accounts and an upload is called with an SA, that will be used for the credentials. So in this case, it will upload through the [gdrive] remote (after passing through the crypt) with the SA credentials. It will ignore your token, client_id and client_secret and only use the service account for credentials. The SAs rotate, so you will theoretically not get API bans for excessive upload. That being said, some people like to keep it separate for easier tracking purposes or other individual use cases.

 

 

Your settings in the scripts would look something like:

Mount:

RcloneRemoteName="gcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data


Upload:

RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gcrypt" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gcrypt" # If you have a second remote created for uploads put it here.  Otherwise use the same remote as RcloneRemoteName.

The rest of the values can be left as you posted above (make sure to fill in your service account settings correctly).
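For reference, the service-account section of the upload script looks something like this - variable names may differ slightly between script versions, so check the comments in your copy (sa_tfi matches the naming from your config):

UseServiceAccountUpload="Y" # Y/N. Choose whether to use service accounts for uploads
ServiceAccountDirectory="/mnt/user/appdata/other/rclone/service_accounts" # path to your SA json files without trailing slash
ServiceAccountFile="sa_tfi" # SA file name WITHOUT counter or .json e.g. sa_tfi for sa_tfi1.json, sa_tfi2.json...
CountServiceAccounts="15" # number of SA files to rotate through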

 

If you want an unencrypted remote in addition to your crypt, you will have to make another remote in your rclone config and run another copy of the script with its own settings for that to upload. I'd personally put it on a different team drive just for bookkeeping's sake, but you could link it to the same one if you want (that might get confusing with the file structure though - best to give it its own home).

 

You can still use your service accounts with the other unencrypted remote as well. No need to enter them into your rclone.conf - the script applies them automatically if you fill out the settings for it. That way your SAs' credentials will be used for uploads and you won't have any API issues for excessive upload.

 

33 minutes ago, watchmeexplode5 said:

@teh0wner

I have the same issue/errors on unmount, but I haven't looked into it too much. Honestly, there is always something holding up my unmount :/ (my fault, not unraid's).
I'll let you know if I dig anything up on what's causing it.

My suspicion is it's a pending upload, but again I've never bothered to investigate.

13 hours ago, DZMM said:

My suspicion is it's a pending upload, but again I've never bothered to investigate.

I've not even tried uploading.

Fresh reboot, manually run rclone_mount, stop. It always fails, 100%. I've not got to the bottom of it either, but I'll report back if I do.

I was thinking of maybe writing another script that checks if an upload is running, and if not, runs fusermount -uz on the rclone mounts and then stops the array. Do you think that'll cause issues?

Edit: Something along the lines of the below:

 

#!/bin/bash

##########################
### fusermount Script ####
##########################

RcloneRemoteName="google_drive_encrypted_vfs"
RcloneUploadRemoteName="google_drive_encrypted_vfs" # remote checked for a running upload; same as the mount remote here
RcloneMountShare="/mnt/user/XYZ/remote_storage"
MergerfsMountShare="/mnt/user/XYZ/merged_storage"

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_fusermount script ***"

# wait for any in-progress mount script to finish
while [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; do
    echo "$(date "+%d.%m.%Y %T") INFO: mount is running, sleeping"
    sleep 5
done

# wait for any in-progress upload to finish
while [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running" ]]; do
    echo "$(date "+%d.%m.%Y %T") INFO: upload is running, sleeping"
    sleep 15
done

# unmount mergerfs first, then the rclone mount beneath it
fusermount -uz "$MergerfsMountShare"
fusermount -uz "$RcloneMountShare"

echo "$(date "+%d.%m.%Y %T") INFO: *** rclone_fusermount script finished ***"

exit

 


Here is a little teaser of an update I've been working on!

 

Addition of the beta Rclone GUI (created by negative0) which can show drive activity.

 

More info about rclone's GUI project can be found here: Rclone WebUI (React)

The project is fairly new, but it has some pretty active development and official support from Rclone, so hopefully it keeps improving rapidly 😛

 

[Screenshot: Rclone WebUI dashboard]

 

On 5/2/2020 at 10:59 PM, watchmeexplode5 said:

Personally, and to save a lot of headaches, I would structure it differently. Keep it simple and it should avoid a lot of problems. [...]

@watchmeexplode5 Thank you very much for this elaborating answer/guide and also for @DZMM for making it all possible.

 

My videos are already organized as you describe above, in a Plex-friendly format (Filebot is excellent), and I also include the year in titles to make things easier.

 

I will try to explain my thoughts on the first point, regarding the 2 remote crypts (Videos & Shows).

The reason is that I read that a Team Drive "only" supports 400,000 files in total. With metadata files it quickly fills up, and when I reach 400,000 files I will have a serious problem, I think? What are your thoughts on that?

 

I won't make a remote upload crypt then, if you don't think it's necessary. So, to understand completely: the right approach would be to have the Sab DL client download to a standard user share and have Filebot move files to the /mnt/user/mergerfs/shows folder? It would not make sense to give Sab its temp/finished directories in the /mnt/user/mergerfs/ folders, correct?

 

A note on the last section: what are your thoughts on creating a non-encrypted remote? I thought about creating a separate Team Drive for my home pictures, but Google Photos would be the thing to use for that, and it has its own app entirely, so a Team Drive would not be used for that, right?

 

Lastly, are you using sync or move? I'm a little afraid to use sync, because I read that it can cause problems if you are not cautious in how you use it. So my thought is to copy my current /mnt/user/movies piece by piece (since I don't have enough space for duplication) to /mnt/user/mergerfs/movies and use move, and once I see it's working as it should, I will delete most of my local copies except the ones I really don't want to risk losing. Does that sound sane?

 

Again, thanks for the help - it's VERY appreciated, and I'm really looking forward to using this solution. It's the best tech addon I've seen in ages and I'm excited to try it :) It will really help and make my HDDs obsolete :)

 

PS: Your WebGUI looks very interesting and would really be helpful. Nice work.

 

 

 

 


@Bjur

 

Sorry for the long post but....

 

Metadata is a hard one with cloud drives. Some others might chime in on how they handle metadata for large libraries.

  • Currently I store all my metadata locally. When the metadata is stored on the cloud there are a few issues, one being the file limit like you said. Another is rapid access to the metadata: with rclone overhead, and plex/emby pulling the metadata too fast all the time, you will likely receive some form of API ban and slow interfaces when browsing. So most choose to store it locally on something fast like an NVMe or SSD (the default setting in plex).
  • If you are utilizing chapter/preview images (the thumbnail that appears when you scrub through a video file), make sure plex/emby generates them BEFORE uploading. If you scrape for preview images once the files are on the drive, plex needs to download THE WHOLE file to generate the image. That will result in crazy download amounts and an API ban quickly.

 

  • Regarding the 400,000 limit, it might be best to have @DZMM chime in here. If I recall, he uploads to a teamdrive (which allows 750+ GB/day via SAs) and later runs another script to move from the teamdrive to a standard drive (since server ---> server transfers don't count towards quota). Standard drives don't have a file limit, but service accounts can't be utilized with them, so you need to flip-flop a little to solve the file limit and upload limit issues together.

For Downloader/Filebot folder location:

  • Again, some users might take a different approach, but I point to my ../local/REMOTE/XXXX. Files added there will be included in the merged mount, so any application referencing /mergerfs/ will see your cloud files and local files as if they were in the same place. You could instead point to /mergerfs/ and the files would initially be placed in /local/ during writing, but this introduces more IO overhead and results in slower moves/writes/extractions. Not much slower, but it's definitely noticeable on any large file.
  • Regarding temp folders like incompletes/seeds/etc., I keep those in the /local/REMOTE/downloads/[respective] folders (the script automatically creates the most common use-case directories). No need to have them anywhere else on the array, and files can quickly be moved from their temp ---> proper location when needed. Again, writing to /local/ is better practice than writing to /mergerfs/ for the reasons stated above.

 

Photos:

  • I don't know about google photos (I don't use it), but to my knowledge it's completely separate from teamdrives. I'm not sure you can link your google photos to a teamdrive. Somebody correct me if I'm wrong though.

 

Encrypted vs. un-encrypted: kind of an ethical question (extremely safe and bad for the gdrive community vs. pretty safe and good for the gdrive community).

  • *To my knowledge* - if this is a business gsuite account, Team-drive/google drive files are yours and only yours. Google does not look at these. The only thing google does with the files is create an md5-like fingerprint of each file on upload. This is for de-duping purposes (if two users upload the same file, google only stores one copy on its servers - like hyperlinking on steroids). This only holds true if it is YOUR gsuite account (i.e. you pay for the business drive). If you are using something like a free edu-provided teamdrive, then admins of the institution/business account could have access to your drive. So technically un-encrypted remotes are fine, and many, many users use un-encrypted without issue. I've never heard of google deleting personal un-encrypted accounts. The only ones that get deleted are people selling streaming services from their gdrive to 100s of users.
  • Encrypted remotes are great though, for the obvious reason: it's encrypted and only for your eyes regardless of anything. The downside is that, since your file is now unique to you, google can't de-dupe it. This means google has to store way more duplicates of the exact same data == costs more. Eventually google could take action and start enforcing the 5-user requirement for unlimited space in the TOS, fingerprinting files for copyright protection, and adding more restrictions on uploads/downloads.
  • Lots choose encrypted for peace of mind, but it's a long-running debate.

I can't comment on sync - I don't use it. I only use move and have never had a problem with DZMM's scripts. You could always use copy and verify that it went as expected by looking at your /local and /rclone mounts. But again, I've never had issues with copy. If they were absolutely irreplaceable files I might keep a backup somewhere on the array while verifying, but I personally still keep irreplaceable things on my own drives.
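If you want more than an eyeball check after a copy, rclone can also verify a local folder against a crypt remote - a sketch, assuming the gcrypt remote and local layout from above (--one-way only checks that everything local made it to the remote):

rclone cryptcheck /mnt/user/local/gcrypt gcrypt: --one-way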

 

Feel free to comment back or message me if you have any more issues/questions. 

4 hours ago, Bjur said:

I will try to explain my thoughts on the first point, regarding the 2 remote crypts (Videos & Shows).

The reason is that I read that a Team Drive "only" supports 400,000 files in total. With metadata files it quickly fills up, and when I reach 400,000 files I will have a serious problem, I think? What are your thoughts on that?

 

A note on the last section: what are your thoughts on creating a non-encrypted remote? I thought about creating a separate Team Drive for my home pictures, but Google Photos would be the thing to use for that, and it has its own app entirely, so a Team Drive would not be used for that, right?

 

watchmeexplode5 already provided some answers, so I'll just add a few other points.

 

The 400k object count limit (object = file + folder) is a hard limit per team drive. In practice, anything approaching about 150k objects will cause that particular teamdrive to be perceivably slower (been there, done that).

So keep that in mind, e.g. you might want to split TV Shows into smaller libraries.
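If you want a rough idea of where a teamdrive stands, rclone can count the files on it (a sketch using a remote name from earlier posts; note that rclone size counts files, while Google's limit also counts folders):

rclone size googleFi: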

 

In terms of metadata, what sort of metadata?

Typically the Plex db (which includes what I would call metadata), for example, is stored locally (and should be stored locally). You really don't want that stuff on the team drive because of the high latency.

File attributes (which can also be considered metadata) depend on the service itself, but they're not part of the object limit.

So back to the question, what other kind of metadata?

 

I have an unencrypted tdrive for family photos (among other things). All our mobile devices are automatically synced to this tdrive (1 folder per phone), and it is mounted on the server as well. So if I need a photo from a certain phone or need to push a file to a certain tablet, I can do it from the server.

The main reason the tdrive is not encrypted is because the Android sync app doesn't support rclone encryption.

1 hour ago, watchmeexplode5 said:

Regarding the 400,000 limit, it might be best to have @DZMM chime in here. If I recall, he uploads to a teamdrive (which allows 750+ GB/day via SAs) and later runs another script to move from the teamdrive to a standard drive (since server ---> server transfers don't count towards quota). Standard drives don't have a file limit, but service accounts can't be utilized with them, so you need to flip-flop a little to solve the file limit and upload limit issues together.

@Bjur I recommend you have just one mergerfs mount and add the second rclone remote as an additional local folder - I think I proposed this earlier (can't remember). I do, however, take a slightly different approach for my music, which would push my tdrive over 400K because of all the tracks and folders: I add it to my main tdrive, but then do an overnight rclone move to gdrive, where there's no object limit.
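For anyone wanting to do the same, the overnight job can be as simple as something like this (a sketch, assuming crypt remotes named tdrive_crypt and gdrive_crypt that share the same passwords; with server_side_across_configs = true set on the drive remotes, as in the configs above, the transfer stays on Google's side):

rclone move tdrive_crypt:music gdrive_crypt:music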

 

57 minutes ago, testdasi said:

The 400k object count limit (object = file + folder) is a hard limit per team drive. In practice, anything approaching about 150k objects will cause that particular teamdrive to be perceivably slower (been there, done that).

Now that is interesting! I've noticed that navigating plex and launching files has been slow of late, and I'm wondering if this is the cause. At the moment I have all my movies and tv shows in one tdrive - I think the time has come to create a couple more. Do you still aggregate your remotes into one mergerfs mount, or do you have multiple mergerfs mounts?

14 minutes ago, DZMM said:

[...] Do you still aggregate your remotes into one mergerfs mount, or do you have multiple mergerfs mounts?

I do both. Related stuff is aggregated e.g. media remotes are on the same mergerfs mount, backup remotes are on another mergerfs mount and so on.

9 minutes ago, testdasi said:

I do both. Related stuff is aggregated e.g. media remotes are on the same mergerfs mount, backup remotes are on another mergerfs mount and so on.

Ok, I'm going to split out my 4K content to start with (easiest) and then some of my movies, to see if that helps.


@watchmeexplode5: Wow, thanks again for a walk-through answer. Much appreciated. And thanks to @testdasi and @DZMM for the answers as well.

 

- If I start with the metadata.

I use Ember to generate metadata files, so I'm sure all the data is as I want it, partly because some of the .nfos/posters are non-English.

I use XBMCnfoMoviesImporter with Plex to add them.

I don't know if my use case for this is still the smartest thing to do, or whether I should just let Plex take care of it all in the future... I don't know, but much of the non-English content will be more difficult to get a match for, I think.

 

- chapter/preview images:

I'm not sure if I use that with Plex now. I get chapters on some titles, but perhaps that's because they are embedded in the files. Where do I check that? I guess the scrubbing is more an AppleTV feature with their remote. I have both an AppleTV and a Shield, but I'm using a Logitech remote.

I have disabled video thumbnails though, because they would take up way too much space for what they do (which shouldn't matter in the future). I don't know how much benefit there is to gain from this.

 

- For Downloader/Filebot folder location:

This is where I get a little confused. 

When you refer to ../local/REMOTE/xxxx, is that what the upload script calls the LocalFilesShare="/mnt/user/local" option? So that will be my local user share folder, which eventually gets uploaded to the cloud?

 

I have the following now: /mnt/user/local/googleFi_crypt

 

So I should let Sab do the following:

temp: /mnt/user/local/googleFi_crypt/Downloads/temp

Finished: /mnt/user/local/googleFi_crypt/Downloads/Finished

The Downloads folder is excluded in the scripts.

Then I should let Filebot look into /mnt/user/local/googleFi_crypt/Downloads/Finished and let it move/rename files to the /mnt/user/local/googleFi_crypt/Movies folder.

 

Have I understood this correctly?

 

I saw a user mention an option to keep new files local for 7 days and then upload, but that shouldn't apply here? The local files will be uploaded to the cloud when the upload script is run?

 

Encrypted vs. unencrypted: thanks for the explanation. I think I will go with encrypted though. Even though they haven't looked yet, they could in the future, and then people's stuff gets banned/deleted.

 

@testdasi: The metadata stuff is only my own created .nfo files etc., not the Plex metadata - that I will keep, as usual, in the appdata folder on my cache SSD. That makes sense, right?

Thanks for the recommendation on Photos - I will look into that after this is up and running :)

 

@DZMM: In regards to only one mergerfs mount - you most likely did. This is very new to me, so sorry for the stupid questions.

But wouldn't it make sense to have 2 mount scripts, 2 upload scripts and 2 crypts then - 1 for movies and 1 for shows - to divide things in case I someday reach the limit?

 

And last but not least: when having a lot of small .nfo metadata files, would it be best practice to keep them in the /mnt/user/mount_mergerfs/movies folders where all the metadata is, and then in the upload script use Command2="--exclude *.nfo" etc.?

 

Thanks again all for the BIG help you are providing:)

 

