Plexdrive


Recommended Posts

17 minutes ago, zirconi said:

DZMM: so are you moving everything to the cloud? I'm thinking of a way to keep my preferred movies/shows locally... maybe creating a new unRAID share called 'preferred' with movies and TV shows inside, manually moving the stuff I want to keep, of course.

 

So far I've moved all of my TV shows and a good chunk of my movies.  At first I selected the less important ones by moving them to a new share that rclone watches and uploads from, and which is part of the unionfs folder; now I'm going to focus on the smaller, less important ones so there's less chance of something going wrong.  I'm not too worried about uploading the more important ones though, as I'm sure that if Google scuppers the setup there'll be time to download my content if I want to.

Link to comment
2 hours ago, DZMM said:

I'm currently trying:

 


rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --umask 002 --bind 172.30.12.2 --cache-dir=/mnt/software/rclone/vfs --vfs-cache-mode writes --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs

- 32M to try and speed up library updates.  Launch times seem unaffected

- worked out that --cache-dir is where files written directly to the remote go before they are uploaded.  Moved it away from RAM

- set buffer-size and vfs-read-chunk-size-limit to 1G, so for my max of 4 concurrent streams I'll use a max of 8GB of RAM.  Beyond that, I should have enough RAM spare for 1 or 2 more streams; if not, hopefully the swapfile will kick in - I think it'll be rare I'll have 6 streams just from my online content
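
For reference, the arithmetic behind that 8GB estimate (a rough worst-case upper bound, not an rclone guarantee):

# rough worst case per stream = --buffer-size + --vfs-read-chunk-size-limit
#                             = 1G + 1G = 2G
# 4 concurrent streams        = 4 x 2G  = 8G of RAM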

 

Nice. So should I change my cache-dir location, or will an unassigned drive work? I'm thinking of going back to /mnt/user for the SSD pool since those files aren't that large. And agreed, I only allow a couple of users to access those shares as I have limited upload anyway.

Link to comment
4 minutes ago, slimshizn said:

Nice. So should I change my cache-dir location, or will an unassigned drive work? I'm thinking of going back to /mnt/user for the SSD pool since those files aren't that large. And agreed, I only allow a couple of users to access those shares as I have limited upload anyway.

Up to you - it depends on how much you are going to write to the mount rather than via a background upload job.  I don't currently write directly to the remote mount, so I've just dumped it somewhere for now.

Link to comment

Where is the updated plugin / how do I do this?  I tried getting the plugin from the first link, but it comes back with a generic error in the plugin manager...

 

I have been uploading to my Google/Plexdrive system manually via a web browser, and I'm tired of it stopping, or of forgetting I'm uploading and closing the browser...

 

Thanks

Myk

Link to comment
10 hours ago, slimshizn said:

I'd like to introduce something like this into the mix as well, to lower API hits. https://github.com/l3uddz/plex_autoscan

 

I'm confused by this one.  According to this thread, partial scanning doesn't work with unionfs mounts, but it's working fine for me and new files are showing in Plex as soon as Sonarr/Radarr notifies it.

 

https://forum.rclone.org/t/plex-with-rclone-mount-possible-to-auto-update-partial-scan/5525

  • Like 1
Link to comment
On 7/17/2018 at 5:48 AM, DZMM said:

 

I'm confused by this one.  According to this thread, partial scanning doesn't work with unionfs mounts, but it's working fine for me and new files are showing in Plex as soon as Sonarr/Radarr notifies it.

 

https://forum.rclone.org/t/plex-with-rclone-mount-possible-to-auto-update-partial-scan/5525

You have Sonarr and Radarr notifications set up for Plex? Does that help it do a partial scan?

Link to comment
  • 2 weeks later...
On 7/14/2018 at 8:20 PM, DZMM said:

 

So far I've moved all of my TV shows and a good chunk of my movies.  At first I selected the less important ones by moving them to a new share that rclone watches and uploads from, and which is part of the unionfs folder; now I'm going to focus on the smaller, less important ones so there's less chance of something going wrong.  I'm not too worried about uploading the more important ones though, as I'm sure that if Google scuppers the setup there'll be time to download my content if I want to.

 

Thanks for all your scripts, info and the time you've invested in getting everything right. I've been "following" you here and on the rclone forums. It's been quite a challenge to get everything working for my setup based on your scripts, and so far I'm at 80%, I think. However, there is something I don't get: if you point Sonarr/Radarr to your _upload folder (which is empty when there is nothing to be uploaded), how do you make Sonarr/Radarr know what is already in your library and which files still need to be downloaded or can be upgraded?

Link to comment
11 minutes ago, Kaizac said:

 

Thanks for all your scripts, info and the time you've invested in getting everything right. I've been "following" you here and on the rclone forums. It's been quite a challenge to get everything working for my setup based on your scripts, and so far I'm at 80%, I think. However, there is something I don't get: if you point Sonarr/Radarr to your _upload folder (which is empty when there is nothing to be uploaded), how do you make Sonarr/Radarr know what is already in your library and which files still need to be downloaded or can be upgraded?

You point them at your unionfs folders, which merge what's been uploaded with what's local.

  • Like 1
Link to comment
1 minute ago, DZMM said:

You point them at your unionfs folders, which merge what's been uploaded with what's local.

 

Thanks for the quick reply!

 

But how does Radarr/Sonarr then transfer the files to the upload folder? Did you point Sabnzbd at your upload folder?

Link to comment
Just now, Kaizac said:

 

Thanks for the quick reply!

 

But how does Radarr/Sonarr then transfer the files to the upload folder? Did you point Sabnzbd at your upload folder?

Unionfs merges the upload folder and the Google folder.  When Sonarr tries to add a new file, it is actually written to the upload folder, not the Google folder - when you later move it to the Google folder, to Sonarr it looks like it hasn't moved, as Sonarr is looking at the unionfs folder.

 

Radarr/Sonarr etc should be pulling files in from Sab/NZBGet etc, not Sab writing directly to your media folders.
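
As an illustration of that merge, a minimal unionfs-fuse sketch (branch paths are examples, not necessarily DZMM's exact ones):

# the local upload share is the writable (RW) branch; the rclone mount is
# read-only (RO). cow = copy-on-write, so every new write lands in the RW branch
unionfs -o cow,allow_other \
  /mnt/user/rclone_upload=RW:/mnt/disks/rclone_vfs=RO \
  /mnt/user/mount_unionfs

# sonarr/radarr/plex all point at the merged view (/mnt/user/mount_unionfs);
# when a background "rclone move" later shifts a file from rclone_upload to the
# remote, the merged view stays the same, so sonarr never sees a move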

  • Like 1
Link to comment
35 minutes ago, DZMM said:

Unionfs merges the upload folder and the Google folder.  When Sonarr tries to add a new file, it is actually written to the upload folder, not the Google folder - when you later move it to the Google folder, to Sonarr it looks like it hasn't moved, as Sonarr is looking at the unionfs folder.

 

Radarr/Sonarr etc should be pulling files in from Sab/NZBGet etc, not Sab writing directly to your media folders.

 

Thanks for the clarification - I'm amazed it works like that! Did you change anything in your mount settings (buffers and such) since you last tried this:

 

rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --umask 002 --bind 172.30.12.2 --cache-dir=/mnt/software/rclone/vfs --vfs-cache-mode writes --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs
Link to comment
12 minutes ago, Kaizac said:

 

Thanks for the clarification - I'm amazed it works like that! Did you change anything in your mount settings (buffers and such) since you last tried this:

 


rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --umask 002 --bind 172.30.12.2 --cache-dir=/mnt/software/rclone/vfs --vfs-cache-mode writes --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs

 

I've:

  • increased vfs-read-chunk-size-limit and buffer-size to 2G, as I'm more confident I've got enough memory after running for a few weeks
  • deleted --umask 002 as, being honest, I don't know what it does
  • removed --bind as I don't think it's needed for my setup
  • removed --cache-dir and --vfs-cache-mode as these are only needed if you write files directly to the vfs mount, which I don't - I use a scheduled rclone move job instead (see the sketch below) 
rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 2G --buffer-size 2G --log-level INFO gdrive_media_vfs: /mnt/disks/rclone_vfs --stats 1m

I'm tempted one day to try --vfs-read-chunk-size 64M, as sometimes I have to start a file twice, which I think is because it doesn't have enough info to start playing the first time.  The trade-off is my start times might be a little longer.
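
For completeness, a sketch of the kind of scheduled move job referred to above (paths, remote name and bandwidth limit are illustrative):

#!/bin/bash
# background upload job: moves completed files from the local upload share to
# the remote, so nothing is ever written directly to the vfs mount
#   --min-age 15m            skips files still being written or unpacked
#   --delete-empty-src-dirs  tidies the local share afterwards
#   --bwlimit 8M             stays roughly under Google's 750GB/day upload cap
rclone move /mnt/user/rclone_upload gdrive_media_vfs: \
  --min-age 15m --delete-empty-src-dirs --bwlimit 8M --log-level INFO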

  • Like 1
Link to comment
19 hours ago, DZMM said:

 

I've:

  • increased vfs-read-chunk-size-limit and buffer-size to 2G, as I'm more confident I've got enough memory after running for a few weeks
  • deleted --umask 002 as, being honest, I don't know what it does
  • removed --bind as I don't think it's needed for my setup
  • removed --cache-dir and --vfs-cache-mode as these are only needed if you write files directly to the vfs mount, which I don't - I use a scheduled rclone move job instead

rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 2G --buffer-size 2G --log-level INFO gdrive_media_vfs: /mnt/disks/rclone_vfs --stats 1m

I'm tempted one day to try --vfs-read-chunk-size 64M, as sometimes I have to start a file twice, which I think is because it doesn't have enough info to start playing the first time.  The trade-off is my start times might be a little longer.

 

Thanks, I've changed those in my configs as well.


Are you still getting slow writes from Radarr/Sonarr to your unionfs folders? To me it seems slower indeed, so maybe putting it on an SSD cache will help. I'll try that when I get a bigger SSD.

 

Another thing which is strange is that I have to run the mount script twice for it to mount the unionfs folders from within my Gdrive. First it mounts my Gdrive but not the folders within, and then I have to mount again to also get the subfolders. Maybe the mount is too slow the first time, so it needs some time to load the folders within.

 

Sonarr is also giving me permission errors in the logs - are you getting these as well?

 

18-7-31 17:05:02.2|Warn|MediaFileAttributeService|Unable to apply permissions to: /tv/The Blue Planet/Season 1

[v2.0.0.5228] NzbDrone.Mono.Disk.LinuxPermissionsException: Error setting file owner and/or group: EPERM
  at NzbDrone.Mono.Disk.DiskProvider.SetOwner (System.String path, System.String user, System.String group) [0x00057] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Mono\Disk\DiskProvider.cs:223 
  at NzbDrone.Mono.Disk.DiskProvider.SetPermissions (System.String path, System.String mask, System.String user, System.String group) [0x00008] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Mono\Disk\DiskProvider.cs:75 
  at NzbDrone.Core.MediaFiles.MediaFileAttributeService.SetMonoPermissions (System.String path, System.String permissions) [0x0000f] in C:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Core\MediaFiles\MediaFileAttributeService.cs:88 
Edited by Kaizac
Link to comment
7 hours ago, Kaizac said:

Are you still getting slow writes from Radarr/Sonarr to your unionfs folders? To me it seems slower indeed, so maybe putting it on an SSD cache will help. I'll try that when I get a bigger SSD.

 

Yep.  My family are going away next week, so I'm going to experiment with mounting unionfs at /mnt/user/rclone_vfs/movies rather than /mnt/disks/rclone_vfs/movies to see if file write speeds improve, whether hardlinking is supported, etc.
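
(A quick way to check the hardlink half of that experiment once mounted - paths illustrative:)

# try to create a hardlink inside the union mount
touch /mnt/user/rclone_vfs/movies/.linktest
ln /mnt/user/rclone_vfs/movies/.linktest /mnt/user/rclone_vfs/movies/.linktest2 \
  && echo "hardlinks work" || echo "hardlinks not supported"
rm -f /mnt/user/rclone_vfs/movies/.linktest /mnt/user/rclone_vfs/movies/.linktest2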

 

7 hours ago, Kaizac said:

 

 

Another thing which is strange is that I have to run the mount script twice for it to mount the unionfs folders from within my Gdrive. First it mounts my Gdrive but not the folders within, and then I have to mount again to also get the subfolders. Maybe the mount is too slow the first time, so it needs some time to load the folders within.

 

If you just mount gdrive, does it work?  Try putting a pause in - I've added a 5s delay between mounting gdrive and unionfs, as I had similar problems.
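
Something along these lines - wait for the rclone mount to be up before layering unionfs on top (a sketch; mount flags trimmed, paths illustrative):

# start the rclone mount in the background
rclone mount --allow-other gdrive_media_vfs: /mnt/disks/rclone_vfs &

# a fixed "sleep 5" works; this polls for up to 30s instead
for i in {1..6}; do
    mountpoint -q /mnt/disks/rclone_vfs && break
    sleep 5
done

# only now build the unionfs on top of the mounted remote
unionfs -o cow,allow_other /mnt/user/rclone_upload=RW:/mnt/disks/rclone_vfs=RO /mnt/user/mount_unionfs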

 

Not sure what the permission errors are.

Edited by DZMM
  • Like 1
Link to comment
9 minutes ago, DZMM said:

 

Nope.  My family are going away next week, so I'm going to experiment with mounting unionfs at /mnt/user/rclone_vfs/movies rather than /mnt/disks/rclone_vfs/movies to see if file write speeds improve, whether hardlinking is supported, etc.

 

If you just mount gdrive, does it work?  Try putting a pause in - I've added a 5s delay between mounting gdrive and unionfs, as I had similar problems.

 

Not sure what the permission errors are.

 

I'm mounting the unionfs on the /mnt/user/Media folder (one for Movies, one for Series) and that works perfectly, no errors. I think my slow speed has to do with my array being overloaded by IO, since I'm also writing a lot of files to an rclone cache. Add Sabnzbd extraction and Sonarr moving files to the mix and it's too much for the regular HDD array.

 

Thanks for the tip about the delay, I saw that in your config but didn't know why you put it in. Will try that later on.

 

Link to comment
2 hours ago, slimshizn said:

Try moving your download folder to an unassigned drive, as well as appdata. After I did, my IO issues are almost non-existent.

 

Are you using a regular spinner for this or an SSD? I think a big SSD cache should work the same, right?

Link to comment
5 hours ago, slimshizn said:

I use an SSD cache and the unassigned drive is an SSD. If you use a single-drive, non-RAID config, XFS is said to have the best performance. 

 

Thanks, just upgraded to a 1TB SSD cache using XFS and it seems a lot faster now!

 

@DZMM: how does Sonarr/Radarr handle "upgrades" with the unionfs setup? I think it can only upload, but not delete the worse-quality versions, right? Or does that work as usual?

Link to comment
8 minutes ago, Kaizac said:

 

@DZMM: how does Sonarr/Radarr handle "upgrades" with the unionfs setup? I think it can only upload, but not delete the worse-quality versions, right? Or does that work as usual?

 

Yes, it works as normal.  Unionfs hides old files in the mount, i.e. they still exist on Google, but Sonarr and Radarr can't see them.  If you want to actually delete them - which is a good idea, because if you change your mount settings or rescan the Google folders for whatever reason it will pick up these files - then you need to do this:

 

On 6/9/2018 at 7:36 PM, DZMM said:

Just wanted to share something useful I found this afternoon.  I was struggling to get my head around what happens when Sonarr/Radarr etc delete or upgrade files, and I learnt from this post https://enztv.wordpress.com/2017/03/09/unionfs-cleanup/ that unionfs cleverly hides the files from the fuse mount, but doesn't actually delete them on Google Drive - i.e. if you mounted gd on another system (or had to rebuild your Sonarr/Radarr library) you'd suddenly find lots of old files being picked up - nightmare!

 

To actually delete the files on gd, and to avoid potential conflicts from identical filenames existing, I've added this script to Radarr and Sonarr so that when existing media changes, the old copies are deleted from gd during post-processing; it also runs overnight just to make sure.  To do this I created another mount of my gd at /mnt/disks/google_decrypt (/mnt/disks/pd_decrypt is what I use for my decrypted plexdrive) for the script to actually delete files from gd.

 


#!/bin/bash

# Deletes the files on gd that unionfs has hidden (i.e. files sonarr/radarr
# have deleted or upgraded), removes the unionfs hidden markers, and clears
# out any empty directories left behind.

for share in tv_kids tv_adults movies_kids movies_adults; do

    fuse="/mnt/disks/fusion/${share}_fuse"        # unionfs mount
    gd="/mnt/disks/google_decrypt/${share}_gd"    # matching gd mount

    # every file deleted/upgraded via the union leaves a *_HIDDEN~ marker
    find "$fuse/.unionfs" -name '*_HIDDEN~' | while IFS= read -r line; do
        oldPath=${line#"$fuse/.unionfs"}
        newPath="$gd${oldPath%_HIDDEN~}"
        rm "$newPath"    # delete the real file on gd
        rm "$line"       # remove the hidden marker
    done

    # delete empty directories left in the .unionfs tree
    find "$fuse/.unionfs" -mindepth 1 -type d -empty -delete

done

exit
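
For the overnight run mentioned above, one option is a cron entry, e.g. via Unraid's User Scripts plugin custom schedule (the script path here is hypothetical):

# run the unionfs cleanup every night at 4am
0 4 * * * /boot/config/scripts/unionfs_cleanup.sh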

 

 

Link to comment
