Plexdrive

172 posts in this topic

45 minutes ago, slimshizn said:

Thanks for the scripts! I'm going to get away from plexdrive and try this out today.

Edit: Could you post how your rclone configs are?

These are the ones I'm trying to understand so I can edit the script.


mkdir -p /mnt/disks/rclone_vfs
mkdir -p /mnt/disks/rclone_cache_old

Sorry, I thought I'd removed the bits that weren't relevant.  I was previously using rclone's cache, which is great at merging local files that haven't been uploaded yet with cloud files, unionfs-style, and uploading the local files automatically.  But media launch times were shocking, so I moved to vfs in tandem with unionfs.  My /mnt/disks/rclone_cache_old mount is where I've mounted and decrypted the cache files that hadn't been uploaded yet, so that I can upload them manually.

 

Here's my rclone config with irrelevant bits removed this time: 

 

[gdrive]
type = drive
client_id = xxxxxxxxxxxxxxxx.apps.googleusercontent.com
client_secret = 
scope = drive
root_folder_id = 
service_account_file = 
token = 

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

[upload_gdrive_media]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

[backup]
type = crypt
remote = gdrive:backup
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

 

45 minutes ago, slimshizn said:

Currently I have this sole crypt


[uploadcrypt]
type = crypt
remote = google:crypt
filename_encryption = standard
directory_name_encryption = true
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***


This cache which reads both of the Movies and Series folders inside of the crypt

Other than the 'plexcache', I'm pretty much set up the same as in this guide: https://blog.laubacher.io/blog/unlimited-plex-server-on-unraid. So the gdrivecrypt remote containing /mnt/disks/pd/crypt would now not be in use.

 

Correct - just create a vfs mount with your uploadcrypt: remote and you're good to go.  Mount it at the same location as gdrivecrypt: and you shouldn't have to update anything else.
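In case it helps anyone following along, here's a minimal sketch of that swap. The remote name and mount point are assumptions pulled from the posts above, not a definitive command - adjust to your own setup:

```shell
# Hypothetical sketch: reuse the old gdrivecrypt: mount point with the
# existing uploadcrypt: remote, so downstream paths don't change.
REMOTE="uploadcrypt:"              # existing crypt remote from the config above
MOUNTPOINT="/mnt/disks/pd/crypt"   # same location gdrivecrypt: was mounted at

CMD="rclone mount --allow-other --dir-cache-time 24h \
--vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G \
--buffer-size 256M $REMOTE $MOUNTPOINT"

# Print the assembled command; run it with a trailing '&' once happy with it.
echo "$CMD"
```

Because the mount point is unchanged, Plex and the dockers keep seeing the same paths.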

Edited by DZMM

3 hours ago, DZMM said:

# Mount rclone vfs
rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs &


This is giving me "Error: unknown flag: --vfs-read-chunk-size", although the help output lists it as a flag.

touch: cannot touch '/mnt/user/software/rclone_install_running': No such file or directory
The network is up - installing rclone
plugin: installing: https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
plugin: downloading https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
plugin: downloading: https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg ... done
plugin: not reinstalling same version
Error: unknown flag: --vfs-read-chunk-size
Usage:
rclone mount remote:path /path/to/mountpoint [flags]

Flags:
--allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
--attr-timeout duration Time for which file/directory attributes are cached. (default 1s)
--daemon Run mount as a daemon (background mode).
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem.
-h, --help help for mount
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
-o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem.
--umask int Override the permission bits set by the filesystem.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks.
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. -1 is unlimited.
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.


Fixed it - for some reason there was an extra space or something in there.

Edit: I'm just making sure about this portion right here. I've already installed the plugin, so do I need to add the install and uninstall parts to these scripts for it to work right?

 

plugin remove rclone.plg
rm -rf /tmp/rclone

if [[ -f "/mnt/user/software/rclone_install_running" ]]; then
    rm /mnt/user/software/rclone_install_running
    echo "install running - removing dummy file"
else
    echo "Passed: install already exited properly"
fi


Also when you have a chance can you post about how you have Sonarr and Radarr run the union cleanup script? Thank you again.

Edited by slimshizn
1 hour ago, slimshizn said:

Fixed it - for some reason there was an extra space or something in there.

Edit: I'm just making sure about this portion right here. I've already installed the plugin, so do I need to add the install and uninstall parts to these scripts for it to work right?

 

plugin remove rclone.plg
rm -rf /tmp/rclone

if [[ -f "/mnt/user/software/rclone_install_running" ]]; then
    rm /mnt/user/software/rclone_install_running
    echo "install running - removing dummy file"
else
    echo "Passed: install already exited properly"
fi


Also when you have a chance can you post about how you have Sonarr and Radarr run the union cleanup script? Thank you again.

I install the rclone plugin via a script because I have a pfSense VM, so I can't use the main plugin - it needs connectivity when unRAID starts, whereas I don't have connectivity for a minute or two until my pfSense VM kicks in.

 

I have that install check there to make sure I don't run the script twice - the script creates a dummy file when it starts, checks the file isn't there before starting, and removes it when it stops.  Once things settle down a bit, I'm going to set the script to run every 5 minutes or so, so that if the mount drops for some reason, it will be re-mounted.
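For anyone copying this pattern, here's a standalone sketch of the dummy-file lock described above (using /tmp so it runs anywhere; the real script uses /mnt/user/software/rclone_install_running):

```shell
# Dummy-file lock: skip the run if a previous instance is still going.
LOCKFILE="/tmp/rclone_install_running"

if [[ -f "$LOCKFILE" ]]; then
    echo "script already running - exiting"
else
    touch "$LOCKFILE"          # claim the lock
    # ... install / mount / mountcheck work would go here ...
    echo "doing mount work"
    rm "$LOCKFILE"             # release the lock when the work is done
fi
```

Scheduled every 5 minutes or so (e.g. via the User Scripts plugin's custom cron `*/5 * * * *`), a dropped mount would get picked up on the next pass, while the lock stops overlapping runs.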

 

In Radarr/Sonarr you can run scripts under Settings -> Connect by adding a 'Custom Script'.  I just created an extra docker mapping for /scripts --> /boot/config/plugins/user.scripts/scripts/ .  I did it this way as I might have extra scripts in the future.
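As a hypothetical illustration of that mapping (in the unRAID docker template it's an added Path entry; shown here as the equivalent docker CLI flag, with the image name just an example):

```shell
# Map the user.scripts folder on the host to /scripts inside the container.
HOST_DIR="/boot/config/plugins/user.scripts/scripts/"
CONTAINER_DIR="/scripts"
MAPPING="-v ${HOST_DIR}:${CONTAINER_DIR}"

# Illustration only - in practice this is set in the container's template.
echo "docker run $MAPPING ... linuxserver/sonarr"
```

The Custom Script path inside the container would then be something like /scripts/your_cleanup_script.sh.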

 

[Screenshot: Sonarr Settings -> Connect, showing the Custom Script entry]

Just now, slimshizn said:

Awesome, thank you for all the help. Working great so far. 

 

Brilliant - what kind of launch times are you getting?  How does it compare to PD? 

 

I never really got PD working, and whilst investigating the rclone bits I realised I could reduce the number of moving parts I needed to figure out.  The support on the rclone forum is as good as on this one, whereas with PD I couldn't see anywhere to go.


I had PD working for about 5 minutes, then it just wouldn't work for me. Now launches are pretty quick - barely noticeable at all. If I have any issues I'll be sure to report them here; for now I'm going to let it run and go mow lol.


DZMM, I tried a custom version of your install/mount and unmount scripts and everything is working smooth as butter - thanks a lot. I streamed 30 minutes of a movie; it started in 3 seconds with no pauses.

Now I need to think about automation (and unionfs mounts)... decisions, decisions


Forgot to ask about adding that script location to the docker. Since it's pointing there, should it be RO/Slave? Thanks

12 minutes ago, slimshizn said:

Forgot to ask about adding that script location to the docker. Since it's pointing there, should it be RO/Slave? Thanks

Hmm, I hadn't considered that - I'm searching the forum to see if anyone's ever added a user script to a docker before, and how.

23 minutes ago, zirconi said:

DZMM, I tried a custom version of your install/mount and unmount scripts and everything is working smooth as butter - thanks a lot. I streamed 30 minutes of a movie; it started in 3 seconds with no pauses.

Now I need to think about automation (and unionfs mounts)... decisions, decisions

 

It is cool, isn't it? 🙂

 

It makes you rethink how to use storage when unlimited cloud storage is so cheap via this method and pretty much indistinguishable from local storage.  If and when any of my local drives die, it's going to be an interesting decision for me what to do.  I've already removed one (small) HDD to free up space for an SSD, and I'm contemplating removing another to make way for another SSD I'm not using.

18 minutes ago, DZMM said:

It makes you rethink how to use storage when unlimited cloud storage is so cheap via this method and pretty much indistinguishable from local storage.  If and when any of my local drives die, it's going to be an interesting decision for me what to do.  I've already removed one (small) HDD to free up space for an SSD, and I'm contemplating removing another to make way for another SSD I'm not using.

I can't risk it, knowing that Google could snap their fingers and start enforcing the 5-user minimum. Local plus cloud it stays lol


One problem I haven't been able to overcome is slow writes by radarr and sonarr to my unionfs mount - anyone else having this problem?

2 hours ago, slimshizn said:

Forgot to ask about adding that script location to the docker. Since it's pointing there, should it be RO/Slave? Thanks

It's fine

 

43 minutes ago, Squid said:

Nope.  /boot isn't an unassigned device, so you can just leave it at RW.  Not that RW:Slave causes any problems though.

 

3 hours ago, DZMM said:

One problem I haven't been able to overcome is slow writes by radarr and sonarr to my unionfs mount - anyone else having this problem?

It's working fine for me, but I'm mounting my unionfs mount differently to you - it's mounted under /mnt/user/Media/Cloud/Movies instead of your /mnt/disks/Movies.

 

#######  Mount unionfs   ##########

# check if mounted
if [[ -f "/mnt/user/Media/Cloud/Series/mountcheck" ]] && [[ -f "/mnt/user/Media/Cloud/Movies/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    rm /mnt/user/software/rclone_install_running
    exit
else
    # Unmount before remounting
    fusermount -uz /mnt/user/Media/Cloud/Series
    fusermount -uz /mnt/user/Media/Cloud/Movies

    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Cloud/Seriestmp=RW:/mnt/disks/crypt/Series=RO /mnt/user/Media/Cloud/Series
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/Media/Cloud/Moviestmp=RW:/mnt/disks/crypt/Movies=RO /mnt/user/Media/Cloud/Movies

    if [[ -f "/mnt/user/Media/Cloud/Series/mountcheck" ]] && [[ -f "/mnt/user/Media/Cloud/Movies/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series remount failed."
    fi
fi

Edited by slimshizn
spelling

13 hours ago, slimshizn said:

It's working fine for me, but I'm mounting my unionfs mount differently to you - it's mounted under /mnt/user/Media/Cloud/Movies instead of your /mnt/disks/Movies.
 

Don't you get errors not mounting at /mnt/disks?  Anyone else mounting at /mnt/user?

 

I might give /mnt/user another test, as maybe something else was causing the problems I was having.

 

Edit 1: I think I was getting errors because I was mounting at the top-level folder rather than a sub-folder, i.e. I should try /mnt/user/unionfs/fuse_folder_here, not /mnt/user/fuse_folder_here.

 

Edit 2: Yep, mounting to a subfolder in the share rather than the top level doesn't throw up an error.  Hopefully this means I can hardlink files now!

 

 

Edited by DZMM


Let me know if you get any errors with that setup; sometimes I have trouble doing a clean reboot and have to SSH in and use top and kill. Not sure if it's a container or if it's unionfs.


I've noticed that media files over 30GB tend to skip. Is there a way to edit the mount to have it play the file smoothly? My internet speed is more than sufficient.

rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO --stats 1m

9 minutes ago, slimshizn said:

I've noticed that media files over 30GB tend to skip. Is there a way to edit the mount to have it play the file smoothly? My internet speed is more than sufficient.

rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO --stats 1m

 

Hmm, I'll try a big file myself tonight, but I haven't had any problems after running for a few weeks.  Does it start skipping right at the start, or partway through?  Direct play or transcoding?

 

What's your line speed?  You could try bumping chunk-size-limit up to 2G, which most users on the rclone forum seem to have.  I've gone for 1G as I'm being a bit cautious - 2G for each concurrent stream could melt my machine at the moment, as it's doing a lot of jobs, e.g. 8 x 2G = 16GB of RAM used.

 

Also maybe try a vfs-read-chunk-size of 32M.  Other users use this lower amount, but I can't see why, as it almost seems like a wasted chunk request to me.  I think 128M might be a better size if you're stuttering at the start.

 

BTW the --cache-dir=/tmp/rclone/vfs bit definitely isn't needed - just me being cautious again.

25 minutes ago, DZMM said:

 

Hmm, I'll try a big file myself tonight, but I haven't had any problems after running for a few weeks.  Does it start skipping right at the start, or partway through?  Direct play or transcoding?

 

What's your line speed?  You could try bumping chunk-size-limit up to 2G, which most users on the rclone forum seem to have.  I've gone for 1G as I'm being a bit cautious - 2G for each concurrent stream could melt my machine at the moment, as it's doing a lot of jobs, e.g. 8 x 2G = 16GB of RAM used.

 

Also maybe try a vfs-read-chunk-size of 32M.  Other users use this lower amount, but I can't see why, as it almost seems like a wasted chunk request to me.  I think 128M might be a better size if you're stuttering at the start.

 

BTW the --cache-dir=/tmp/rclone/vfs bit definitely isn't needed - just me being cautious again.


I'm sitting at 64GB of RAM atm, so if that's the only issue I should be okay. Yeah, stuttering at the start every 10 seconds or so with larger files, with both direct play and transcoding; everything else is fine. I can try 2G for the chunk-size-limit, and if that doesn't help I can add vfs-read-chunk-size 32M.


Maybe try --vfs-read-chunk-size 128M at the start as well.

 

My theory is it starts playing once it has the first chunk, so for bigger files maybe the first 64M chunk (and maybe the second at 128M) is too small?

 

Edit: it might slow your launch down a bit though - dunno
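To make the theory concrete, here's a little sketch of how the chunk doubling plays out with 64M/1G settings, per the --vfs-read-chunk-size-limit description in the help output quoted earlier:

```shell
# Chunk size doubles after each chunk read, capped at the limit.
chunk=$((64 * 1024 * 1024))      # --vfs-read-chunk-size 64M
limit=$((1024 * 1024 * 1024))    # --vfs-read-chunk-size-limit 1G

for i in 1 2 3 4 5 6; do
    echo "request $i: $((chunk / 1024 / 1024))M"
    chunk=$((chunk * 2))
    if (( chunk > limit )); then chunk=$limit; fi
done
```

So requests go 64M, 128M, 256M, 512M, then stay at 1024M - a big file takes a few requests before it reaches full chunk size, which is why the starting value matters most for launch.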

Edited by DZMM


A 29GB file stutters almost right off the bat after about a 30-second wait to play; a 25GB file plays for about 5 minutes until it starts buffering and stuttering every 10-15 seconds.  (Didn't change anything yet.)


OK, I did a quick test on a 35GB file.

 

- I tried --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 2G --buffer-size 256M first to test my larger-chunk-size theory, and I had the same problems as you.  The device I was using was transcoding the audio, but it was running at less than 1x, so I had stutter.

- I then tried --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G and everything was perfect.  Audio transcoding ran easily over 1x and got throttled in the end - good.

 

I'll try my previous --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G later - I can't now as the family are watching the film I was testing. 

 

I haven't tried that particular film before on those settings, so it'll be interesting to see if the 2G was the fixer.  If all is good even with 1G, I'm going to stick with 2G anyway and also trial 32M.  Some of the users over on the rclone forum are using 32M, so I'll give it a whirl - I hadn't bothered as 64M was working in all the tests I've done so far.


I'm currently trying:

 

rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G --buffer-size 1G --umask 002 --bind 172.30.12.2 --cache-dir=/mnt/software/rclone/vfs --vfs-cache-mode writes --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs

- 32M to try and speed up library updates.  Launch times seem unaffected.

- Worked out that --cache-dir is where files written directly to the remote go before they are uploaded.  Moved it away from RAM.

- Set buffer and read-chunk-size-limit to 1G, so for my max of 4 concurrent streams I'll use a max of 8GB of RAM.  Even beyond that, I should have enough RAM spare for 1 or 2 more streams; if not, hopefully the swap file will kick in - I think it'll be rare that I'll have 6 streams just from my online content.
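A quick back-of-the-envelope check of that maths, assuming (as above) a rough worst case of buffer-size plus the largest in-flight chunk per stream:

```shell
# Rough per-stream worst case: --buffer-size plus the biggest chunk in flight.
BUFFER_G=1         # --buffer-size 1G
CHUNK_LIMIT_G=1    # --vfs-read-chunk-size-limit 1G
STREAMS=4

TOTAL_G=$(( STREAMS * (BUFFER_G + CHUNK_LIMIT_G) ))
echo "worst case: ${TOTAL_G}G"   # 4 streams -> 8G
```

This is only an estimate of rclone's memory behaviour, but it's a handy sanity check before bumping either flag.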

 

Edited by DZMM

