Sparkum

Plexdrive

172 posts in this topic


I've created a plexdrive plugin that uses pre-release builds. The reason for this is that the release version (version 4) has a network problem that was causing playback issues for me.

I've tested the pre-release builds and found they solve the network problem. I don't know exactly what changed, but the problems I had before have vanished. I've only been testing the pre-release builds for a few weeks, so far without problems. Your mileage may vary.

 

If anyone is interested (or cares), you can grab the plugin here. It is based on Starbix's release, but uses pre-release builds instead of release builds.

 


 

Thanks.


Hey, I'm still abroad, so I can't test it.

But I also added plexdrive5, so you can also use mine. I also removed mongodb, as plexdrive5 no longer uses it :)

On 7/30/2017 at 3:01 PM, starbix said:

Hey, I'm still abroad, so I can't test it.

But I also added plexdrive5, so you can also use mine. I also removed mongodb, as plexdrive5 no longer uses it :)

With this I get:

unknown shorthand flag: 't' in -t

The command looks like:

plexdrive -t /mnt/user/appdata/plexdrive/tmp -c /mnt/user/appdata/plexdrive -o allow_other /mnt/disks/plexdrive/ &

EDIT

I think `-t` is deprecated. I removed that part and the command now runs fine. Thanks for the update!

Edited by d2dyno


With the new stable 5.0.0, you need to include 'mount', e.g.:

plexdrive mount -c /mnt/user/appdata/plexdrive -o allow_other /mnt/disks/plexdrive/ &

 


Thank you for the feedback! I removed the plexdrive bash script altogether, and executing plexdrive now just runs the normal binary.

On 8/5/2017 at 8:52 AM, starbix said:

Thank you for the feedback! I removed the plexdrive bash script altogether, and executing plexdrive now just runs the normal binary.

I don't see an update available for the plugin?

On 5/27/2017 at 4:22 PM, starbix said:

Hey I wrote a plexdrive plugin. Here it is: https://raw.githubusercontent.com/Starbix/unRAID-plugins/master/plugins/plexdrive.plg

 

I'm totally new to plugin writing, but I also have a unionfs plugin if anyone else is interested.

 

Hopefully it works for you guys

Hi 

 

I just tried installing this for the first time and I'm getting this error when I try to install:

 

plugin: installing: https://raw.githubusercontent.com/Starbix/unRAID-plugins/master/plugins/plexdrive.plg
plugin: downloading https://raw.githubusercontent.com/Starbix/unRAID-plugins/master/plugins/plexdrive.plg
plugin: downloading: https://raw.githubusercontent.com/Starbix/unRAID-plugins/master/plugins/plexdrive.plg ... done

Warning: simplexml_load_file(): I/O warning : failed to load external entity "/boot/config/plugins/plexdrive.plg" in /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin on line 216
plugin: xml parse error

Help please.  Great guide here btw - I'm working my way through it now https://blog.laubacher.io/blog/unlimited-plex-server-on-unraid


ok, I've got it all working - it helped that I finally worked out how plexdrive works ;-) 

 

@starbix your upload script on https://blog.laubacher.io/blog/unlimited-plex-server-on-unraid is using the wrong crypt - it should be uploadcrypt, not gdrivecrypt.

 

@Sparkum can I get some pointers on how you added a temp folder, please? Here's my mount script - I'm not sure how to modify it:

mkdir -p /mnt/disks/pd
mkdir -p /mnt/disks/crypt

#This section mounts the various cloud storage into the folders that were created above.
if [[ -f "/mnt/disks/crypt/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check successful, crypt mounted."
exit
else
echo "$(date "+%d.%m.%Y %T") ERROR: Drive not mounted, remount in progress."
# Unmount before remounting
fusermount -uz /mnt/disks/crypt
fusermount -uz /mnt/disks/pd
/mnt/cache/appdata/plexdrive/plexdrive mount /mnt/disks/pd -c "/mnt/user/appdata/plexdrive" -o allow_other -v 2 &
rclone mount --max-read-ahead 512M --allow-other --allow-non-empty -v --buffer-size 1G gdrivecrypt: /mnt/disks/crypt &
if [[ -f "/mnt/disks/crypt/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Remount successful."
else
echo "$(date "+%d.%m.%Y %T") CRITICAL: Remount failed."
fi
fi
exit

Edit: just seen in an earlier post that -t is deprecated now

Edited by DZMM


My syslog is getting flooded with these entries every minute:

 

Mar  7 07:47:17 Highlander emhttpd: error: get_filesystem_status, 6475: Operation not supported (95): getxattr: /mnt/user/movies_adults_fuse
Mar  7 07:47:17 Highlander emhttpd: error: get_filesystem_status, 6475: Operation not supported (95): getxattr: /mnt/user/movies_kids_fuse
Mar  7 07:47:17 Highlander emhttpd: error: get_filesystem_status, 6475: Operation not supported (95): getxattr: /mnt/user/tv_adults_fuse
Mar  7 07:47:17 Highlander emhttpd: error: get_filesystem_status, 6475: Operation not supported (95): getxattr: /mnt/user/tv_kids_fuse

Anybody else having this problem? Here are my fuse commands (I'm using unionfs from Nerd Pack):

 

if [[ -f "/mnt/user/tv_adults_fuse/mountcheck" ]] && [[ -f "/mnt/user/tv_kids_fuse/mountcheck" ]] && [[ -f "/mnt/user/movies_adults_fuse/mountcheck" ]] && [[ -f "/mnt/user/movies_kids_fuse/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check successful, Series and Movies fuse mounted."
exit
else

echo "$(date "+%d.%m.%Y %T") ERROR: Series and Movies not mounted, remount in progress."

# Unmount before remounting
fusermount -uz /mnt/user/tv_adults_fuse
fusermount -uz /mnt/user/tv_kids_fuse
fusermount -uz /mnt/user/movies_adults_fuse
fusermount -uz /mnt/user/movies_kids_fuse
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/tv_adults_upload=RW:/mnt/disks/crypt/tv_adults_gd=RO /mnt/user/tv_adults_fuse
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/tv_kids_upload=RW:/mnt/disks/crypt/tv_kids_gd=RO /mnt/user/tv_kids_fuse
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_adults_upload=RW:/mnt/disks/crypt/movies_adults_gd=RO /mnt/user/movies_adults_fuse
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_kids_upload=RW:/mnt/disks/crypt/movies_kids_gd=RO /mnt/user/movies_kids_fuse
if [[ -f "/mnt/user/tv_adults_fuse/mountcheck" ]] && [[ -f "/mnt/user/tv_kids_fuse/mountcheck" ]] && [[ -f "/mnt/user/movies_adults_fuse/mountcheck" ]] && [[ -f "/mnt/user/movies_kids_fuse/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Remount of Series and Movies successful."
else
echo "$(date "+%d.%m.%Y %T") CRITICAL: Remount failed."
fi
fi

 

10 minutes ago, Dimtar said:

Did you get any further @DZMM

yeah, it was because I was putting my shares in /mnt/user - moved them to /mnt/disks


Mounted plexdrive with my gdrive inside.

 

I can go to my plexdrive mount, which has my "secure" folder in it from my rclone mount. Since this is obscured, will it cause a problem for Plex? If so, what can I do so the filenames AREN'T hidden?

 

EDIT:

 

Tried to follow the above guide: created a new rclone mount at the plexdrive location with the same keys/passwords, and now my plexdrive mount is empty - it won't even show my gdrive information.

Now I have a plexdrive mount with the secure folder with NOTHING inside...

I then have a decrypt mount of THAT folder, also with nothing inside... do I just have to wait for it to cache everything before it shows up?

Edited by Nyghthawk

On 3/27/2018 at 6:42 AM, Nyghthawk said:

Mounted plexdrive with my gdrive inside.

 

I can go to my plexdrive mount, which has my "secure" folder in it from my rclone mount. Since this is obscured, will it cause a problem for Plex? If so, what can I do so the filenames AREN'T hidden?

 

EDIT:

 

Tried to follow the above guide: created a new rclone mount at the plexdrive location with the same keys/passwords, and now my plexdrive mount is empty - it won't even show my gdrive information.

Now I have a plexdrive mount with the secure folder with NOTHING inside...

I then have a decrypt mount of THAT folder, also with nothing inside... do I just have to wait for it to cache everything before it shows up?

Not sure what's gone wrong, but this is how I'm set up:

 

  1. Created a gdrive mount with rclone
  2. Then mounted that gdrive using plexdrive, with the same ClientID and ClientSecret as in 1
  3. Then created a crypt at gdrive:crypt and rclone uploads to crypt:tv_shows and crypt:movies. In your plexdrive mount you will now see the encrypted movies and tv_shows folders
  4. Now create a new remote with remote = /mnt/disks/pd/crypt (or wherever you have your plexdrive mounted), using the same passwords you used for 3. This is the folder you point Plex at (RO)
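For anyone mapping these steps onto rclone.conf, here's a rough sketch of the two crypt remotes - the cloud-side crypt from step 3 and the local one from step 4. Remote names and paths are illustrative, not my exact config; the key point is both share the same passwords:

```
[gdrivecrypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***

[pdcrypt]
type = crypt
remote = /mnt/disks/pd/crypt
filename_encryption = standard
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
```

The second remote decrypts the local plexdrive mount in place, which is why the passwords have to match the first.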


Just wanted to share something useful I found this afternoon. I was struggling to get my head around what happens when sonarr/radarr etc delete or upgrade files, and I learnt from this post https://enztv.wordpress.com/2017/03/09/unionfs-cleanup/ that UnionFS cleverly hides the files from the fuse mount but doesn't actually delete them on google drive, i.e. if you mounted gd on another system (or had to rebuild your sonarr/radarr library) you'd suddenly find lots of old files being picked up - nightmare!

 

To actually delete the files on gd and to avoid potential conflicts from identical filenames, I've added this script to radarr and sonarr so that when existing media changes, the old copies are also deleted from gd during post-processing; it runs in an overnight script too, just to make sure. To do this I created another mount of my gd at /mnt/disks/google_decrypt (/mnt/disks/pd_decrypt is what I use for my decrypted plexdrive) for the script to actually delete files from.

 

#!/bin/bash

###########TV_KIDS##############

# IFS= read -r keeps leading whitespace and backslashes in filenames intact
find /mnt/disks/fusion/tv_kids_fuse/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/fusion/tv_kids_fuse/.unionfs}
newPath=/mnt/disks/google_decrypt/tv_kids_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/fusion/tv_kids_fuse/.unionfs" -mindepth 1 -type d -empty -delete

###########TV_ADULTS##############

find /mnt/disks/fusion/tv_adults_fuse/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/fusion/tv_adults_fuse/.unionfs}
newPath=/mnt/disks/google_decrypt/tv_adults_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/fusion/tv_adults_fuse/.unionfs" -mindepth 1 -type d -empty -delete

###########movies_KIDS##############

find /mnt/disks/fusion/movies_kids_fuse/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/fusion/movies_kids_fuse/.unionfs}
newPath=/mnt/disks/google_decrypt/movies_kids_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/fusion/movies_kids_fuse/.unionfs" -mindepth 1 -type d -empty -delete

###########movies_ADULTS##############

find /mnt/disks/fusion/movies_adults_fuse/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/fusion/movies_adults_fuse/.unionfs}
newPath=/mnt/disks/google_decrypt/movies_adults_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/fusion/movies_adults_fuse/.unionfs" -mindepth 1 -type d -empty -delete

exit
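The renaming in the script above hinges on bash's prefix stripping (`${var#pattern}`) and suffix stripping (`${var%pattern}`). A minimal sketch with a made-up filename, showing how a unionfs hidden-marker path gets rewritten to the matching path on the decrypted gd mount:

```shell
#!/bin/bash
# Hypothetical _HIDDEN~ marker left by unionfs after a delete/upgrade
line="/mnt/disks/fusion/tv_kids_fuse/.unionfs/Show/S01E01.mkv_HIDDEN~"

# Drop the fuse-mount prefix, keeping only the relative path
oldPath=${line#/mnt/disks/fusion/tv_kids_fuse/.unionfs}

# Re-root under the decrypted gd mount and drop the _HIDDEN~ suffix
newPath=/mnt/disks/google_decrypt/tv_kids_gd${oldPath%_HIDDEN~}

echo "$newPath"
# prints: /mnt/disks/google_decrypt/tv_kids_gd/Show/S01E01.mkv
```

Once `newPath` points at the real file on the gd mount, the script can `rm` both it and the marker.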

 

Edited by DZMM


Anyone who's using https://blog.laubacher.io/blog/unlimited-plex-server-on-unraid

Are you having issues with media taking forever to load, or not loading at all?
I'm thinking it has to do with the rclone mount options listed here:
rclone mount --max-read-ahead 512M --allow-other --allow-non-empty -v --buffer-size 1G gdrivecrypt: /mnt/disks/crypt &
If any rclone/plexdrive pros can help me out with this one - I can't wrap my head around it.

10 hours ago, slimshizn said:

Anyone who's using https://blog.laubacher.io/blog/unlimited-plex-server-on-unraid

Are you having issues with media taking forever to load, or not loading at all?
I'm thinking it has to do with the rclone mount options listed here:
rclone mount --max-read-ahead 512M --allow-other --allow-non-empty -v --buffer-size 1G gdrivecrypt: /mnt/disks/crypt &
If any rclone/plexdrive pros can help me out with this one - I can't wrap my head around it.

 

I had the same launch problems, which went away when I moved to the new rclone vfs feature, which allows you to directly mount your encrypted remote without incurring API hits, i.e. you don't need plexdrive. My launches are only a second or two longer than local files, i.e. I can't really tell.

 

Assuming your encrypted remote is gdrive as in the guide, try:

 

rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO gdrive: /mnt/disks/crypt

I'm not sure that:

--cache-dir=/tmp/rclone/vfs

is needed, as nothing seems to get written to a cache, but I've left it in to ensure anything that is goes to RAM. If you can be bothered trying to speed up launch times, try playing with

 --vfs-read-chunk-size 64M

This is the size of the first chunk rclone reads, which keeps doubling until the limit is reached - 1G in my case, i.e. it reads 64M, 128M, 256M, 512M, 1024M. A lower value could mean faster launches - it didn't seem to work for me, and 64M seems to be the currently recommended number over on the rclone forums. The buffer is there just in case of any connectivity problems, and in my scenario it gets filled quickly between the second and third chunk being downloaded.
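To make the doubling concrete, here's a quick sketch (plain shell arithmetic, not an rclone call) of the chunk sizes a single sequential read would request with a 64M start and a 1G limit:

```shell
#!/bin/bash
# Chunk sizes grow geometrically from --vfs-read-chunk-size (64M here)
# and stop doubling at --vfs-read-chunk-size-limit (1G = 1024M here).
chunk=64
limit=1024
progression=""
while [ "$chunk" -le "$limit" ]; do
  progression="$progression ${chunk}M"
  chunk=$((chunk * 2))
done
echo "chunks:$progression"
# prints: chunks: 64M 128M 256M 512M 1024M
```

So roughly 2GB of requests cover the first ~2GB of a file, after which every further request stays at the 1G cap.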

 

If you're worried about API calls you could increase --dir-cache-time, as rclone mounts now poll every minute for updates, so you could set an insanely high time. I've kept mine lowish as I'm still uploading content, and I have had a few problems where, when a new series was added, the polling didn't pick it up. I don't think this is a real concern, as I've uploaded 20TB of my content so far and I've done numerous plex library updates, restarts etc while I was removing duplicates and getting it all working.

 

You can write directly to the mount, but it's not recommended, as if the upload fails you lose the file. So, best to keep the scheduled upload job as in the guide, plus the unionfs mount so that plex can still see files that haven't been uploaded yet.

Edited by DZMM


Thanks for all the info DZMM, I will try the rclone way this evening.

Do you really trust google drive to keep all your movies/tvshows without a local backup? At the moment I'm simply uploading everything there as a backup with rclone... wondering whether to take your approach with the less preferred stuff, if everything works as it should.

15 minutes ago, zirconi said:

Thanks for all the info DZMM, I will try the rclone way this evening.

Do you really trust google drive to keep all your movies/tvshows without a local backup? At the moment I'm simply uploading everything there as a backup with rclone... wondering whether to take your approach with the less preferred stuff, if everything works as it should.

 

I'm doing exactly that - loading the less vital content: stuff I don't really care about if it gets nuked at a later date by google, or content I could replace if I was really bothered.

 

I've still got my local array that can hold about 30TB for content I can't afford to lose (I'm also backing this up to gd), although if a drive fails in the future I will have to consider whether to replace it or just load the content online.

 

Given some of the insane amounts of storage I've seen being stored on the rclone forums, including people using GD for seedboxes..., I don't think this is something google are currently bothered about. The bigger risk, I think, is that they will one day enforce the 5-user requirement for unlimited storage, rather than letting people like me through with one account for £6/pm.

12 minutes ago, DZMM said:

...I don't think this is something google are currently bothered about.  

 

This is the main thing worrying me... you know, I was a happy unlimited amazon cloud drive subscriber ;) I'm currently in a gsuite account with 5-6 other friends to avoid the first wave of limits that will surely happen soon. I have ~30TB at the moment... google is surely not making a big profit from us at $10 a month.

 

Can you please combine all your posts into one, if you have time, so I can better understand your way of doing things? I wanna test with a good working setup before pulling the plug :)

Thanks a lot

 

BTW: are you automating your downloads with radarr/sonarr from usenet? Downloading locally, then uploading and removing from local?

3 hours ago, zirconi said:

 

Can you please combine all your posts into one, if you have time, so I can better understand your way of doing things? I wanna test with a good working setup before pulling the plug :)

Thanks a lot

 

BTW: are you automating your downloads with radarr/sonarr from usenet? Downloading locally, then uploading and removing from local?

 

Ignore anything you've seen me posting here or on the rclone forums previously, as most of it was from when I really didn't know what I was doing or asking - now I'm at about 50% ;-)

 

I use the user scripts plugin to do most of the work, so I'll just post my scripts.

 

Rclone install - mounts rclone, creates unionfs mounts with a few checks built in.  Runs at array start

 

#!/bin/bash

mkdir -p /mnt/disks/rclone_vfs
mkdir -p /mnt/disks/rclone_cache_old

#######  Check if script already running  ##########

if [[ -f "/mnt/user/software/rclone_install_running" ]]; then

exit
else

touch /mnt/user/software/rclone_install_running

fi

#######  Check if rclone vfs mount is mounted  ##########

if [[ -f "/mnt/disks/rclone_vfs/tv_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/tv_kids_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_kids_gd/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check rclone_vfs mounted success."
else

#######  Check if internet / pfsense VM has started else add some pauses before installing rclone  ##########

if ping -q -c 1 -W 1 google.com >/dev/null; then
  echo "The network is up - installing rclone"
plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
else
  echo "The network is down - pausing for 5 mins"
sleep 5m

if ping -q -c 1 -W 1 google.com >/dev/null; then
  echo "The network is now up - installing rclone"
plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
else
  echo "The network is still down - pausing for another 5 mins"
sleep 5m
plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
fi

fi

# Mount rclone vfs mount

rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs &

fi

sleep 5

if [[ -f "/mnt/disks/rclone_vfs/tv_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/tv_kids_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_kids_gd/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: rclone_vfs mount success."
else
echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
rm /mnt/user/software/rclone_install_running
exit
fi

#######  Mount unionfs   ##########

# check if mounted

if [[ -f "/mnt/disks/unionfs_tv_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_tv_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_uhd/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."

rm /mnt/user/software/rclone_install_running

exit

else

# Unmount before remounting
fusermount -uz /mnt/disks/unionfs_movies_adults
fusermount -uz /mnt/disks/unionfs_movies_kids
fusermount -uz /mnt/disks/unionfs_movies_uhd
fusermount -uz /mnt/disks/unionfs_tv_adults
fusermount -uz /mnt/disks/unionfs_tv_kids


unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_adults_upload=RW:/mnt/disks/rclone_vfs/movies_adults_gd=RO /mnt/disks/unionfs_movies_adults
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_kids_upload=RW:/mnt/disks/rclone_vfs/movies_kids_gd=RO /mnt/disks/unionfs_movies_kids
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_uhd_upload=RW:/mnt/disks/rclone_vfs/movies_uhd_gd=RO /mnt/disks/unionfs_movies_uhd
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/tv_adults_upload=RW:/mnt/disks/rclone_vfs/tv_adults_gd=RO /mnt/disks/unionfs_tv_adults
unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/tv_kids_upload=RW:/mnt/disks/rclone_vfs/tv_kids_gd=RO /mnt/disks/unionfs_tv_kids

if [[ -f "/mnt/disks/unionfs_tv_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_tv_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_uhd/mountcheck" ]]; then
echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
else
echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series Remount failed."
fi

fi

#######  End Mount unionfs   ##########

rm /mnt/user/software/rclone_install_running

exit

rclone upload - radarr, sonarr etc add files to the unionfs (unionfs_***) mounts, with the files actually landing in the ***_upload folders, not the ***_gd folders that are the rclone folders. This script moves files to gd, i.e. the _gd folder, and removes them from the _upload folder. bwlimit is there to try not to upload more than 750GB/day.

 

I added the exclusions because hidden unionfs files were also getting uploaded from the _upload folders. I'm running this 24/7 at the moment - I'll probably schedule it to run every couple of hours once the backlog is cleared.

 

I created a new remote, upload_gdrive_media:, for the background upload, with the same username, password and location (gdrive:crypt) as my gdrive_media_vfs: remote.

 

#!/bin/bash

#######  Check if script already running  ##########

if [[ -f "/mnt/user/software/rclone_upload" ]]; then
exit
else
touch /mnt/user/software/rclone_upload

fi

# set folders

uploadfolderTVKids="/mnt/user/tv_kids_upload"
uploadfolderTVAdults="/mnt/user/tv_adults_upload"
uploadfolderMoviesKids="/mnt/user/movies_kids_upload"
uploadfolderMoviesAdults="/mnt/user/movies_adults_upload"
uploadfolderMoviesUHD="/mnt/user/movies_uhd_upload"

# move files

# Exclude patterns are quoted so the shell doesn't expand them locally
rclone move "$uploadfolderTVKids" upload_gdrive_media:/tv_kids_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle/**" --exclude "*.backup~*" --exclude "*.partial~*" --bwlimit 8500k
rclone move "$uploadfolderTVAdults" upload_gdrive_media:/tv_adults_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle/**" --exclude "*.backup~*" --exclude "*.partial~*" --bwlimit 8500k
rclone move "$uploadfolderMoviesKids" upload_gdrive_media:/movies_kids_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle/**" --exclude "*.backup~*" --exclude "*.partial~*" --bwlimit 8500k
rclone move "$uploadfolderMoviesAdults" upload_gdrive_media:/movies_adults_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle/**" --exclude "*.backup~*" --exclude "*.partial~*" --bwlimit 8500k
rclone move "$uploadfolderMoviesUHD" upload_gdrive_media:/movies_uhd_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude "*fuse_hidden*" --exclude "*_HIDDEN" --exclude ".recycle/**" --exclude "*.backup~*" --exclude "*.partial~*" --bwlimit 8500k

rm /mnt/user/software/rclone_upload

exit

 

unionfs cleanup - unionfs hides deleted mount (RO) files rather than deleting them, which would cause major problems if you ever mounted gd differently. This script cleans up the unionfs folder and actually deletes the old mount files, e.g. upgraded or deleted files. I run this script overnight, and I also run it from radarr and sonarr whenever they upgrade a file.

 

#!/bin/bash

###########TV_KIDS##############

# IFS= read -r keeps leading whitespace and backslashes in filenames intact
find /mnt/disks/unionfs_tv_kids/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/unionfs_tv_kids/.unionfs}
newPath=/mnt/disks/rclone_vfs/tv_kids_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/unionfs_tv_kids/.unionfs" -mindepth 1 -type d -empty -delete

###########TV_ADULTS##############

find /mnt/disks/unionfs_tv_adults/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/unionfs_tv_adults/.unionfs}
newPath=/mnt/disks/rclone_vfs/tv_adults_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/unionfs_tv_adults/.unionfs" -mindepth 1 -type d -empty -delete

###########movies_KIDS##############

# find /mnt/disks/unionfs_movies_kids/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
# oldPath=${line#/mnt/disks/unionfs_movies_kids/.unionfs}
# newPath=/mnt/disks/rclone_vfs/movies_kids_gd${oldPath%_HIDDEN~}
# rm "$newPath"
# rm "$line"
# done
# find "/mnt/disks/unionfs_movies_kids/.unionfs" -mindepth 1 -type d -empty -delete

###########movies_ADULTS##############

find /mnt/disks/unionfs_movies_adults/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/unionfs_movies_adults/.unionfs}
newPath=/mnt/disks/rclone_vfs/movies_adults_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/unionfs_movies_adults/.unionfs" -mindepth 1 -type d -empty -delete

###########movies_UHD##############

find /mnt/disks/unionfs_movies_uhd/.unionfs -name '*_HIDDEN~' | while IFS= read -r line; do
oldPath=${line#/mnt/disks/unionfs_movies_uhd/.unionfs}
newPath=/mnt/disks/rclone_vfs/movies_uhd_gd${oldPath%_HIDDEN~}
rm "$newPath"
rm "$line"
done
find "/mnt/disks/unionfs_movies_uhd/.unionfs" -mindepth 1 -type d -empty -delete

exit

rclone backup - backs up my local folders to a new remote, backup:. It syncs files to backup: and moves deleted files to backup:old, with old files deleted after 365 days (rclone delete --min-age 365d backup:old). I'm not quite sure what happens with versioning - will check one day. I run this daily at the moment, and I've excluded the bigger shares until my vfs uploads have finished.

 

#!/bin/bash

#######  Check if script already running  ##########

if [[ -f "/mnt/user/software/rclone_backup_running" ]]; then
exit
else
touch /mnt/user/software/rclone_backup_running

fi

######## ENABLED  ############

rclone sync /mnt/user/dzs backup:dzs --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
rclone sync /mnt/user/nextcloud backup:nextcloud --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
rclone sync /mnt/user/public backup:public --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k

rclone sync /mnt/disks/sm961/iso backup:sm961/iso --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k

######## DISABLED  ############

# rclone sync /mnt/user/backup backup:backup --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
# rclone sync /mnt/user/movies_adults backup:movies_adults --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
# rclone sync /mnt/user/movies_kids backup:movies_kids --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
# rclone sync /mnt/user/movies_uhd backup:movies_uhd --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
# rclone sync /mnt/user/other_media backup:other_media --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
# rclone sync /mnt/user/software backup:software --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
# rclone sync /mnt/user/tv_recordings backup:tv_recordings --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k

rclone delete --min-age 365d backup:old

rm /mnt/user/software/rclone_backup_running

exit

rclone uninstall - runs at array stop to make sure everything is ready to start again at array start

 

#!/bin/bash

fusermount -uz /mnt/disks/rclone_vfs
fusermount -uz /mnt/disks/unionfs_movies_adults
fusermount -uz /mnt/disks/unionfs_movies_kids
fusermount -uz /mnt/disks/unionfs_movies_uhd
fusermount -uz /mnt/disks/unionfs_tv_adults
fusermount -uz /mnt/disks/unionfs_tv_kids
plugin remove rclone.plg
rm -rf /tmp/rclone

if [[ -f "/mnt/user/software/rclone_install_running" ]]; then
rm /mnt/user/software/rclone_install_running
echo "install running - removing dummy file"
else
echo "Passed: install already exited properly"
fi

if [[ -f "/mnt/user/software/rclone_upload" ]]; then
echo "upload running - removing dummy file"
rm /mnt/user/software/rclone_upload
else
echo "rclone upload already exited properly"
fi

if [[ -f "/mnt/user/software/rclone_backup_running" ]]; then
echo "backup running - removing dummy file"
rm /mnt/user/software/rclone_backup_running
else
echo "backup already exited properly"
fi

exit

 

Edited by DZMM


Thanks for the scripts! I'm going to move away from plexdrive and try this out today.

Edit: Could you post your rclone configs?

These are the lines I'm trying to understand so I can edit the script:

mkdir -p /mnt/disks/rclone_vfs
mkdir -p /mnt/disks/rclone_cache_old

Currently I have this sole crypt


[uploadcrypt]
type = crypt
remote = google:crypt
filename_encryption = standard
directory_name_encryption = true
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***


This cache reads both the Movies and Series folders inside the crypt:

[plexcache]
type = cache
remote = uploadcrypt:/
plex_url = http://192.168.0.223:32400
plex_username = slimshizn
plex_password = *** ENCRYPTED ***
chunk_size = 5M
info_age = 24h
chunk_total_size = 10G


Other than the 'plexcache' remote, I'm pretty much set up the same as in this guide: https://blog.laubacher.io/blog/unlimited-plex-server-on-unraid. So the gdrivecrypt containing /mnt/disks/pd/crypt would now not be in use.
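For what it's worth, my guess at what the gdrive_media_vfs: remote from DZMM's mount script would look like on my side - hypothetical, just my existing uploadcrypt fields pointed at the same location:

```
[gdrive_media_vfs]
type = crypt
remote = google:crypt
filename_encryption = standard
directory_name_encryption = true
password = *** ENCRYPTED ***
password2 = *** ENCRYPTED ***
```

If that's right, the vfs mount command would decrypt google:crypt directly, with no plexcache or plexdrive layer in between.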

Edited by slimshizn
Need help

