Plexdrive



23 minutes ago, Kaizac said:

 

Have you been able to test it already? I'm very pleased with these settings. Both movies and series start in Plex after around 3 seconds. Emby seems a bit slower with around 4 seconds. Still a big improvement from my previous start times. Curious what your start times are (without any caching beforehand).

Fast enough that I can't tell if it's local or remote. The main benefit is that the occasional pause at the start, which required a pause/play to resume, seems to have gone.


Trying something new: I'm going to keep both local and cloud media, since you never know what Google is going to do. The idea is to sync folders local to cloud: /mnt/users/Media/Movies --> /mnt/users/Media/Cloud/Movies and /mnt/users/Media/TVShows --> /mnt/users/Media/Cloud/Series. I think that will stop issues with Radarr acting up when it finds two media files in a folder.

45 minutes ago, slimshizn said:

Trying something new: I'm going to keep both local and cloud media, since you never know what Google is going to do. The idea is to sync folders local to cloud: /mnt/users/Media/Movies --> /mnt/users/Media/Cloud/Movies and /mnt/users/Media/TVShows --> /mnt/users/Media/Cloud/Series. I think that will stop issues with Radarr acting up when it finds two media files in a folder.

Radarr should never see two files in the folder if it's looking at the unionfs mount - when file1.mkv moves from the local upload folder to the cloud, it will always appear to Radarr that it was always in the same place in the unionfs mount.

2 minutes ago, DZMM said:

Radarr should never see two files in the folder if it's looking at the unionfs mount - when file1.mkv moves from the local upload folder to the cloud, it will always appear to Radarr that it was always in the same place in the unionfs mount.

Right, I'm talking about when there's an upgrade and it hasn't synced or removed the old media yet.

2 minutes ago, slimshizn said:

Right, I'm talking about when there's an upgrade and it hasn't synced or removed the old media yet.

 

Unionfs hides the old media immediately, creating behaviour just like a normal drive, which keeps Radarr happy.  The cleanup script only ensures the file is actually deleted rather than hidden, in case you rebuild your unionfs mount or mount differently and suddenly find all the hidden files reappearing.
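To make the hide-then-delete idea concrete, here's a minimal sketch that simulates the cleanup pass with plain directories. All paths are illustrative, and it assumes unionfs-fuse's convention of recording deletions of read-only files as "_HIDDEN~" markers under the RW branch's .unionfs folder:

```shell
#!/bin/bash
# Sketch: unionfs-fuse records deletions of RO-branch files as
# "<name>_HIDDEN~" markers under RW/.unionfs/.  This pass deletes the
# matching file from the RO branch so the hide becomes permanent.
RW=/tmp/demo_rw     # local read-write branch (placeholder)
RO=/tmp/demo_ro     # "cloud" read-only branch, simulated locally

mkdir -p "$RW/.unionfs/Movies" "$RO/Movies"
touch "$RO/Movies/old.mkv"                    # file already "uploaded"
touch "$RW/.unionfs/Movies/old.mkv_HIDDEN~"   # unionfs hid it

find "$RW/.unionfs" -name '*_HIDDEN~' | while read -r marker; do
    rel=${marker#"$RW/.unionfs/"}   # e.g. Movies/old.mkv_HIDDEN~
    rel=${rel%_HIDDEN~}             # e.g. Movies/old.mkv
    rm -f "$RO/$rel"                # really delete the hidden file
    rm -f "$marker"                 # marker no longer needed
done
```

In a real setup the `rm -f "$RO/$rel"` step would be an `rclone delete` against the cloud remote instead of a local delete.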


Does it hide the old media when it's actually transferred over to the cloud? I just had something "upgraded" and it never removed or deleted the old media/created a unionfs folder to "hide" the old one.

Edit: Just realized that radarr never actually saw the old file that was there. So that's probably why it didn't happen.

4 minutes ago, slimshizn said:

Does it hide the old media when it's actually transferred over to the cloud? I just had something "upgraded" and it never removed or deleted the old media/created a unionfs folder to "hide" the old one.

 

Yes - if Radarr tells unionfs to delete a file from the unionfs mount that is in the local RW folder, it does it straight away. If it's a file that's already been uploaded to the RO folder, it hides it, so Radarr (and everything else) thinks it's been deleted.

1 minute ago, DZMM said:

 

Yes - if Radarr tells unionfs to delete a file from the unionfs mount that is in the local RW folder, it does it straight away. If it's a file that's already been uploaded to the RO folder, it hides it, so Radarr (and everything else) thinks it's been deleted.

So media in the RO (cloud) folder will be hidden from Plex as well, correct?

19 minutes ago, slimshizn said:

So media in the RO (cloud) folder will be hidden from Plex as well, correct?

Yes, if it has been deleted.

 

I think if you rename an RO file via unionfs, it creates a new file in the RW folder to be uploaded and hides the old file in the RO folder, i.e. you have to upload it all again. So it's best to make sure you're happy with everything before uploading, which is why it's probably best not to run your upload script too often - let things 'settle down' first.

 

Edit: actually I think it does the same as below - it hides the old name, and when an app opens the newly named file it just opens the old file.  I do think there's a risk that when the upload script comes around it might download the old copy so it can upload the new copy, i.e. wasted effort.

 

Ditto with moving.  I think moving is safe if you don't use the cleanup script, as I think unionfs will hide the old location and pretend the file is in the new location.  However, the cleanup script will mess things up, I think, as it only works on files in the original folder location and doesn't read the unionfs data in the _HIDDEN file that gives the new location.

 


I've updated my script post with a few updates I've made:

  1. new rclone mount settings to improve start times and reduce API calls
  2. I run the uninstall script at array start as well, in case of an unclean shutdown
  3. upload script now excludes .unionfs/ folder ( @slimshizn I think this might be your problem)
  4. upload script alternates between cache and one array drive at a time, to try and reduce pointless transfers to the array and also multiple array drives spinning up at the same time for the 4 transfers
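The alternation in point 4 can be sketched like this. The disk list and state-file path are made-up placeholders, not the actual script; the idea is that each pass reads a rotating index from a state file so only one drive needs to spin up:

```shell
#!/bin/bash
# Sketch: rotate through cache plus one array disk per upload pass so
# the other drives stay spun down.  Disk list and state file are
# illustrative placeholders - adjust for your own server.
DISKS=(/mnt/cache /mnt/disk1 /mnt/disk2 /mnt/disk3)
STATE=/tmp/upload_disk_index

i=$(cat "$STATE" 2>/dev/null || echo 0)      # which disk is next?
SRC="${DISKS[$((i % ${#DISKS[@]}))]}"        # pick it
echo "this pass uploads from: $SRC"
# ... rclone move "$SRC/rclone_upload" remote: would go here ...
echo $(( (i + 1) % ${#DISKS[@]} )) > "$STATE"   # advance for next run
```

Each scheduled run picks up where the previous one left off, so over four runs every location gets a turn.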
On 8/3/2018 at 2:58 PM, DZMM said:

 

 

Edit: 08/10/2018 - Updated rclone mount, upload script, uninstall script

 

I think I've managed to crack my slow write speeds to my unionfs mounts.  I tried ud-->cache and that didn't speed things up, and neither did moving my mounts to /mnt/user.

 

What worked for me was putting everything inside a /unionfs mapping for my dockers - not just the actual unionfs mount, but also the download files that needed to be imported.  I've also mounted my local movie folders (all tv in the cloud) in /unionfs, so that any download files that need moving there shift quickly as well (I haven't tested if hardlinking works yet).


Before I was getting around 5MB/s and now I'm getting 50MB/s+ when drives are busy, and over 100MB/s at times - more than fast enough to stop bottlenecks as I download at max 25MB/s.

 

Sharing below what I've got in case it helps anyone else.

 

 


I do like this idea of activating cache first and then moving on to each disk - smart and efficient. I'll add the --exclude unionfs section and see how that helps.

I see your plugin location is different from mine. So would this location be correct? /tmp/user.scripts/tmpScripts/Rclone_Upload/script*


Also, every time you run the rm, is the mount still going to stay active? I thought it had to be running in the background?

47 minutes ago, slimshizn said:

I do like this idea of activating cache first and then moving on to each disk - smart and efficient. I'll add the --exclude unionfs section and see how that helps.

 

I'm amazed how much difference it's made to my spinups and I wish I'd thought of it before

 

50 minutes ago, slimshizn said:

I see your plugin location is different from mine. So would this location be correct? /tmp/user.scripts/tmpScripts/Rclone_Upload/script*

 

I think that's where it keeps running logs etc. in memory.  The actual scripts are at /boot/config/plugins/user.scripts/scripts/, which is the path you should add to dockers when you call the script - I've started doing this again as I'm confident it's all working now.

51 minutes ago, slimshizn said:

Also, every time you run the rm, is the mount still going to stay active? I thought it had to be running in the background?

I do a check at the start of the script to see if an instance is already running, if rclone is already mounted, etc. Follow the flow and you'll see how it works.

 

2 hours ago, DZMM said:

 

I'm amazed how much difference it's made to my spinups and I wish I'd thought of it before

 

 

I think that's where it keeps running logs etc. in memory.  The actual scripts are at /boot/config/plugins/user.scripts/scripts/, which is the path you should add to dockers when you call the script - I've started doing this again as I'm confident it's all working now.

I do a check at the start of the script to see if an instance is already running, if rclone is already mounted, etc. Follow the flow and you'll see how it works.

 

Right, okay, so I don't want to remove /boot/config/plugins/user.scripts/scripts/Rclone_Upload/script - so how would I "remove" it without actually destroying the script?

44 minutes ago, slimshizn said:

Right, okay, so I don't want to remove /boot/config/plugins/user.scripts/scripts/Rclone_Upload/script - so how would I "remove" it without actually destroying the script?

 

Ahh, I just realised what you mean.  I check whether the script is running by creating a temporary dummy file when it starts, not by querying the actual script file - I borrowed the idea from the mountcheck file in the original script.  I have a share /mnt/user/mount_rclone on my server, so I dump it there - you were getting the errors before because you need to choose a real location on your own server.

 

#######  Check if script already running  ##########
if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_install_running
fi
#######  End Check if script already running  ##########
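For what it's worth, the same guard can be built on flock(1), which sidesteps the stale-marker problem: the kernel releases the lock when the process dies, so an unclean shutdown can't leave a "running" marker behind. A sketch (the lock-file path is just an example):

```shell
#!/bin/bash
# Sketch: single-instance guard with flock(1) instead of a dummy file.
# The lock is dropped automatically when the script exits or is killed,
# so no cleanup pass at array start is needed.
LOCKFILE=/tmp/rclone_upload.lock   # example path, adjust to taste

exec 9>"$LOCKFILE"                 # open fd 9 on the lock file
if ! flock -n 9; then              # try to take the lock, don't block
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting, script already running."
    exit 0
fi

echo "lock acquired - upload work would run here"
# lock released automatically when the script exits
```

A second copy of the script started while the first is running fails the `flock -n` call and exits immediately.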

 


/tmp/user.scripts/tmpScripts/Rclone_Upload/script* should work then, if that's the case. I'm not sure how you have your setup with a dummy file being created, but before I started entering all the information I wanted to see if it was correct. I know that scripts go to tmp after they are started and are run from there, so with you using a dummy file I could just remove the "rm" portion and just check if that script is being run at the location I posted.

Edit: I could also remove the "touch" command as well then right?


 

Quote

if [[ -f "/tmp/user.scripts/tmpScripts/Rclone_Upload/script" ]]; then
echo "upload running - removing dummy file"
rm /tmp/user.scripts/tmpScripts/Rclone_Upload/script
else
echo "rclone upload already exited properly"
fi

Assuming this would be correct for the unmount then as well.

23 minutes ago, slimshizn said:

/tmp/user.scripts/tmpScripts/Rclone_Upload/script* should work then, if that's the case. I'm not sure how you have your setup with a dummy file being created, but before I started entering all the information I wanted to see if it was correct. I know that scripts go to tmp after they are started and are run from there, so with you using a dummy file I could just remove the "rm" portion and just check if that script is being run at the location I posted.

Edit: I could also remove the "touch" command as well then right?


 

Assuming this would be correct for the unmount then as well.

 

'touch' just creates a blank file at the start, which is removed when the script finishes.

 

I don't think checking the tmp directory will work, as scripts don't seem to be removed straight away when they've finished - I've just looked in my /tmp/user.scripts/tmpScripts directory and there are residual files still there for scripts that have ended.

20 hours ago, DZMM said:

I've updated my script post with a few updates I've made:

  1. new rclone mount settings to improve start times and reduce API calls
  2. I run the uninstall script at array start as well, in case of an unclean shutdown
  3. upload script now excludes .unionfs/ folder ( @slimshizn I think this might be your problem)
  4. upload script alternates between cache and one array drive at a time, to try and reduce pointless transfers to the array and also multiple array drives spinning up at the same time for the 4 transfers

 

 

Thanks for the updates. Some questions:

- I can't see what you've changed on the rclone mount. I've put your previous version next to the new one but I can't see a difference. Could you direct me to where/what you changed?

- Did you just make extra scripts in User Scripts which you schedule at array start/stop? Won't the uninstall and install scripts conflict when both run at array start?

- For the upload script, why did you go with checkers 10 and transfers 4? The default is 8 and 4, but I don't really understand what checkers do and what raising the number accomplishes. For transfers 4, I'm wondering why rclone has that as the default. It seems to me that when the script suddenly stops during an upload you only lose the progress of one transfer; if you transfer 4 at the same time, all 4 are wasted. The time in transit is also longer with 4, which seems undesirable to me. But I don't know if multiple transfers give some benefit I can't see.

- For the cleanup script, why do you look for .unionfs? I've noticed this is often missing, which makes the cleanup fail on the gdrive part (local still works).

On 8/11/2018 at 8:06 AM, Kaizac said:

Thanks for the updates. Some questions:

- I can't see what you've changed on the rclone mount. I've put your previous version next to the new one but I can't see a difference. Could you direct me to where/what you changed?

- Did you just make extra scripts in User Scripts which you schedule at array start/stop? Won't the uninstall and install scripts conflict when both run at array start?

- For the upload script, why did you go with checkers 10 and transfers 4? The default is 8 and 4, but I don't really understand what checkers do and what raising the number accomplishes. For transfers 4, I'm wondering why rclone has that as the default. It seems to me that when the script suddenly stops during an upload you only lose the progress of one transfer; if you transfer 4 at the same time, all 4 are wasted. The time in transit is also longer with 4, which seems undesirable to me. But I don't know if multiple transfers give some benefit I can't see.

- For the cleanup script, why do you look for .unionfs? I've noticed this is often missing, which makes the cleanup fail on the gdrive part (local still works).

- I just updated my main post with my latest scripts

- Sorry, I wasn't clear.  My install script runs every 5 mins to automatically remount if there's a problem.  Because of the 5-min delay it runs after the 'uninstall' script that runs at array start.

- Checkers 10 was just a random number, to be honest.  I kept it low to start with as I had RAM problems when I first started using rclone, but now that you've reminded me I'm going to try increasing it to a much higher number to ensure I hit the bwlimit 247

- if there are no .unionfs files or folders, yes it throws up an 'error', but the script still works fine
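As an aside, the flags under discussion compose into an upload command along these lines. This is only a sketch: the remote name "gdrive:", the paths, the bandwidth cap, and the --min-age value are placeholders, not anyone's actual settings:

```shell
#!/bin/bash
# Sketch of an rclone upload command with the flags discussed above.
# --checkers controls how many files are compared in parallel (cheap,
# metadata-only work); --transfers controls simultaneous uploads (more
# in flight means more progress lost if the script is interrupted).
cmd=(rclone move /mnt/user/rclone_upload gdrive:media
     --exclude '.unionfs/**'    # skip unionfs hide markers
     --checkers 16
     --transfers 4
     --bwlimit 8M               # cap upload bandwidth
     --min-age 30m              # let new files 'settle down' first
     -v)
echo "${cmd[@]}"
```

Building the command as an array like this keeps the quoting of patterns such as '.unionfs/**' intact when it is eventually executed with "${cmd[@]}".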


@DZMM Thanks a lot for all the info,  I've set up the scripts and they work really well.  I was surprised how few modifications were required to get things working. 

 

Have an odd problem; wondering if anyone has encountered it. It's to do with Direct Play on Plex. Direct Stream works fine, either because the file is an MKV or if I turn off Direct Play in the app settings.  However, when direct playing, all I get is a black screen with the loading icon. I've played with various settings for buffer and chunk size and limit; it makes no difference. In a nutshell, to play files from gdrive I have to Direct Stream - any attempt to Direct Play them causes a black screen with an infinite load - yet the exact same files Direct Play when accessed locally through the exact same Plex server.  Client: Apple TV 4K. All files are MP4 with AC3 and mov_text - compatible with Direct Play.

 

Please let me know if anyone has encountered this or have found a solution. Thanks. 

2 hours ago, Letchemanen Vassou said:

@DZMM Thanks a lot for all the info,  I've set up the scripts and they work really well.  I was surprised how few modifications were required to get things working. 

 

Have an odd problem; wondering if anyone has encountered it. It's to do with Direct Play on Plex. Direct Stream works fine, either because the file is an MKV or if I turn off Direct Play in the app settings.  However, when direct playing, all I get is a black screen with the loading icon. I've played with various settings for buffer and chunk size and limit; it makes no difference. In a nutshell, to play files from gdrive I have to Direct Stream - any attempt to Direct Play them causes a black screen with an infinite load - yet the exact same files Direct Play when accessed locally through the exact same Plex server.  Client: Apple TV 4K. All files are MP4 with AC3 and mov_text - compatible with Direct Play.

 

Please let me know if anyone has encountered this or have found a solution. Thanks. 

Hmm, I've never come across that.  The only problem I've come across is sometimes with the web app: if I start, stop, and then immediately restart a file, it errors out.

 

I'd post in the rclone forums as that sounds odd.


Today I finally managed to get Plexdrive 5 working quite well, with a little help from cross-referencing the Cloudbox project and seeing how they did things. I had the rclone plugin working, but it just wasn't working all that well - I would get freezes, sometimes things just wouldn't play, and the kids would complain.

 

First thing I did was to create a new project called plexdrive at https://console.developers.google.com to get a client id and client secret. You can follow this guide for reference on how to do it: https://github.com/Cloudbox/Cloudbox/wiki/Google-Drive-API-Client-ID-and-Client-Secret Once you have your client id and secret, keep them handy in a notepad file or something for later.

 

Next create a folder in appdata called plexdrive, so you will have a path of /mnt/user/appdata/plexdrive

 

Download plexdrive 5 from https://github.com/dweidenfeld/plexdrive/releases/download/5.0.0/plexdrive-linux-amd64, rename the file to just plexdrive, and place it in /mnt/user/appdata/plexdrive.

 

Open a terminal to Unraid and run chmod -R 777 /mnt/user/appdata/plexdrive/ (I'm not sure if 777 is OK to use; maybe 775 would be better).

 

Now we will run plexdrive for the first time using:

/mnt/user/appdata/plexdrive/plexdrive mount -v 3 --refresh-interval=1m --chunk-check-threads=8 --chunk-load-threads=8 --chunk-load-ahead=4 --max-chunks=100 --fuse-options=allow_other,read_only --config=/mnt/user/appdata/plexdrive --cache-file=/mnt/user/appdata/plexdrive/cache.bolt /mnt/cache/plexdrive

My mount point in Unraid is /mnt/cache/plexdrive; change this in the above command to suit your needs, but it must not point to an array disk or a share that is on the array. Unraid does not like that and will throw a bunch of errors in the logs.

 

On first run it will ask you for your client id and secret, and it will give you a link to copy from the terminal into your browser to authorise it; copy the auth code back into the terminal to complete. Plexdrive will then do its thing and mount to the location you set above. When you see "First cache build process started...", ctrl-c to stop plexdrive.

 

Next create a user script (I called mine plexdrive), copy the same mount command we used above into the user script, and save. Then click 'run script in background' and your mount should be working. I have set this to run on array start, but I haven't tested that yet; my content loads fast and plays really well.
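A user script along those lines might look like the following sketch. The binary path, mountpoint, and flags match the post above; the mountpoint check is an addition of mine so the script is safe to run repeatedly (e.g. on a schedule as well as at array start):

```shell
#!/bin/bash
# Sketch of a plexdrive user script: start the mount only if it isn't
# already active.  Paths match the post above - adjust for your system.
PLEXDRIVE=/mnt/user/appdata/plexdrive/plexdrive
MNT=/mnt/cache/plexdrive

if mountpoint -q "$MNT" 2>/dev/null; then
    echo "plexdrive already mounted at $MNT"
elif [ -x "$PLEXDRIVE" ]; then
    # same flags as the first-run command in the post
    "$PLEXDRIVE" mount -v 3 --refresh-interval=1m \
        --chunk-check-threads=8 --chunk-load-threads=8 \
        --chunk-load-ahead=4 --max-chunks=100 \
        --fuse-options=allow_other,read_only \
        --config=/mnt/user/appdata/plexdrive \
        --cache-file=/mnt/user/appdata/plexdrive/cache.bolt \
        "$MNT" &
    echo "plexdrive mount started"
else
    echo "plexdrive binary not found at $PLEXDRIVE"
fi
```

Because the auth token was saved to the config folder on the first interactive run, subsequent runs mount without prompting.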

