Video Preloader (avoids HDD spinup latency when starting a Movie or Episode through Plex, Jellyfin or Emby)



1 hour ago, DJ-BrianC said:

My drives are rarely spun down; there's usually activity going on.

Click on the "status" icon; this spins down the disk. But then I don't really understand why you preload at all if you don't use spindown. Since the disk is already spinning, a movie should start quickly anyway.
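
By the way, the same spindown can be triggered from the shell with Unraid's mdcmd helper (the disk number below is just an example):

# spin down array disk 1 manually
/usr/local/sbin/mdcmd spindown 1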

  • 4 weeks later...
  • 3 weeks later...

I tried to implement this script and ran into a couple of issues.

  • I have about four .srt files for every film/episode. Since the script doesn't take the filename into account, it still preloads tons of subtitles, usually from every episode in a season, and hence takes very long to finish. My solution was to disable .srt preloading and move all .srt files to the SSD.
  • The script also takes files on the SSD pool into account. This makes it basically useless in my case, where I store new films and episodes on the SSD pool. I edited the script at lines 89/90 to only consider files from the array (user0); a fuller sketch of the idea follows this list:
      video_files+=("${file/\/user0\//\/user\/}")
    done < <(find "${video_paths[@]/\/user\//\/user0\/}" -not -path '*/.*' -size +"$video_min_size"c -regextype posix-extended -regex ".*\.($video_ext)" -printf "%T@ %p\0")

     

  • It seems my RAM is quite slow: after multiple runs, the longest time needed to fetch a preloaded file from RAM was 317 ms (so I raised the threshold to 0.330 from the default 0.150).
    It's even worse sometimes: fetching a file from RAM can take up to 1.25 s, for whatever reason.
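
For anyone who wants to make the same change, here is a minimal, self-contained sketch of the idea (the library path and the 60 MB head size are placeholders, and the real script additionally sorts by mtime and filters by extension): search below /mnt/user0, which only sees the array disks, then preload each hit through its /mnt/user path so the cached pages match the path Plex actually reads.

# sketch: preload the first 60 MB of every large video found on the array only
while IFS= read -r -d '' file; do
  # rewrite /mnt/user0/... to /mnt/user/... before reading
  head -c 60000000 "${file/\/mnt\/user0\//\/mnt\/user\/}" > /dev/null
done < <(find /mnt/user0/Movies -type f -size +1000000000c -print0)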
Edited by madejackson

With 6.12 and its ZFS introduction, this script could get much more interesting.

In theory, it should be possible to cache the beginning of every single movie and episode in L2ARC, i.e. on an SSD cache. That way we could use much more space for caching than just 50% of free RAM.

 

As soon as 6.12 is released I'm gonna try it and see if it's possible in practice.

  • 3 months later...
On 9/16/2023 at 6:54 AM, mgutt said:

This shouldn't work, as the movie itself has a completely different inode ID on a completely different device (a disk in the array).

Not sure exactly what you're saying, but yeah, you cannot have one single L2ARC for all ZFS pools.

You can, however, partition an SSD into multiple smaller L2ARC caches: one partition for every ZFS disk. Of course this wastes some space, but that's how L2ARC works right now, and I don't think that is going to change anytime soon.
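
For anyone trying this, attaching a partition as L2ARC is a single command per pool. A rough sketch, assuming single-disk pools named disk1 and disk2 and partitions carved out of one SSD at /dev/sdx (all of these names are placeholders):

# attach one L2ARC partition per pool
zpool add disk1 cache /dev/sdx1
zpool add disk2 cache /dev/sdx2
# the partitions should now appear under a "cache" vdev
zpool status disk1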

That said, there is a commit on GitHub to make L2ARC unified across multiple pools.

Edited by madejackson

Am I missing where it was said how the script should be configured to run? Meaning manually, or can it be scheduled, and if so, how often? (Asking since I saw it mentioned that it doesn't run in the background.)

 

Great work, by the way. I just added another 128 GB :)

Edited by morreale
  • 2 months later...
On 9/15/2023 at 11:14 PM, maxse said:

Anyone know if it's possible to cache this to an SSD instead of RAM? That would be amazing, as I currently have a rather large SSD. I read through most of the thread and didn't see that this was possible. Unless I missed it?

Hi folks, any updates on this?

  • 2 weeks later...

Stupid question on this one, but if the Plex server and the NAS are two separate machines, would it be beneficial to run this on the NAS?

I ask because I have much more RAM in my NAS than in my Plex server.


Second question, and trying not to be a choosing beggar here, but has anyone figured out an elegant way to limit the scope of media for the preloader? For example, On Deck for all users plus 5 episodes, instead of the whole Movies/TV library folder.
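
Not a complete answer, but Plex does expose the On Deck list over its HTTP API, so a small wrapper could feed only those file paths to the preloader. A rough sketch (the server address and X-Plex-Token are placeholders, and a token only covers a single user):

# print the file paths of all On Deck items
curl -s "http://192.168.1.10:32400/library/onDeck?X-Plex-Token=$PLEX_TOKEN" \
  | grep -oP 'file="\K[^"]+'

Paths containing XML-escaped characters (e.g. &amp;) would still need unescaping before handing them to the preloader.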

  • 2 weeks later...
  • 2 months later...

I just wanted to comment and say that this script is brilliant, and it taught me a lot about how caching works in general. I was originally very confused reading through the script, since nothing really seems to be "writing the file to cache". I thought there would be some cache handle or an API, but all I found was:

 

seconds=$( { time head -c "$preload_head_size" "$file" >/dev/null; } 2>&1 )

 

Surely, I thought, this can't be it, since it just reads the first $preload_head_size bytes into /dev/null. Eventually I realized that the act of reading the header at all causes the operating system to immediately pull it into its cache on its own.

 

Thus @onesbug, I believe the answer to your question is that it already works for Jellyfin. In fact, if I've understood this correctly, it is platform agnostic. The OP is reading the first N bytes of all the newest files into the void, and that act itself causes the operating system to "store" them in memory. The operating system doesn't know we're trying to preload video files. All it knows is that you've read "something", and it keeps a copy of that "something" in memory in case you want to read it again. This is also probably why the script isn't "reserving memory": the contents stay cached only as long as nothing else is read that needs that space.
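
You can watch this behavior directly by flushing the page cache and timing the same read twice (the file path is a placeholder, and dropping the caches requires root):

# cold read vs. warm read: the second head is served from the page cache
sync && echo 3 > /proc/sys/vm/drop_caches
time head -c 60000000 /mnt/user/Movies/example.mkv > /dev/null   # slow: comes from disk
time head -c 60000000 /mnt/user/Movies/example.mkv > /dev/null   # fast: comes from RAM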

 

You simply need to omit the entire section:

# check if paths are used in docker containers

 

I actually don't know why you need to know about the docker container. It seems to be largely a sanity check, as none of the information from 'docker container inspect' is used later in the script.

 

In fact, if you want to get this to work with cache pools using SSDs (I suspect the OP originally wrote this script before cache pools were introduced), you can change the first part of the script like so:

 

video_paths=(
  "/mnt/user0/Movie"
  "/mnt/user0/TV"
)

and modify the sanity check as:

# check if paths are used in docker containers
if docker info > /dev/null 2>&1; then
  # get docker mounts of all running containers
  # shellcheck disable=SC2016
  docker_mounts=$(docker ps -q | xargs docker container inspect -f '{{$id := .Id}}{{range .Mounts}}{{if .Source}}{{printf $id}}:{{.Source}}{{println}}{{end}}{{end}}' | grep -v -e "^$")
#  for path in "${video_paths[@]}"; do
#    if [[ $docker_mounts != *"$path"* ]]; then
#      /usr/local/emhttp/webGui/scripts/notify -i alert -s "Plex Preloader failed!" -d "$path is not used by a docker container!"
#      exit 1
#    fi
#  done
fi

 

Otherwise the script will complain that /mnt/user0 (the array-only path, which excludes the cache pool) is not part of any docker container (which makes sense, as it wouldn't be).

 

Hope this helps anyone else.

  • 1 month later...

I just added this script and am looking forward to using it. Thank you so much for the good work @mgutt. I did want to confirm: after running the script, the RAM usage didn't actually go up a whole lot on the Dashboard page (just to about 14%). I have 128 GB of RAM, and I'm hoping to have as much of it filled as possible, probably close to 80 GB. As the script runs, the RAM starts filling, but then the usage slowly drops back down. Is this expected behavior?

 

How often is the script meant to be run? I set it to once daily for now to see how it goes. I noticed that when the script runs it also seems to clear up the RAM; or could it be filling the RAM again with the same data, if that's even possible?
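
One way to check where the preloaded data actually went: as explained earlier in the thread, it lives in the kernel page cache, which the Dashboard typically counts separately from "used" memory, so from a terminal it shows under "buff/cache":

# preloaded data appears under "buff/cache", not "used"
free -h
grep -E '^(MemFree|MemAvailable|Cached)' /proc/meminfo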

 

Thanks again.

