mgutt (author) | Posted April 13, 2023

1 hour ago, DJ-BrianC said: "My drives are rarely spun down; there's usually activity going on."

Click the "status" icon; that spins down the disk. But I still don't really understand why you preload if you don't use spindown. If the disk is already spinning, the start time of a movie should be relatively low.
DJ-BrianC | Posted April 13, 2023

A 2-second startup is better than a 10-second startup, right? I just figured RAM is a hell of a lot quicker than the drive. Even if the drive isn't spun down, you can still save a considerable amount of time.
ezek1el3000 | Posted May 5, 2023

The script works flawlessly. Thank you. Is the Plex "On Deck" feature still planned?
madejackson | Posted May 26, 2023 (edited)

I tried to implement this script and ran into a couple of issues.

I have about 4 srt files for every film/episode. Because the script doesn't take the filename into account, it still loads tons of subtitles, usually from all episodes in a season, and therefore takes very long to finish. My solution is to disable srt preloading and move all srt files to SSD.

The script also takes files on the SSD pool into account. This makes the script basically useless in my case, where I store new films and episodes on the SSD pool. I edited the script on lines 89/90 to only consider files on the array (user0):

```shell
    video_files+=("${file/\/user0\//\/user\/}")
done < <(find "${video_paths[@]/\/user\//\/user0\/}" -not -path '*/.*' -size +"$video_min_size"c -regextype posix-extended -regex ".*\.($video_ext)" -printf "%T@ %p\0")
```

It also seems my RAM is quite slow: after multiple runs, the longest time needed to fetch a preloaded file from RAM was 317 ms (so I increased the threshold to 0.330 instead of the default 0.150). It's even worse: sometimes it takes up to 1.25 s to preload a file from RAM, for whatever reason.

Edited May 26, 2023 by madejackson
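For anyone puzzled by the two substitutions in that snippet, here is a minimal sketch (the path is made up for illustration) of what bash's `${var/pattern/replacement}` expansion does there: find is pointed at /mnt/user0 (array only), and each result is mapped back to the /mnt/user share path that Plex actually reads.

```shell
#!/bin/bash
# Hypothetical path, for illustration only: map an array-only path (user0)
# back to the user share path, exactly as the modified script lines do.
file="/mnt/user0/Movies/Example (2020)/Example.mkv"

# Replace the first occurrence of "/user0/" with "/user/"
mapped="${file/\/user0\//\/user\/}"

echo "$mapped"   # -> /mnt/user/Movies/Example (2020)/Example.mkv
```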
madejackson | Posted May 31, 2023

With 6.12 and its introduction of ZFS, this script could get much more interesting. In theory, it should be possible to cache the beginning of every single movie and episode in L2ARC, i.e. on an SSD cache. So we should be able to use much more space as cache than just 50% of free RAM. As soon as 6.12 is released, I'm gonna try it and see whether it works in practice.
maxse | Posted September 16, 2023

Anyone know if it's possible to cache this to an SSD instead of RAM? That would be amazing, as I currently have a rather large SSD. I read through most of the thread and I didn't see that this was possible? Unless I missed it?
mgutt (author) | Posted September 16, 2023

On 5/31/2023 at 9:07 AM, madejackson said: "In theory, it should be possible to cache the beginning of every single movie and episode in L2ARC"

This shouldn't work, as the movie itself has a completely different inode ID on a completely different device (a disk in the array).
madejackson | Posted September 18, 2023 (edited)

On 9/16/2023 at 6:54 AM, mgutt said: "This shouldn't work, as the movie itself has a completely different inode ID on a completely different device (a disk in the array)."

Not sure exactly what you're saying, but yeah, you cannot have one single L2ARC for all ZFS pools. You can, however, partition an SSD into multiple smaller L2ARC caches: one partition for every ZFS disk. Of course this wastes some space, but that's how L2ARC works right now, and I don't think that is going to change anytime soon, even though there is a commit on GitHub to make L2ARC shareable between multiple pools.

Edited September 18, 2023 by madejackson
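A sketch of the per-pool setup described above, assuming each array disk is its own single-disk ZFS pool named disk1, disk2, etc., and the SSD is /dev/nvme0n1 (all pool and device names here are placeholders). An L2ARC cache device belongs to exactly one pool, hence one partition per pool:

```shell
# Attach one SSD partition as an L2ARC cache device to each single-disk pool.
# Pool and device names are placeholders; adjust for your system.
zpool add disk1 cache /dev/nvme0n1p1
zpool add disk2 cache /dev/nvme0n1p2

# Verify the device appears under "cache" in the pool layout:
zpool status disk1
```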
morreale | Posted September 23, 2023 (edited)

Am I missing where it was said how the script should be run? Manually, or can it be scheduled, and if so, how often? (Asking since I saw it mentioned that it doesn't run in the background.) Great work, by the way. I just added another 128 GB.

Edited September 28, 2023 by morreale
maxse | Posted December 9, 2023

On 9/15/2023 at 11:14 PM, maxse said: "Anyone know if it's possible to cache this to an SSD instead of RAM? That would be amazing as I have a rather large SSD drive currently. I read through most of the thread and I didn't see that this was possible? Unless I missed it?"

Hi folks, any updates on this?
intertet | Posted December 22, 2023

Stupid question on this one, but if the Plex server and the NAS are two separate machines, would it be beneficial to run this on the NAS? I ask because I have much more RAM on my NAS than on my Plex server.

Second question, and trying not to be a choosy beggar here, but has anyone figured out an elegant way to limit the scope of media for the preloader? For example, "On Deck" for all users plus the next 5 episodes, instead of the whole Movies/TV library folder.
onesbug | Posted January 5

The script is excellent, but unfortunately I am using Jellyfin. I would like to know if this script supports Jellyfin, or if there are any plans to support Jellyfin on Synology?

Edited January 5 by onesbug
ronia | Posted March 14

I just wanted to comment and say that this script is brilliant, and it taught me a lot about how caching works in general. I was originally very confused as I read through the script, since nothing really seems to be "writing the file to cache". I thought there would be some cache handle or an API, but all I found was:

```shell
seconds=$( { time head -c "$preload_head_size" "$file" >/dev/null; } 2>&1 )
```

Surely, I thought, this can't be it, since it's just reading the first $preload_head_size bytes into /dev/null. Eventually I realized that the act of reading the header at all causes the operating system to immediately read it into cache on its own.

Thus @onesbug, I believe the answer to your question is that it already works for Jellyfin. In fact, if I've understood this correctly, the approach is platform agnostic. The OP is reading the newest files' first N bytes into the void, and that act itself causes the operating system to store them in memory. The operating system doesn't know we're trying to preload video files; all it knows is that you've read "something", and it keeps a copy of it in memory just in case you want to read that "something" again. This is also probably why the script isn't "reserving memory": the contents stay cached only as long as nothing else is read that needs the space.

You simply need to omit the entire section starting with:

```shell
# check if paths are used in docker containers
```

I actually don't know why you need to know about the docker container. It seems to be largely a sanity check, as none of the information from 'docker container inspect' is used later in the script.
In fact, if you want to get this to work with cache pools using SSDs (I suspect the OP originally wrote this script before cache pools were introduced), you can change the first part of the script like so:

```shell
video_paths=(
    "/mnt/user0/Movie"
    "/mnt/user0/TV"
)
```

and modify the sanity check as follows:

```shell
# check if paths are used in docker containers
if docker info > /dev/null 2>&1; then
    # get docker mounts of all running containers
    # shellcheck disable=SC2016
    docker_mounts=$(docker ps -q | xargs docker container inspect -f '{{$id := .Id}}{{range .Mounts}}{{if .Source}}{{printf $id}}:{{.Source}}{{println}}{{end}}{{end}}' | grep -v -e "^$")
    # for path in "${video_paths[@]}"; do
    #     if [[ $docker_mounts != *"$path"* ]]; then
    #         /usr/local/emhttp/webGui/scripts/notify -i alert -s "Plex Preloader failed!" -d "$path is not used by a docker container!"
    #         exit 1
    #     fi
    # done
fi
```

This is needed because the script will otherwise complain that /mnt/user0 (the non-cache path) is not part of any docker container (which makes sense, as it wouldn't be). Hope this helps someone else.
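The page-cache behaviour described above is easy to observe outside the script. A minimal sketch, using a throwaway temp file instead of a real movie (the 64M size is arbitrary): read a file's head once, then time a second read of the same bytes, which is served from RAM.

```shell
#!/bin/bash
# Demonstrate that merely reading a file warms the Linux page cache:
# the second read of the same bytes comes from RAM, not the disk.
f=$(mktemp)
head -c 64M /dev/urandom > "$f"

# First read: may hit the disk (or cache left over from writing).
time head -c 64M "$f" > /dev/null

# Second read: served from the page cache, typically far faster.
time head -c 64M "$f" > /dev/null

rm -f "$f"
```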
i_max | Posted April 19

I just added this script and am looking forward to using it. Thank you so much for the good work @mgutt.

I did want to confirm: after running the script, the RAM usage didn't actually go up a whole lot on the Dashboard page (just to about 14%). I have 128 GB of RAM, and I'm hoping to have as much of it filled as possible, probably close to 80 GB. But as the script runs, the RAM starts filling and then the usage slowly drops afterwards. Is this expected behavior?

How often is the script meant to be run? I set it to once daily for now to see how it goes. I also noticed that when the script runs, it seems to clear up the RAM. Or could it fill the RAM again with the same data, if that's even possible? Thanks again.
FlyingTexan | Posted July 30

A few months and no reply; I'm guessing this is a dead project? I'm also wondering why to use RAM vs. an NVMe drive. I have a spare 2 TB NVMe drive sitting in my system that I'd use.
nick5429 | Posted September 6 (edited)

I have a proof-of-concept script here which pulls info from Plex's "On Deck" API and remaps the file paths from my docker's paths back into my Unraid filesystem paths. I haven't integrated any of this into the actual preloading script.

You'll need your Plex API token; a "temporary" token is pretty easy and straightforward to get, see https://support.plex.tv/articles/204059436-finding-an-authentication-token-x-plex-token/. IIRC, these temporary tokens still last "a while", long enough to be useful beyond just development.

```shell
#!/bin/bash

# Plex server details
PLEX_URL="http://192.168.1.10:32400"
API_KEY=""

# Define the path mappings
declare -A path_map=(
    ["/Movies/"]="/mnt/user/Movies/"
    ["/TV-Kids/"]="/mnt/user/TV-Kids/"
    ["/TV-CurrentShows/"]="/mnt/user/TV-CurrentShows/"
    # Add more mappings as needed
)

# Define the video file extensions
video_ext='avi|mkv|mov|mp4|mpeg'

# Function to remap file paths
remap_path() {
    local original_path="$1"
    for prefix in "${!path_map[@]}"; do
        if [[ "$original_path" == "$prefix"* ]]; then
            # Replace the prefix with the mapped path
            echo "${original_path/$prefix/${path_map[$prefix]}}"
            return
        fi
    done
    # If no mapping found, return the original path
    echo "$original_path"
}

# Function to get "On Deck" items from Plex and process file paths
get_ondeck() {
    local plex_url="$1"
    local api_key="$2"

    # Query the On Deck list from Plex
    local plex_on_deck_url="$plex_url/library/onDeck?X-Plex-Token=$api_key"

    # Get the list of On Deck items (assumed response in XML format)
    local response=$(curl -s "$plex_on_deck_url")

    # Parse the XML response and extract the full file paths from <Part> tags
    local video_files=()
    while IFS= read -r line; do
        # Dynamically create a regular expression using the video_ext variable
        local file_path=$(echo "$line" | grep -oP "(?<=file=\")[^\"]+\.($video_ext)\"")
        if [[ -n "$file_path" ]]; then
            # Remove the trailing quote that gets included
            file_path=$(echo "$file_path" | sed 's/"$//')
            # Remap the file path using the remap_path function
            local remapped_file_path=$(remap_path "$file_path")
            video_files+=("$remapped_file_path")
        fi
    done < <(echo "$response" | grep -oP '<Part[^>]*file="[^"]*"')

    # Output the array elements, one per line
    printf "%s\n" "${video_files[@]}"
}

# Get and print the remapped On Deck video files
echo "Remapped On Deck video files:"

# Capture the output of get_ondeck into an array called "ondeck_files"
mapfile -t ondeck_files < <(get_ondeck "$PLEX_URL" "$API_KEY")

# Print the remapped file paths
printf "%s\n" "${ondeck_files[@]}"
```

Edited September 6 by nick5429
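To actually preload the On Deck items, the remapped list could be fed through the same trick the main script uses: read the first N bytes of each file into /dev/null so the kernel keeps them in the page cache. A sketch under assumed names (preload_head_size and the example path are made up; in the real script, ondeck_files would be filled via mapfile from get_ondeck):

```shell
#!/bin/bash
# Sketch: preload the first bytes of each "On Deck" file so the kernel
# caches them. Names and values below are assumptions for illustration.
preload_head_size=$((60 * 1024 * 1024))   # first 60 MiB of each file

# In the real script this array would come from:
#   mapfile -t ondeck_files < <(get_ondeck "$PLEX_URL" "$API_KEY")
ondeck_files=("/mnt/user/Movies/Example (2020)/Example.mkv")

for file in "${ondeck_files[@]}"; do
    [ -f "$file" ] || continue                       # skip paths that failed to remap
    head -c "$preload_head_size" "$file" >/dev/null  # reading warms the page cache
done
```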