Unraid tweaks for Media server performance?



Hi Everyone

 

Unraid is great and, like many others, I am using my Unraid server for Plex (and of course other things).

So I would really like to collect all the tweaks and "hacks" done by others to increase the performance of large media servers doing transcoding.

 

First, just to get the "normal" recommendations listed:

  • Appdata on cache drive (Fast as possible - SSD/M2)
  • HW encoding using Unraid Nvidia plugin + GPU
  • Structure of media in each folder/optimize files for transcoding?

 

Specific Unraid tweaks:

  • Moving transcoding to RAM (Update better guide for doing this)
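For reference, one common way to do this (a sketch; the mount size and the Plex settings path are assumptions, not from this thread) is to mount a tmpfs into the Plex container and point the transcoder at it:

```shell
# Extra Parameters for the Plex docker template (example values):
# mounts an 8 GB RAM disk at /transcode inside the container
#   --mount type=tmpfs,destination=/transcode,tmpfs-size=8000000000
# Then set "Transcoder temporary directory" to /transcode in
# Plex > Settings > Transcoder.
```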

 

 

What other things or tips do you have to speed things up?

Speed up the UI? Has anyone tried moving the DB to RAM, and did it help?

 

Looking forward to getting some input from the power users! 👍 and updating this post with new things!

Thanks for a really great forum with so many helpful people

 

(I placed this post here because it should not be about the dockers themselves but the things around them and Unraid. Please move it if you find a better place for it!)

Edited by casperse
Link to comment
  • 2 weeks later...

Found a new useful setting if you have a very large Plex/media library:

Install the Tweaks plugin and increase fs.inotify.max_user_watches


You can read more about the problem here: LINK
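From the command line, checking and raising the limit looks roughly like this (a sketch; 524288 is an example value, and the Tweaks plugin applies the same sysctl persistently via its GUI):

```shell
# Inspect and raise the inotify watch limit (needs root)
sysctl -n fs.inotify.max_user_watches        # show the current limit
sysctl fs.inotify.max_user_watches=524288    # raise it until the next reboot
```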

 

BTW: I would really like to hear from more power users about their tweaks and settings. Please share if you have any tweaks or performance improvements!

Link to comment
  • 6 months later...

I found by accident another tweak:

 

Direct disk access (Bypass Unraid SHFS)

Usually you set your Plex docker paths as follows:

/mnt/user/Sharename

 

For example this path for your Movies

/mnt/user/Movies

 

and this path for your AppData Config Path (which contains the thumbnails, frequently updated database file, etc):

/mnt/user/appdata/Plex-Media-Server

 

But instead, you should use this as your Config Path:

/mnt/cache/appdata/Plex-Media-Server

 

By that you bypass Unraid's overhead (SHFS) and write directly to the cache disk.
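If you want to see the difference yourself, a quick micro-benchmark (a sketch; the function name and the example paths are assumptions) is to time small-file writes through both paths:

```shell
# Time writing 1000 tiny files below the given directory and print the
# elapsed milliseconds. Run it once against a /mnt/user/... path and once
# against the matching /mnt/cache/... path to see the SHFS overhead on
# metadata-heavy I/O.
bench_small_writes() {
    local dir="$1/shfs-bench.$$" start end i
    mkdir -p "$dir" || return 1
    start=$(date +%s%N)
    for i in $(seq 1 1000); do
        echo test > "$dir/f$i"
    done
    end=$(date +%s%N)
    rm -r "$dir"
    echo $(( (end - start) / 1000000 ))
}

# Example (paths are assumptions):
# bench_small_writes /mnt/user/appdata
# bench_small_writes /mnt/cache/appdata
```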

 

Requirements

 

1.) Create a backup of your appdata folder! You use this tweak at your own risk!

 

2.) Before changing a path to direct disk access, you need to stop the container and wait at least one minute, or even better, execute this command to be sure that all data has been written from RAM to the drives:

sync; echo 1 > /proc/sys/vm/drop_caches

If you are changing the path of multiple containers, do this each time after you stop a container and before you change its path!

 

3.) This works only if appdata is already located on your SSD, which is the case only if you used the cache modes "Prefer" or "Only":


 

4.) To be sure that your Plex files are only on your SSD, open "Shares" and press "Compute" for your appdata share. It shows whether your data is located only on the SSD or on both the SSD and an array disk. If it's on a disk too, you must stop the docker engine, execute the mover, and recheck through "Compute" after the mover has finished its work. You cannot change the path to direct SSD access as long as files are scattered, or you will probably lose data!
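A quick way to double-check this from the shell (a sketch; the function name is mine, and adjust the path if your appdata share is named differently):

```shell
# List files below any /mnt/disk*/appdata - these are copies still on the
# array that the mover has to pick up. Empty output means all-clear.
# $1 optionally overrides the mount root (defaults to /mnt).
check_on_array() {
    find "${1:-/mnt}"/disk[0-9]*/appdata -type f 2>/dev/null | head -n 10
}

# On Unraid: run check_on_array and only switch to the direct access
# path when it prints nothing.
```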

 

5.) And you should set a minimum free space for your SSD cache in your Global Share Settings:


 

This setting only applies to shared access paths and is ignored by the new direct access path. This means it reserves up to 100GB for your Plex container, no matter how many other processes are writing files to your SSD.

 

What's the benefit?

After setting the appdata config path to direct access, I had a tremendous speed gain while loading covers, using the search function, updating metadata etc. And it's even higher if you have a low-power CPU, as SHFS produces a high load on single cores.

 

Shouldn't I update all paths to direct access?

Maybe you are now thinking about changing your movies path as well to allow direct disk access. I don't recommend that, because you would need to add multiple paths for your movies, TV shows, etc., as they are usually spread across multiple disks like:

  • /mnt/disk1/Movies
  • /mnt/disk2/Movies
  • /mnt/disk3/Movies
  • ...

And if you move movies from one disk to another, add new disks, etc., this will probably cause errors inside Plex. Furthermore, it complicates moving to a different server that maybe uses a different disk order or a smaller number of bigger disks. In short: leave the other shared access paths as they are.

 

Does this tweak work for other Containers?

Yes. It even works for VM and docker.img paths. But pay attention to the requirements (create a backup, flush the Linux write cache, check your file locations, etc.) before applying the direct access path. And think about whether it could be more useful to stay with the shared access path. The general rule is: if a share uses multiple disks, do not change its path to direct access.

Edited by mgutt
Link to comment
  • 2 weeks later...

I was really interested in running the Plex DB from a ramdisk, so I wrote a script to do it. I used /dev/shm instead of /tmp (/tmp appears to be limited to 1MB).
It was really fast, I would say almost 5x faster at loading content, but during heavy operations Plex would complain about DB corruption and transactions not existing. I do not recommend this approach, but if anyone wants to try out the script, it should be easy to get up and running:
https://github.com/vaparr/plex-ramdisk
You have to map /dev/shm to /dev/shm in your docker configuration.

Only use this on a test instance of Plex, as it will likely work fine for a while and then corrupt your database.
 

Link to comment

Maybe a race condition, meaning your script is somehow executed twice and your docker start/stop gets mixed up while rsync is running? Add a lock mechanism to be sure that your script is not executed in parallel. I use this for my CA User Scripts:

# make the script race condition safe: ${0///} is the script's path with
# all slashes removed, used as a unique lock directory name under /tmp
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then
    exit 1
fi
trap 'rmdir "/tmp/${0///}"' EXIT

But finally you need to think about whether it's a good idea to put everything into RAM. Isn't it sufficient to put only the database or the covers into RAM? Depending on the collection size and settings (like video preview thumbnails), this can result in a folder size bigger than the free RAM.

 

I would add only the database to a ramdisk as follows (which doesn't require changing the path to /dev/shm):

 

Initial setup:

- check if Plex is in idle

- stop Plex container

- mv "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases" "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases_backup"

- mkdir "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"

- mount -t tmpfs -o size=50% tmpfs "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"

- cp -av "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases_backup/." "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"

- start Plex container
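The steps above, put together as one script (a sketch; the container name plex is an assumption, and the appdata path is the example used above):

```shell
APP="/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support"

docker stop plex                                    # container name is an assumption
mv "$APP/Databases" "$APP/Databases_backup"         # keep the on-disk copy
mkdir "$APP/Databases"
mount -t tmpfs -o size=50% tmpfs "$APP/Databases"   # RAM disk, up to 50% of total RAM
cp -av "$APP/Databases_backup/." "$APP/Databases"   # seed the RAM disk
docker start plex
```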

 

Now, only the ".../Databases" subdir is a ramdisk (which is allowed to use up to 50% of the total RAM).

 

Backup:

- as you're already doing it (idle check, stop container, rsync, etc), target would be ".../Databases_backup" as used by the initial setup.

 

On Unraid reboot:

- check if ".../Databases_backup" exists

- if no, start initial setup

- if yes, stop Plex container (Plex does not work as ".../Databases" is empty)

- delete content of ".../Databases" (to be sure that Plex didn't create something)

- create ramdisk with tmpfs as through Initial Setup

- cp -av from ".../Databases_backup"

- start Plex container

 

But it's still dangerous, as the user could change the path, or Unraid could crash while rsync is running, etc.

 

Maybe vmtouch is the better idea. It allows preloading complete folders into RAM and is able to lock them there. This means you can preload all the covers while they still exist on the disk. But I'm not sure about file modifications, like with the database. This would need further investigation. I asked for it through the Nerd Pack.
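For reference, vmtouch usage would look roughly like this (a sketch, assuming the binary is available, e.g. via the Nerd Pack; the path is an example):

```shell
# Preload Plex's metadata/cover folder into the page cache and report stats
vmtouch -vt "/mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Media"

# Daemonize and lock the folder into RAM so it cannot be evicted
vmtouch -dl "/mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Media"
```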

Edited by mgutt
Link to comment

The script does most of that: it will check if Plex is idle, stop the container, copy the databases to /dev/shm/folder, validate that the databases are not corrupted (and notify you if they are), use fusefs to mount the ramdisk over the Plex database location, and start Plex.
It also has protections in place to not mount twice, using the mountpoint command.


The script only puts the database in the ramdisk (about 860MB in my case).

After it validates the database, it will copy it to a Last_Known_Good folder, so you should be able to recover from that in case it detects corruption.

Even after all that, the database got randomly corrupted during a massive metadata import. It could have just been something I did, which is why I provided the script, but I will not be using it anymore. The difference was about 5 seconds vs 1 second to start playing content, and I can live with that extra delay knowing my database is less likely to be corrupted.

 

Edited by infi704
Link to comment

Not 100% sure, but RAM usage on Unraid runs at about 11GB of 32GB used, and ramdisk usage never exceeded 5GB. But I wasn't monitoring it that closely, so it is possible. Are you running your Plex DBs on a ramdisk? I would like to hear a success story before I try again. The errors I was getting (sorry, I didn't save the logs; bringing Plex back was the top priority), when researched, led to forum posts about DB corruption on NFS when strict locking was not implemented, so I was under the assumption that tmpfs doesn't have the locking capabilities other real filesystems have.

Link to comment

No, as my Plex config folder is located on the NVMe and I use the direct access tweak for the config folder AND the docker.img, everything feels really fast (/mnt/cache instead of /mnt/user):


 

The next step would be locking the covers in RAM through vmtouch, but as the server has plenty of free RAM, they should already be cached, thanks to Linux.

Edited by mgutt
Link to comment

I use the Mover Scheduler script to keep my newest content on my SSD for 30 days, so I'm not spinning up my HDDs except for content that is older than 30 days. So if anything newer "needs" to be transcoded, it's accessed nearly instantly coming from the SSD and transcoded into RAM. My kids seem to love it, because they often watch the same newest stuff over and over.

 

I also often copy my newest movies to a portable drive to take on road trips, but I want the files to have 2-channel audio. I added a Sync on my account to automatically optimize the 10 newest movies to my profile using the filters within the Plex app on my phone, so when I add something new it fires off a transcode and stores it. Of course it'll delete files if I don't copy them fast enough, since I have it set to 10, but I'm normally pretty good about catching it.

 

Link to comment

 

32 minutes ago, kizer said:

keep my newest content on my SSD

Also a nice idea. Do you move it manually to a cache-only share, or how do you do this?

 

33 minutes ago, kizer said:

but I'm normally pretty good about catching it.

Why not use rsync? By default it does not delete files from the target, so it will only add the new movies. You would only need to exclude files that are being transcoded at the moment (modified within the last x minutes).
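A sketch of that idea (the function name, the 10-minute threshold, and the example paths are assumptions):

```shell
# Copy only files untouched for at least 10 minutes (i.e. finished
# transcodes) from $1 to $2. rsync never deletes on the target by
# default, so earlier copies stay put.
sync_finished() {
    ( cd "$1" && find . -type f -mmin +10 -print0 |
        rsync -av --files-from=- --from0 . "$2" )
}

# Example (paths are assumptions):
# sync_finished "/mnt/user/Movies/Optimized Versions" "/mnt/disks/portable/Movies"
```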

Link to comment

I just have my Movie and TV shares set to Cache "Yes" and use the Mover Tuning plugin below, which I have set up to check at 20% and move files that are older than 30 days. Of course you can use whatever number of days or percentage you like.

 

Why 20%? Well, that means unless the share is at least 20% full, it won't even look at any files that have reached the specified age on the drive. I had it set to 50% and felt that was keeping files too long on the drive without moving them.

 

30 days might be too long, and that can easily be adjusted. I had it set to 15 and found it was just a bit too short. I'm guessing 21 days might be the magic number, but everybody is different.

 

I also have the mover set to trigger daily at 3AM so it doesn't interrupt anything. I might change it to around 6AM, since that is when I have Plex maintenance set up and the drives are spinning.

It looks for at least 20% usage, which I'm normally above (47%) because of Docker content and a smaller SSD.

It'll look for files that are 30 days old and move them.

 

Keep in mind the below script only works on the "Cache Pool". It will not work on other pools people create. It might work in the future, but currently it does not.

 

 

I use the feature of optimizing only 10 movies, which are transcoded onto an SSD and later moved to a share by Unraid, simply so I don't add a bunch and forget to clean house. Keep in mind those files are only for a portable drive I dock using Unassigned Devices. I don't add a lot of movies, but when I do, it's pretty easy to keep track of them.

 

TV shows I rip are a different story. I don't optimize them, so I don't worry about them other than keeping them on the SSD for a period of time.

 

I totally get that I could just use rsync or find with a time in days or whatever, but some basic plugins that solve some basic needs work just great for me.

Link to comment
16 hours ago, mgutt said:

No, as my Plex config folder is located on the NVMe and I use the direct access tweak for the config folder AND the docker.img, everything feels really fast (/mnt/cache instead of /mnt/user):


 

Next step would be locking the covers in the RAM through vmtouch, but as the server has really much free RAM, they should be already cached, thanks to Linux.

Hi thanks for the Tips!

 

I have always had both Docker and appdata on the same NVMe, and because of the size I had to upgrade to a 2TB one 😞

I just can't see any difference in the loading time of the posters etc. between the above and the default.


 

Could this be because I have always had everything on the NVMe?

Is there any downside to keeping the changes using the cache path?

 

 

 

 

 

 

 

Link to comment
5 hours ago, frodr said:

Is it possible to make a script that adds the first x MB of each file in a folder to the cache?

 

Yes it is possible, if you mean Memory when you say "cache".

 

Use the User Scripts plugin for managing and controlling the scripts you write. I think it would be a combination of find and dd. 

Link to comment
4 minutes ago, BRiT said:

 

Yes it is possible, if you mean Memory when you say "cache".

 

Use the User Scripts plugin for managing and controlling the scripts you write. I think it would be a combination of find and dd. 

I mean the cache/SSD. When I push play today, it takes some extra seconds before anything happens on the TV. My understanding is that loading from an HDD takes a bit of time. If the start of the media file were loaded from the cache/SSD, the time from pushing play to the film actually starting could be reduced. Is this a correct assumption?

Link to comment
6 hours ago, frodr said:

I mean the cache/SSD. When I push play today, it takes some extra seconds before anything happens on the TV. My understanding is that loading from an HDD takes a bit of time. If the start of the media file were loaded from the cache/SSD, the time from pushing play to the film actually starting could be reduced. Is this a correct assumption?

Cool idea, but it would only work with the RAM cache, and it needs testing to find the best size, as Plex will fill the client's buffer and we need to bridge up to 14 seconds in which the HDD spins up. @BRiT It should work. Test:

# create random file
dd if=/dev/urandom iflag=fullblock of=/mnt/disk8/Marc/1GB.bin bs=100M count=10

# it needs time to write the file from the write cache to the HDD
sleep 90

# clean the read cache
sync; echo 1 > /proc/sys/vm/drop_caches

# wait for cache cleaning
sleep 10

# benchmark the time to read the first 500 MB of the file; this also fills the read cache
echo "$(time ( head -c 500000000 /mnt/disk8/Marc/1GB.bin ) 2>&1 1>/dev/null )"

# wait for processing all I/O 
sleep 30

# additional test, to check if the disk spins up
mdcmd spindown 8

# wait for full spindown (view at the dashboard!)
sleep 10

# benchmark the read time again
echo "$(time ( head -c 500000000 /mnt/disk8/Marc/1GB.bin ) 2>&1 1>/dev/null )"

First benchmark:

real    0m1.964s
user    0m0.077s
sys     0m0.200s

Second benchmark:

real    0m0.139s
user    0m0.062s
sys     0m0.077s

Conclusion:

It is possible to cache the first x MB of a file in RAM. But after the second benchmark the disk is still sleeping. That means if the client starts a movie and the buffer is filled from the cached movie leader, the HDD will spin up while the client is already draining the buffer. As far as I know, the Plex client's buffer has a total size of 75 MB. Let's say our movies have a bitrate of 50 Mbit/s. We multiply this by 15 seconds (maximum HDD spin-up time) and get about 100 MByte. This means a sleeping HDD could be a problem if it spins up very slowly. But if it's active and/or fast, this trick should work. I will test that with movies on one drive and compare it with the latency of movies on an uncached drive.

 

Edited by mgutt
Link to comment

Only as a reminder for myself:

 

Monitor user

Check if it's possible to detect that the user has opened a movie page in the Plex client, in order to spin up the HDD and preload the movie file before the user starts it.

 

Cache movie leader on SSD

Maybe we could even use the SSD instead of RAM as the cache. This plugin should help, together with a different vm.swappiness value. The target would be to add the first 100MB of every movie to the RAM page cache and hope Linux moves it to swap on the SSD.

Edited by mgutt
Link to comment

Test results:

 

Cache the first 200 MB of all movies in folder "09":

ls /mnt/user/Movie/09/*/*.mkv | xargs head -c 200000000 > /dev/null

Benchmark:

echo "$(time ( head -c 200000000 "/mnt/disk4/Movie/09/12 Monkeys (1995)/12 Monkeys (1995) FSK16 DE EN IMDB8.0.mkv" ) 2>&1 1>/dev/null )"
real    0m0.063s
user    0m0.015s
sys     0m0.047s

While executing the benchmark disk4 still sleeps.

 

Starting the movie through Plex... disk spins up... Does Plex use O_DIRECT, which bypasses any caching? Let's check that. We clean the cache:

sync; echo 1 > /proc/sys/vm/drop_caches

RAM stats before starting a movie:

free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1022       61371         761        1964       61974
Swap:             0           0           0

Started a movie in Plex. While the movie is playing the cache usage rises:

free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1050       60141         761        3165       61945
Swap:             0           0           0
free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1050       59400         761        3907       61944
Swap:             0           0           0
free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1051       57769         761        5537       61942
Swap:             0           0           0

After stopping it:

free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1043       59559         761        3755       61951
Swap:             0           0           0

Hmmm... looks like it uses the cache. Spun down the disk. Played the movie again from the beginning. Aha. The disk is still sleeping. So Plex should be cachable. But why didn't it work? Ahh... I forgot something, I think: the external subtitle file. Ok, let's cache all files on the disk:

ls /mnt/user/Movie/09/*/*.* | xargs head -c 200000000 > /dev/null

Let's stop the disk and start a movie again. Nope. Still spinning up first. Ok, clean cache and cache a full movie:

sync; echo 1 > /proc/sys/vm/drop_caches
cat "/mnt/disk4/Movie/09/127 Hours (2010)/127 Hours (2010) FSK12 DE EN TR IMDB7.6.mkv" > /dev/null
cat "/mnt/disk4/Movie/09/127 Hours (2010)/127 Hours (2010) FSK12 DE EN TR IMDB7.6.ger.forced.srt" > /dev/null

Spin down disk, start movie in Plex and... ha... it starts directly and the disk stays sleeping. Ok, maybe we need to cache more of the movie leader? 1GB... 2GB... 3GB... 4GB... 5GB... nothing works. Does Plex read something from the end of the file? Let's read 5GB of the beginning and 5GB of the end of the file... aha, the movie starts directly. Puuhh... we are on the right track. Now 100MB from the beginning and 100MB from the end:

head -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.mkv" > /dev/null
head -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.ger.forced.srt" > /dev/null
tail -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.mkv" > /dev/null

Ding Ding Ding Ding! It works. And the disk spins up while the movie plays and... no buffering. Nice! Ok. Now lets find out how much is really needed from the end of the file. 100/10... works. 100/1... works. 100/0.1... does not work. Ok, we need 1MB of the end of the file.

 

Let's check with a 4K movie. 100/1... works. Yeah, baby!

 

Ok, let's check out how much we need to read from the beginning until the buffer becomes empty. We still use the 4K movie. 50/1... works. 30/1... works... 10/1... complete fail ^^ 20/1... buffers. 25/1... works. Let's test wifi... 25/1... buffers. Wait.. my phone has only 65 mbit/s wifi and the movie has 107 mbit/s. Can't work ^^ Better position... now 866 Mbit/s connection. 25/1... buffers, 30/1... works. 25/1... buffers. This time I waited 2 minutes after each disk spun down. 30/1... buffers. Right.. I need to wait more until the disk completely stopped. 40/1... buffers. 50/1... buffers. 60/1... works.

 

Ok, now let's test 4K to 1080P Transcoding. 60/1.. works.

 

Good. So this would be the way to put our movies in the cache, but it works only if there is enough RAM for all movies:

find /mnt/user/Movies/ -iname '*.mkv' -print0 | xargs -0 head -c 60000000 > /dev/null
find /mnt/user/Movies/ -iname '*.mkv' -print0 | xargs -0 tail -c 1000000 > /dev/null
find /mnt/user/Movies/ -iname '*.srt' -print0 | xargs -0 cat > /dev/null

At the moment I'm stuck creating a command which sorts and uses head and tail at the same time, so it fills the cache with the most recent movies. head alone works:

find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@ %p\n" | sort -n | sed -r 's/^[0-9]+.[0-9]+ //' | tr '\n' '\0' | xargs -0 head -c 60000000 > /dev/null

But adding tail does not work. Piping isn't my favorite ^^

find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@ %p\n" | sort -n | sed -r 's/^[0-9]+.[0-9]+ //' | tr '\n' '\0' | tee >(xargs -0 head -c 60000000 > /dev/null) >(xargs -0 tail -c 1000000 > /dev/null)
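One way around the piping problem is a plain while-read loop that runs head and tail per file (a sketch; the function name is mine, and 60MB/1MB are the values found above):

```shell
# Warm the page cache with the first 60 MB and last 1 MB of every .mkv,
# oldest first, so the most recent movies end up freshest in the cache.
preload_leaders() {
    find "$1" -iname '*.mkv' -printf "%T@ %p\n" | sort -n | cut -d' ' -f2- |
    while IFS= read -r file; do
        head -c 60000000 "$file" > /dev/null
        tail -c 1000000  "$file" > /dev/null
    done
}

if [ -d /mnt/user/Movies ]; then preload_leaders /mnt/user/Movies; fi
```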

 

Edited by mgutt
Link to comment
20 hours ago, casperse said:

Could this be because I have always had everything on the NVMe?

Is there any downside to keeping the changes using the cache path?

Changing the path of the docker.img influences all reads and writes that happen inside this image file. This means all Plex processes like the crawler, transcoder, etc., but not the config files like the covers, database, etc. As these processes use relatively small files, the impact isn't great, but even if you save 1% CPU load, why not. The greatest impact comes from changing the appdata path to the cache. This can't be done through the settings, as the Plex container is already installed; you need to change the appdata path through Docker > Plex > Edit. After this is done, all cover and database reads/writes will be faster. As I described in the tweak, the impact is higher if you have a low-power CPU (a high-power CPU can process the Unraid SHFS overhead fast enough), and it will be higher if your covers etc. aren't already in the Linux read cache. But leave it, it will save you a lot of load, especially when the crawler becomes active. Maybe you'd like to measure the time to process a full media library scan with and without docker+appdata direct disk access.

 

The downside is described in the tweak above: Plex is not able to read/write files outside of the cache drive. This will be a problem if your cache drive becomes full. So with this tweak, it's like setting the "appdata" share to "cache only", even if you change the setting to "no"; Plex does not respect that anymore. So if you change the caching to "no" and start the mover, Plex can't start, as it does not find its files. But if you move the files back to the SSD, or change the path back to "/mnt/user", it will work like before. So this tweak can easily be reverted without problems.

Link to comment
On 10/1/2020 at 12:54 AM, mgutt said:

I found by accident another tweak:

 

Direct disk access (Bypass Unraid SHFS)

Usually you set your Plex docker paths as follows:

/mnt/user/Sharename

 

For example this path for your Movies

/mnt/user/Movies

 

and this path for your AppData Config Path (which contains the thumbnails, frequently updated database file, etc):

/mnt/user/appdata/Plex-Media-Server

 

But instead, you should use this path as your Config Path:

/mnt/cache/appdata/Plex-Media-Server

 

By that you bypass unraid's overhead (SHFS) and write directly to the cache disk.

 

UPDATE:

This works! My high memory usage was related to the encoding to RAM, and thanks to @mgutt this is not a problem anymore!

I have actually never seen covers scroll as fast as they do now, both in Plex and Emby.

(I will make a post about this soon, just to share my experience running both Emby and Plex in this thread.)

Again, thank you so much @mgutt, your tweak really helped me a lot. And also the troubleshooting here:

 

 

 

 

Edited by casperse
Link to comment
