Unraid tweaks for Media server performance?



3 minutes ago, mgutt said:

@casperse

This has nothing to do with the tweak as it does not even touch your RAM:

https://forums.unraid.net/topic/97726-memory-usage-very-high/?tab=comments#comment-901835

 

 

I can only say that after changing it back from cache to user, my memory usage dropped. I have rebooted my servers more than 10 times and this is the only change I made.

Could it be because my Plex appdata is nearly 800 GB?

Link to comment
19 minutes ago, casperse said:

Could it be because my Plex appdata is nearly 800 GB?

No. As I said, changing a path has absolutely no influence on your RAM usage (as long as the path does not target your RAM). But let's discuss this in your other thread; I provided a command there to check the detailed RAM usage.

Edited by mgutt
Link to comment

The transcoding dir is only a temporary dir. The different resolutions of a movie generated through "Optimize" are located under the movie path itself. Depending on your selected storage location, they end up under "/Movies/Plex Versions" or "/Movies/Moviename (2020)/Plex Versions":

 

[Screenshot: Plex "Optimized versions storage location" setting]

 

Example:

ls -l "/mnt/user/Movie/CD/Casino (1995)/Plex Versions/"*/*
-rw-rw-rw- 1 nobody users 24512056092 Jan  7  2020 /mnt/user/Movie/CD/Casino\ (1995)/Plex\ Versions/Original\ Quality/Casino\ (1995).mp4

 

The content of the "/Transcode" folder looks completely different and contains only small temporary video files as long as a transcode is running:

ls -l /tmp/plextranscode/Transcode/Sessions/plex-transcode-id123/
total 435752
-rw-r--r-- 1 nobody users    2499 Oct 14 15:21 chunk-stream0-00001.m4s
-rw-r--r-- 1 nobody users 1034440 Oct 14 15:21 chunk-stream0-00002.m4s
-rw-r--r-- 1 nobody users 2781109 Oct 14 15:21 chunk-stream0-00003.m4s
-rw-r--r-- 1 nobody users 2486741 Oct 14 15:21 chunk-stream0-00004.m4s
....
-rw-r--r-- 1 nobody users   24730 Oct 14 15:22 chunk-stream1-00244.m4s
-rw-r--r-- 1 nobody users   27713 Oct 14 15:22 chunk-stream1-00245.m4s
-rw-r--r-- 1 nobody users   27288 Oct 14 15:22 chunk-stream1-00246.m4s
-rw-r--r-- 1 nobody users   28020 Oct 14 15:22 chunk-stream1-00247.m4s
-rw-r--r-- 1 nobody users   27956 Oct 14 15:22 chunk-stream1-00248.m4s
-rw-r--r-- 1 nobody users   30500 Oct 14 15:22 chunk-stream1-00249.m4s
-rw-r--r-- 1 nobody users   31674 Oct 14 15:22 chunk-stream1-00250.m4s
-rw-r--r-- 1 nobody users       0 Oct 14 15:22 chunk-stream1-00251.m4s.tmp
-rw-r--r-- 1 nobody users     806 Oct 14 15:21 init-stream0.m4s
-rw-r--r-- 1 nobody users     741 Oct 14 15:21 init-stream1.m4s
-rw-r--r-- 1 nobody users      65 Oct 14 15:21 sub-chunk-00000
-rw-r--r-- 1 nobody users      76 Oct 14 15:21 sub-chunk-00001
-rw-r--r-- 1 nobody users     547 Oct 14 15:21 sub-header
-rw-r--r-- 1 nobody users    1826 Oct 14 15:21 temp-0.srt

But they are deleted immediately when you stop the movie:

ls -l /tmp/plextranscode/Transcode/Sessions/plex-transcode-id123/
/bin/ls: cannot access '/tmp/plextranscode/Transcode/Sessions/plex-transcode-id123/': No such file or directory

 

Edited by mgutt
Link to comment
On 10/10/2020 at 6:35 PM, mgutt said:

Test results:

 

Cache the first 200 MB of all movies in folder "09":


ls /mnt/user/Movie/09/*/*.mkv | xargs head -c 200000000 > /dev/null

Benchmark:


echo "$(time ( head -c 200000000 "/mnt/disk4/Movie/09/12 Monkeys (1995)/12 Monkeys (1995) FSK16 DE EN IMDB8.0.mkv" ) 2>&1 1>/dev/null )"
real    0m0.063s
user    0m0.015s
sys     0m0.047s

While executing the benchmark, disk4 stays asleep.

 

Starting the movie through Plex... the disk spins up... Does Plex use O_DIRECT, which bypasses any caching? Let's check that. We clear the cache:


sync; echo 1 > /proc/sys/vm/drop_caches

RAM stats before starting a movie:


free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1022       61371         761        1964       61974
Swap:             0           0           0

Started a movie in Plex. While the movie is playing the cache usage rises:


free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1050       60141         761        3165       61945
Swap:             0           0           0
free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1050       59400         761        3907       61944
Swap:             0           0           0
free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1051       57769         761        5537       61942
Swap:             0           0           0

After stopping it:


free -m
              total        used        free      shared  buff/cache   available
Mem:          64358        1043       59559         761        3755       61951
Swap:             0           0           0

Hmmm... looks like it uses the cache. Spun the disk down. Played the movie again from the beginning. Aha. The disk is still sleeping. So Plex should be cacheable. But why didn't it work before? Ahh... I think I forgot something: the external subtitle file. Ok, let's cache all files on the disk:


ls /mnt/user/Movie/09/*/*.* | xargs head -c 200000000 > /dev/null

Let's stop the disk and start a movie again. Nope. It still spins up first. Ok, clear the cache and cache a full movie:


sync; echo 1 > /proc/sys/vm/drop_caches
cat "/mnt/disk4/Movie/09/127 Hours (2010)/127 Hours (2010) FSK12 DE EN TR IMDB7.6.mkv" > /dev/null
cat "/mnt/disk4/Movie/09/127 Hours (2010)/127 Hours (2010) FSK12 DE EN TR IMDB7.6.ger.forced.srt" > /dev/null

Spin down the disk, start the movie in Plex and... ha... it starts directly and the disk stays sleeping. Ok, maybe we need to cache more of the beginning of the movie? 1 GB... 2 GB... 3 GB... 4 GB... 5 GB... nothing works. Does Plex read something from the end of the file? Let's read 5 GB from the beginning and 5 GB from the end of the file... aha, the movie starts directly. Puuhh... we are on the right track. Now 100 MB from the beginning and 100 MB from the end:


head -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.mkv" > /dev/null
head -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.ger.forced.srt" > /dev/null
tail -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.mkv" > /dev/null

Ding Ding Ding Ding! It works. The disk spins up while the movie plays and... no buffering. Nice! Ok, now let's find out how much is really needed from the end of the file. 100/10... works. 100/1... works. 100/0.1... does not work. Ok, we need 1 MB from the end of the file.

 

Let's check with a 4K movie. 100/1... works. Yeah, baby!

 

Ok, let's find out how little we can read from the beginning before the player's buffer runs empty. We still use the 4K movie. 50/1... works. 30/1... works... 10/1... complete fail ^^ 20/1... buffers. 25/1... works. Let's test WiFi... 25/1... buffers. Wait... my phone only has a 65 Mbit/s WiFi link and the movie has 107 Mbit/s. That can't work ^^ Better position... now an 866 Mbit/s connection. 25/1... buffers, 30/1... works. 25/1... buffers. This time I waited 2 minutes after each disk spin-down. 30/1... buffers. Right... I need to wait longer until the disk has completely stopped. 40/1... buffers. 50/1... buffers. 60/1... works.

 

Ok, now let's test 4K to 1080p transcoding. 60/1... works.

 

Good. So this would be the way to put our movies into the cache, but it only works if there is enough RAM for all movies:


find /mnt/user/Movies/ -iname '*.mkv' -print0 | xargs -0 head -c 60000000 > /dev/null
find /mnt/user/Movies/ -iname '*.mkv' -print0 | xargs -0 tail -c 1000000 > /dev/null
find /mnt/user/Movies/ -iname '*.srt' -print0 | xargs -0 cat > /dev/null

At the moment I'm stuck creating a command that sorts by date and uses head and tail at the same time, so that the cache gets filled with the most recent movies. head alone works:


find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@ %p\n" | sort -n | sed -r 's/^[0-9]+.[0-9]+ //' | tr '\n' '\0' | xargs -0 head -c 60000000 > /dev/null

But adding tail does not work. Piping isn't my favorite ^^


find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@ %p\n" | sort -n | sed -r 's/^[0-9]+.[0-9]+ //' | tr '\n' '\0' | tee >(xargs -0 head -c 60000000 > /dev/null) >(xargs -0 tail -c 1000000 > /dev/null)
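
A possible workaround (just a sketch, not part of the original tests) is to drop the xargs piping and loop over the sorted list instead, reading the first 60 MB and the last 1 MB of each movie:

# Sketch: pre-cache the beginning and end of each movie, oldest first, so that the
# most recently added movies are read last and evicted last from the page cache.
find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@\t%p\n" | sort -n | cut -f2- |
while IFS= read -r movie; do
    head -c 60000000 "$movie" > /dev/null   # first 60 MB (container header + movie start)
    tail -c 1000000 "$movie" > /dev/null    # last 1 MB (index Plex reads on start)
done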

 

I like your experimenting here. How about the last 10 Movies?

Link to comment

It said 110 in the script output from User.Plugin. I'm only running with 16 GB of RAM. Depending on how this goes, I wouldn't have an issue with upgrading my RAM a bit.

 

I had to run to work while it was running, but I'll give it a try this evening. I guess it will get beat up some by my kids while I'm at work, so it'll be interesting to see what remains in cache/buffer while I'm gone.

Link to comment
On 9/30/2020 at 6:54 PM, mgutt said:

I found by accident another tweak:

 

Direct disk access (Bypass Unraid SHFS)

Usually you set your Plex docker paths as follows:

/mnt/user/Sharename

 

For example this path for your Movies

/mnt/user/Movies

 

and this path for your AppData Config Path (which contains the thumbnails, frequently updated database file, etc):

/mnt/user/appdata/Plex-Media-Server

 

But instead, you should use this path as your Config Path:

/mnt/cache/appdata/Plex-Media-Server

 

By that you bypass Unraid's overhead (SHFS) and write directly to the cache disk.

 

Requirements

Of course this works only if you are using an SSD cache and set "Prefer" for your appdata share:

[Screenshot: appdata share with the cache setting "Prefer"]

 

And you must set a minimum free space in your Global Share Settings for your SSD cache:

[Screenshot: Global Share Settings with a minimum free space set for the SSD cache]

 

The reason for this is that writing directly to the cache disk bypasses the minimum free space setting and you lose the ability to overflow additional data to the HDD array. This has a positive side effect: Plex will never write metadata or database updates to your slow HDD array, no matter how full your SSD cache is.

 

And finally, you must be sure that all your Plex data is located on the SSD and not scattered across the SSD and the HDD array (check with unBALANCE if you are unsure).
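
If you prefer the shell over unBALANCE, a quick check like the following (just a sketch; adjust the folder name to your container's appdata folder) lists any appdata files that are still sitting on the array:

# Sketch: list appdata files that live on array disks instead of the SSD cache.
# No output means everything is already on the cache.
find /mnt/disk*/appdata/Plex-Media-Server -type f 2>/dev/null | head -n 20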

 

Why 100GB?

I'm using this value because if my 1 TB SSD cache is filled up to 899 GB and I then start three Blu-ray rips at the same time (= three parallel uploads), these files are written completely to the SSD cache, and as one rip can be ~30 GB in size, the SSD cache ends up filled to ~989 GB although we set 100 GB minimum free space. Note that this is not a bug: the server simply cannot know in advance how big a file will become. As you can see, I chose this value carefully so that there is always >10 GB of space left for Plex. Your value could be much smaller if you don't upload huge files in parallel and/or use a very short mover interval so the SSD never fills up.
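
In numbers (treating the 1 TB cache as 1000 GB for simplicity):

  899 GB already used
+  90 GB (three parallel rips x ~30 GB, accepted despite the 100 GB floor)
= 989 GB used, i.e. roughly 11 GB still free for Plex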

 

What's the benefit?

After setting the appdata config path to direct cache disk access, I got a tremendous speed gain while loading covers, using the search function, updating metadata, etc. The gain is even bigger if you have a low-power CPU, as SHFS produces a high load on single cores.
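
If you want to measure the difference yourself, a rough before/after comparison could look like this (just a sketch reusing commands from earlier in this thread; the appdata path is the example from above):

# Rough sketch: list the same appdata tree through SHFS and then directly.
# Drop the page cache before each run so neither read is served from RAM.
sync; echo 1 > /proc/sys/vm/drop_caches
time ls -R /mnt/user/appdata/Plex-Media-Server > /dev/null    # through SHFS (user share)
sync; echo 1 > /proc/sys/vm/drop_caches
time ls -R /mnt/cache/appdata/Plex-Media-Server > /dev/null   # direct cache disk access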

 

Shouldn't I update all paths to direct disk access?

Maybe you are now thinking about changing your movies path as well to allow direct disk access. I don't recommend that, because you would need to add multiple paths for your movies, TV shows, etc., as they are usually located on different disks:

  • /mnt/disk1/Movies
  • /mnt/disk2/Movies
  • /mnt/disk3/Movies
  • ...

And if you move movies from one disk to another, add new disks, etc., this could cause errors inside Plex. Besides, the benefit is not very high, because Unraid's overhead matters much more for writes than for reads. Furthermore, it would make moving the Docker container to a completely new server, with maybe fewer but bigger disks, much more complicated. In short: leave the other paths as they are.

 

I just tried this with Emby and when the container restarted it was like a fresh install with no data or settings. I figured no big deal and set it back to /mnt/user/appdata, thinking all would return to normal, but no dice. Emby acts like it was just installed. I am reloading a backup, but why did this happen?

Link to comment
20 minutes ago, RockDawg said:

I just tried this with Emby and when the container restarted it was like a fresh install with no data or settings. I figured no big deal and set it back to /mnt/user/appdata, thinking all would return to normal, but no dice. Emby acts like it was just installed. I am reloading a backup, but why did this happen?

This can only happen if some files were spread across the SSD cache and the HDD array, so after changing the path to SSD direct access, Emby was not able to find all files (as some are located on an HDD). That's the reason why this is a requirement:

22 minutes ago, RockDawg said:

And finally, you must be sure that all your Plex data is located on the SSD and not scattered across the SSD and the HDD array (check with unBALANCE if you are unsure).

After you switched to the direct access path, Emby recreated the missing files just as it does on a fresh install, and after you set the path back to shared access, Unraid prefers the files that are located on the SSD, so it still looks like a fresh install. Solution: move all files from the HDD to the SSD (overwriting the existing files) and you are back to the old state. After that, direct access will work as well.
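
A sketch of that fix from the command line (stop the container first; "Plex-Media-Server" is only an example, use your container's appdata folder):

# Sketch: copy everything still on the array over the cache copies and remove the
# array copies afterwards, so the share only sees the SSD files again.
# (Empty folders may remain on the array and can be deleted manually.)
for d in /mnt/disk*/appdata/Plex-Media-Server; do
    [ -d "$d" ] || continue
    rsync -a --remove-source-files "$d"/ /mnt/cache/appdata/Plex-Media-Server/
done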

Link to comment
3 minutes ago, RockDawg said:

It's always been set to cache

If your cache ever becomes full, all new files are written to the HDD array if you've set the appdata share's caching to "Prefer". After that they stay there "forever", as the mover is not allowed to move files that are in use.

Edited by mgutt
Link to comment
5 minutes ago, trurl said:

cache-prefer can overflow to the array.

 

You can see how much of each disk each user share is using by going to User Shares and clicking Compute... for the share.

I just did that and it's only showing on cache.  Thanks for that tip.  I never knew you could see that info there.  I just assumed it refreshed the total.

Edited by RockDawg
Link to comment
1 minute ago, RockDawg said:

So setting it to only will prevent that, right? 

Setting it to Only will prevent it overflowing to the array. Instead you would get an error when the cache fills up.

 

But setting it to only won't do anything to get files that are already on the array moved to cache.

Link to comment
1 hour ago, RockDawg said:

So setting it to only will prevent that, right? 

Yes and no. If you set it to Only and your cache becomes full again, Emby will crash (because it runs out of storage). That's the reason why I set a minimum free space on my SSD before changing the path to direct access, as this minimum free space only applies to shared access. This way I have up to 100 GB of storage reserved exclusively for Plex, no matter how much data is written to the SSD by other processes.

1 hour ago, RockDawg said:

I just did that and it's only showing on cache

This is strange. One moment, I will test whether this feature shows files that exist twice... Hmm, no: it shows disk8 and cache if the same file is located on both:

[Screenshots: the share's Compute... result listing the same file on both disk8 and the cache]

 

So what happened with your Emby installation? 🤔

 

Emby itself usually has no clue about the local path. But maybe the Docker container itself resets after changing the appdata path? Hopefully not, but otherwise I'm out of ideas.

 

Which emby container are you using? I want to test this scenario.

 

EDIT: I have one more idea. Please execute this command:

sysctl -a | grep dirty

Is "vm.dirty_expire_centisecs" set to 3000? If yes, the following could happen:

- Emby wrote new data to the database

- because of "vm.dirty_expire_centisecs", this new data can sit in RAM for up to 30 seconds before it is written to disk

- then you changed the path; Emby restarted and could not find the current database anymore, as it had not yet been written to the SSD cache

 

If this is the reason, I have to update the guide: we would need to disable the container and wait more than 30 seconds before changing the path.
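
In practice that precaution could be as simple as this (a sketch; the container name is only an example):

# Sketch: stop the container and force dirty pages to disk before editing the path,
# instead of relying on the 30 second vm.dirty_expire_centisecs window.
docker stop EmbyServer   # example container name, use your own
sync                     # flush all dirty pages to disk immediately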

 

What it means for you: the file is lost (sorry for that).

Edited by mgutt
Link to comment
root@Tower:~# sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200

 

I think it's the official Emby docker, but I don't remember for certain, and I rename my containers so I can't tell from that. The repo is emby/embyserver:latest

 

Link to comment
