Unraid tweaks for Media server performance?



Oh yes, stuff happens, especially when people like us go and do special tweaks. (Stuff also happened to me, but the backup script from @mgutt is great and I use it for both Emby and Plex. Appdata is huge, so incremental backup is great! And of course there are the VM/Docker backup apps from the Unraid app community.)

 

This issue got me thinking, what would happen if you have the appdata folder as:

[screenshot of the appdata share configuration]

And you have all the appdata paths in your Docker containers changed to /cache/

[screenshot of the Docker container path mapping]

But you do have the global share settings set:

[screenshot of the Global Share Settings]

And you have an overflow of Plex metadata to appdata? (With the "Prefer" setting!)

Would it write to the array?

The path from the Docker container would only see the cache drive, so the new data would not be available?

How would this work? (Don't really want to test this in practice :))

Sorry if this is a stupid Q...



Overflow is only valid for Shared Paths. A Direct Path (/mnt/cache) does not overflow to the array. This means that if the cache becomes full and you use /mnt/cache, the Plex container will produce errors because it is out of storage. That is the reason why I suggest in my guide to set "Min. free space" in your Global Share Settings.

 

Example:

  • 1TB SSD cache
  • /mnt/cache is your SSD
  • /mnt/disk1 is your array
  • /mnt/user/Movies uses disk1 and uses the cache setting "yes"

Scenario 1

  • Plex appdata is set to /mnt/user/...
  • You upload more than 1TB of movies to your server. After occupying 1TB of your SSD, further movie files are written to the array. 
  • As the SSD is full, Plex writes to the array

 

Scenario 2

  • Plex appdata is set to /mnt/cache...
  • You upload more than 1TB of movies to your server. After occupying 1TB of your SSD, further movie files are written to the array. 
  • As the SSD is full, Plex produces errors

Scenario 3

  • You set 100GB Min. Free Space
  • Plex appdata is set to /mnt/cache...
  • You upload more than 1TB of movies to your server. After occupying 900GB of your SSD, further movie files are written to the array. 
  • As there are still 100GB left, Plex works as usual
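The "Min. free space" safety net can be sketched as a small user script (an illustration, not an Unraid built-in; the pool path and the 100 GiB threshold are examples):

```shell
#!/bin/sh
# Sketch: warn when a pool drops below a minimum-free threshold,
# mirroring what Unraid's "Min. free space" setting enforces for shares.
# Usage: check_min_free POOL MIN_GIB  -> warns and returns 1 when low.
check_min_free() {
  pool="$1"      # e.g. /mnt/cache
  min_gib="$2"   # e.g. 100
  free_kib=$(df -Pk "$pool" | awk 'NR==2 {print $4}')
  min_kib=$((min_gib * 1024 * 1024))
  if [ "$free_kib" -lt "$min_kib" ]; then
    echo "WARNING: $pool has less than ${min_gib} GiB free"
    return 1
  fi
}
# On Unraid you might call: check_min_free /mnt/cache 100
```

On a real server you would simply set the value in the GUI; the sketch only shows the rule being applied: a direct /mnt/cache path keeps writing until the pool is actually full, so the threshold has to stop the shares first.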

 

 


I finally got this working.  Nice tip!  I only use Emby as a backend so I don't have a lot of experience with how slow it usually loads covers remotely but, from what I can tell, it does seem faster.  Hopefully my friends and family will notice and benefit from the improvement.

 

Is this the sort of thing where the extra speed of an NVME over SSD will improve things further or not really worth it?

8 minutes ago, RockDawg said:

Is this the sort of thing where the extra speed of an NVME over SSD will improve things further or not really worth it?

It depends. After the covers are loaded into RAM, they won't be loaded from the drive again until you upload/download/execute something that overwrites that RAM cache. But I would say it improves things, as every database update, cover access, search query, WebGUI page load... will be faster because of the lower latency of an SSD. In addition, if your Dockers all run on an SSD, your HDDs can sleep all the time.

On 10/30/2020 at 2:21 AM, mgutt said:

It depends. After the covers are loaded into RAM, they won't be loaded from the drive again until you upload/download/execute something that overwrites that RAM cache. But I would say it improves things, as every database update, cover access, search query, WebGUI page load... will be faster because of the lower latency of an SSD. In addition, if your Dockers all run on an SSD, your HDDs can sleep all the time.

I did notice a big improvement in speed (fast NVMe!). With Plex metadata and thousands of covers, scrolling through a media library gives me a near-instant load. I also like the animation when scrolling through a media file (generated for each media file if you have it enabled; that's hundreds of gigs). I can recommend buying the biggest NVMe you can afford. I went through three, increasing the size each time (would do it again if it were possible, but 2TB is the max today).

@mgutt is correct. I have now placed the following on my NVMe cache drive:

 

appdata

domains

and the /mnt/cache/system/docker/docker.img

Works great!

 

8 hours ago, ChatNoir said:

Simply click on the first drive of the pool and you will have the Minimum free space field. It can only be adjusted with the Array stopped.

 

Got it, thanks. I couldn't find anything that said "Global Share Settings", and I found a few different places to set minimum free space, so I wasn't sure. Plus, what I see on that screen doesn't look like his screenshots, so I wanted to make sure. Thanks!

On 10/28/2020 at 1:49 AM, mgutt said:

Overflow is only valid for Shared Paths. A Direct Path (/mnt/cache) does not overflow to the array. This means that if the cache becomes full and you use /mnt/cache, the Plex container will produce errors because it is out of storage. That is the reason why I suggest in my guide to set "Min. free space" in your Global Share Settings.

 

Example:

  • 1TB SSD cache
  • /mnt/cache is your SSD
  • /mnt/disk1 is your array
  • /mnt/user/Movies uses disk1 and uses the cache setting "yes"

Scenario 1

  • Plex appdata is set to /mnt/user/...
  • You upload more than 1TB of movies to your server. After occupying 1TB of your SSD, further movie files are written to the array. 
  • As the SSD is full, Plex writes to the array

 

Scenario 2

  • Plex appdata is set to /mnt/cache...
  • You upload more than 1TB of movies to your server. After occupying 1TB of your SSD, further movie files are written to the array. 
  • As the SSD is full, Plex produces errors

Scenario 3

  • You set 100GB Min. Free Space
  • Plex appdata is set to /mnt/cache...
  • You upload more than 1TB of movies to your server. After occupying 900GB of your SSD, further movie files are written to the array. 
  • As there are still 100GB left, Plex works as usual

 

 

 

I did all this when we first discussed /mnt/cache, and after some time you really get used to having all the appdata and VMs on your fast cache drive.

 

My 2TB NVMe drive is reaching the 100GB limit (now defined on the cache drive and not in the Global Share Settings).

If money weren't an issue I would upgrade to a 4TB NVMe, but they are incredibly expensive (my IRQs don't allow for a second NVMe).

 

So my option is to split the VMs: some on the cache drive and others on the array.

How would this best be done in practice? I guess it isn't enough to remove the /mnt/cache/ path when the "domains" folder is set to Prefer?

 

Would the best option for keeping performance be to create a new domains share on the array and just split them (fast & slow)?

[screenshot of the proposed share settings]

 

@mgutt Also, I am still using many of your great scripts (also in this thread), like the one you did for Plex appdata backups. Having everything on cache for speed requires good backup 🙂

I was wondering if you have a similar solution for doing VM backups? (I found the solution in the app store to be buggy and to cause too many problems.) Again, thanks for all your help and especially all the great posts that you share with all of us in the forum!

7 hours ago, casperse said:

Would the best option keeping performance be to create a new domains share on the array? and just split them (Fast & Slow)

There is no rule that you must put all vdisks into the "domains" share. So feel free to create as many shares as you like and set their caching rules as you prefer. For example, add the share "vmarray" and set its caching to "no". Now move the vdisk file (of course only while the VM isn't running) into this new share, edit the VM to update the vdisk path, and you're done. PS: you don't need to use /mnt/diskX/sharename as the path in your VM config. You can use /mnt/user/sharename. Unraid automatically replaces /mnt/user with /mnt/diskX in the background before starting the VM.

 

Another solution could be to add multiple HDDs to a second pool. It's not as fast as the SSD cache pool, but it would be a lot faster than your array. Some Unraid users even create ZFS pools (through the command line) and use those paths for their VMs (or even for all data).
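The steps above could be sketched like this (the share and file names are examples; run it only while the VM is stopped):

```shell
#!/bin/sh
# Sketch: move a vdisk into a slower, array-only share, then point the
# VM config at the new /mnt/user path (Unraid resolves it to the real
# disk before starting the VM).
move_vdisk() {
  vdisk="$1"   # e.g. /mnt/user/domains/slowvm/vdisk1.img
  dest="$2"    # e.g. /mnt/user/vmarray/slowvm
  mkdir -p "$dest"
  mv "$vdisk" "$dest/"
}
# Example (VM stopped, "vmarray" share already created with caching "no"):
# move_vdisk /mnt/user/domains/slowvm/vdisk1.img /mnt/user/vmarray/slowvm
# Then edit the VM and set its disk to /mnt/user/vmarray/slowvm/vdisk1.img
```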


Hi all - I think the topic below is relevant under our "Tweaks for media server and performance".

 

Currently there is a lot of talk about the video done by IBRACORP,

and the guide made by TRaSH on setting up media shares on Unraid.

 

I followed another recommendation and created a UAD (unassigned device) for my downloads, and then move them along:

Downloads to unassigned drive --> cache --> array, with separate shared folders for media

 

I would like to hear what you have done?

My setup is the "old school" one, with a different share for each media type and, IMPORTANT for me, the correct split level!

 

Music

     Albums

TV Shows

     TV shows kids

     TV shows

Movies

     Animation

     Stand-up

     Concerts

     Movies 4K

     Movies

     Movies Kids

 

My setup required different split levels. (Mainly because in case of a crashed drive it's much easier to see what's lost if data is grouped at least by season, and a movie folder is kept on one drive. Not to mention spinning up drives if you are binge watching something.)

 

The structure recommended in the video and the webpage is all under one parent share called DATA:

(I actually like the simplicity of this directory setup) - just not sure it's the best on Unraid

[screenshot of the recommended folder structure]

 

The media folder at the bottom would then have subfolders for each of your media types if you, like me, split them up...

 

QUESTION: Do you use this new setup? (I can see how this would simplify the paths between all the Dockers and the *arr apps and make atomic moves/hardlinks possible.)
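As a sketch, the layout from the guide can be created like this (`./data` stands in for `/mnt/user/data`; the folder names follow the TRaSH convention and are examples). Because downloads and media sit under one parent, a hardlink can replace a copy:

```shell
#!/bin/sh
# Sketch of the one-parent-share layout; on Unraid the base would be
# /mnt/user/data instead of ./data.
DATA="${DATA:-./data}"
mkdir -p "$DATA"/torrents/movies "$DATA"/torrents/tv "$DATA"/torrents/music
mkdir -p "$DATA"/media/movies "$DATA"/media/tv "$DATA"/media/music

# A hardlink between the download and media folders costs no extra space
# and is atomic, because both paths live on the same filesystem:
touch "$DATA"/torrents/movies/example.mkv
ln -f "$DATA"/torrents/movies/example.mkv "$DATA"/media/movies/example.mkv
```

This is exactly why the single DATA share enables the atomic moves mentioned above: hardlinks only work within one filesystem, so separate top-level shares that land on different disks force a copy instead.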

If you are running this setup then what split level are you using?

What would be the optimal settings?

Not sure how this works. (I don't understand the new split level headers; the old one with levels was much easier to understand!)

 

I can see the recommended share settings for data are these: [screenshot of the recommended data share settings]

 

One more interesting tweak I found on the webpage, for the new Plex scanner, is the IMDb ID support in folder names:

(Could be handy if you need to rescan your collection or move to a file structure like the above?)

[screenshot of folder naming with IMDb IDs]

 

As always if you think this is off topic I will move my post 🙂


On 2/23/2020 at 2:50 AM, casperse said:

Hi Everyone

 

Unraid is great and I am, like many others, using my Unraid server for Plex (and of course other things).

So I would really like to collect all the tweaks and hacks done by others to increase the performance of large media servers doing transcoding.

 

First just to get the "normal" recommendation listed:

  • Appdata on a cache drive (as fast as possible - SSD/M.2)
  • HW encoding using the Unraid Nvidia plugin + GPU
  • Structure of media in each folder / optimized files for transcoding?

 

Specific Unraid tweaks:

  • Moving transcoding to RAM (Update better guide for doing this)

 

 

What other things or tips do you have to speed things up?

Speed up the UI? Has anyone tried moving the DB to RAM? And did it help?

 

Looking forward to getting some input from the power users! 👍 and updating this post with new things!

Thanks for a really great forum with so many helpful people

 

(I placed this post here because it should not be about the Dockers themselves but about things around them and Unraid. Please move it if you find a better place for it!)

Thanks so much for this! Plex went from very slow, giving errors because it timed out ("waiting for busy database"), to super responsive with no more errors.


Hi all.

 

I've made these settings to my Plex docker to great effect, so thank you!

 

I have one question I was hoping someone could explain to me. It's regarding using "Direct disk access", with the instructions here:

 

So, I did this and I started thinking. I have two NVMe drives in a BTRFS RAID 1 setup. If I use direct disk access (/mnt/cache/appdata/plex) and bypass the Unraid SHFS, does this also bypass the BTRFS RAID 1, so that my database is only on one NVMe and not two?

 

I really don't want my database on just one drive. (I've spent a lot of time getting my database customized and don't want to lose it. I just had a scare where I thought I lost it, but I was able to recover it.)

 

Thank you all for the help, these instructions are amazing!

 

John

7 hours ago, DepthVader said:

does this also bypass the BTRFS raid 1 so that my database is only on one nvme and not two?

No, it's on the whole RAID pool.

Doing this only bypasses the aggregation of the array and pools, not the inner workings of BTRFS RAID. (Not sure that would even be possible.)
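If you want to double-check this yourself, the data profile shows up in the output of `btrfs filesystem df`; a small sketch (the `data_profile` helper is hypothetical, not part of btrfs-progs):

```shell
#!/bin/sh
# `btrfs filesystem df /mnt/cache` prints lines such as
#   Data, RAID1: total=500.00GiB, used=120.00GiB
# regardless of whether the files were written via /mnt/user or
# /mnt/cache. This hypothetical helper pulls out the data profile:
data_profile() {
  awk '/^Data/ { gsub(/[:,]/, "", $2); print $2 }'
}
# On Unraid: btrfs filesystem df /mnt/cache | data_profile
# "RAID1" here confirms the database is still mirrored on both NVMe drives.
```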


This is a great thread. Are there any issues with running some of the above ideas concurrently?

I.e., hardlinking for the storage folders in the pool and direct disk access for the cache?

 

I've read a bunch of forums but it's hard to find the "best method" answer. With 2 x 1TB NVMe drives, is it better to dedicate one to the appdata cache and the other to a separate cache for Plex appdata/metadata/etc.? I know I would lose parity, but I would use the CA Appdata Backup/Restore plugin to back up both drives. Or is it better to keep it as RAID 0 with all files on the single cache? My server will be 90% Plex, with the odd UHD disc being backed up with MakeMKV, and backups of the computers in our house. No other VMs.

 

Last question: is there any benefit to combining Nvidia GPU and RAM transcoding? I have a P2000 that will be used for transcoding and 64 GB of ECC. I will have a max of 4 video streams at any given moment. There was a comment in the IBRACORP YouTube video:

Managed to combine this RAM technique with GPU transcoding enabled and it has significantly sped things up. For anyone else wondering, to enable GPU transcoding (assuming you have an Nvidia card installed) you must first add the extra parameter "--runtime=nvidia" (without quotations), separated by a space from the RAM parameters/any other parameters. You'll of course also need to have set up the appropriate drivers, which you can find in Community Applications. A working config will look a little something like this: "--runtime=nvidia --device=/dev/dri --mount type=tmpfs,destination=/tmp,tmpfs-size=20000000000 --no-healthcheck". This will still use your GPU to transcode but will offload that data to RAM instead of smashing your CPU and cache resources. Happy homelab-ing! Edit: I should add that I have this tried and tested on linuxserver's repository but YMMV.

"--device=/dev/dri" is only for CPU transcoding. Since a recent update, the container will fail to spin up if you use this with GPU transcoding, so just remove it to get "--runtime=nvidia --mount type=tmpfs,destination=/tmp,tmpfs-size=20000000000 --no-healthcheck". And of course, adjust "tmpfs-size=xxxxxxxxxxx" to be appropriate for for how many bytes of RAM you're able to allocate in your server or you might exceed your total RAM which may lead to containers/vm's etc. starting to crash.

1 hour ago, sittingmongoose said:

If I move my docker.img to direct cache access, do I need to move all my Dockers as well? Or can some of my Dockers use direct access and some use the default method?

What do you mean by "move" Dockers?! The appdata dir? No.

 

But if you want the best performance and the most efficient setup, then move the complete appdata dir to the cache as well and change as many paths as possible to /mnt/cache. But don't forget the risks: if the cache becomes full, this can cause container crashes / corrupt data. That's why it is so important to set "Min. free space", so normal shares are not able to fully utilize the cache.

8 minutes ago, mgutt said:

What do you mean by "move" Dockers?! The appdata dir? No.

 

But if you want the best performance and the most efficient setup, then move the complete appdata dir to the cache as well and change as many paths as possible to /mnt/cache. But don't forget the risks: if the cache becomes full, this can cause container crashes / corrupt data. That's why it is so important to set "Min. free space", so normal shares are not able to fully utilize the cache.

I'm sorry, when I said move I meant changing the directory to direct cache access, according to your guide from a few years ago.

