Unraid tweaks for Media server performance?



Oh yes, stuff happens, especially when people like us go and do special tweaks. (Stuff also happened for me, but the backup script from @mgutt is great and I use it for both Emby and Plex; appdata is huge, so incremental backup is great. And of course there are the VM/Docker backup apps from the Unraid Community Applications.)

 

This issue got me thinking, what would happen if you have the appdata folder as:

[Screenshot: appdata share settings]

And you have all your appdata paths in the Docker containers changed to /cache/

[Screenshot: Docker appdata path mapping]

But you do have the global share settings set:

[Screenshot: Global Share Settings]

And you have an overflow of Plex metadata in appdata? (With the Prefer setting!)

Would it write to the array?

The path from the Docker container would only see the cache drive, so new data would not be available to it?

How would this work? (Don't really want to test this in practice :))

Sorry if this is a stupid Q...



Overflow is only valid for shared paths. A direct path (/mnt/cache) does not overflow to the array. This means that if the cache becomes full and you use /mnt/cache, the Plex container will produce errors because it is out of storage. That is the reason why I suggest in my guide to set "Min. free space" in your Global Share Settings.

 

Example:

  • 1TB SSD cache
  • /mnt/cache is your SSD
  • /mnt/disk1 is your array
  • /mnt/user/Movies uses disk1 and uses the cache setting "yes"

Scenario 1

  • Plex appdata is set to /mnt/user/...
  • You upload more than 1TB of movies to your server. After occupying 1TB of your SSD, further movie files are written to the array. 
  • As the SSD is full, Plex writes to the array

 

Scenario 2

  • Plex appdata is set to /mnt/cache...
  • You upload more than 1TB of movies to your server. After occupying 1TB of your SSD, further movie files are written to the array.
  • As the SSD is full, Plex produces errors

Scenario 3

  • You set 100GB Min. Free Space
  • Plex appdata is set to /mnt/cache...
  • You upload more than 1TB of movies to your server. After occupying 900GB of your SSD, further movie files are written to the array. 
  • As there are still 100GB left, Plex works as usual
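The three scenarios above boil down to a free-space check that Unraid applies for shared paths. A minimal bash sketch of that decision, assuming GNU coreutils; the function names and paths are illustrative, not Unraid's actual implementation:

```shell
#!/bin/bash
# Illustrative only: mimics the "Min. free space" decision for a share
# with cache setting "yes". Not Unraid's real code.

# True if the filesystem behind $1 has at least $2 KiB available.
has_min_free() {
    local path="$1" min_kib="$2"
    local avail_kib
    avail_kib=$(df --output=avail "$path" | tail -1)
    [ "$avail_kib" -ge "$min_kib" ]
}

# Pick the write target: cache while it has headroom, else the array.
# A direct path like /mnt/cache skips this logic entirely - hence the
# out-of-space errors in Scenario 2.
choose_target() {
    local cache_path="$1" array_path="$2" min_kib="$3"
    if has_min_free "$cache_path" "$min_kib"; then
        echo "$cache_path"
    else
        echo "$array_path"
    fi
}

# Example (100GB threshold, expressed in KiB):
# choose_target /mnt/cache /mnt/disk1 $((100 * 1024 * 1024))
```

With a 100GB minimum, writes fall back to the array once the cache has less than 100GB free, which is exactly Scenario 3.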

 

 


I finally got this working. Nice tip! I only use Emby as a backend, so I don't have a lot of experience with how slow it usually loads covers remotely, but from what I can tell it does seem faster. Hopefully my friends and family will notice and benefit from the improvement.

 

Is this the sort of thing where the extra speed of an NVME over SSD will improve things further or not really worth it?

8 minutes ago, RockDawg said:

Is this the sort of thing where the extra speed of an NVME over SSD will improve things further or not really worth it?

It depends. After the covers are loaded into RAM, they won't be loaded from the drive again until you upload/download/execute something that overwrites that RAM cache. But I would say it improves things, as every database update, cover access, search query, and WebGUI page load will be faster because of the lower latency of the NVMe. In addition, if your Dockers all run on an SSD, your HDDs can sleep all the time.

On 10/30/2020 at 2:21 AM, mgutt said:

It depends. After the covers are loaded into RAM, they won't be loaded from the drive again until you upload/download/execute something that overwrites that RAM cache. But I would say it improves things, as every database update, cover access, search query, and WebGUI page load will be faster because of the lower latency of the NVMe. In addition, if your Dockers all run on an SSD, your HDDs can sleep all the time.

I did notice a big improvement in speed (fast NVMe!) with Plex metadata and thousands of covers on it: scrolling through a media library I get a near instant load. I also like the animation when scrolling through a media file (generated for each media file if you have it enabled; hundreds of gigs). I can recommend buying the biggest NVMe you can afford. I went through three, increasing the size each time (I would do it again if it were possible, but 2TB is the max today).

@mgutt is correct. I have now placed the following on my NVMe cache drive:

 

  • appdata
  • domains
  • /mnt/cache/system/docker/docker.img

Works great!
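If appdata, domains and the docker image are supposed to live entirely on the cache, it is worth checking that nothing has quietly overflowed to the array. A small sketch using Unraid's /mnt/user0 view, which shows only the array side of each user share (the helper name is mine):

```shell
#!/bin/bash
# List files of a share that ended up on the array instead of the cache.
# On Unraid you would call this as: leaked_files /mnt/user0/appdata
# Empty output means the share really is cache-only.
leaked_files() {
    local array_side="$1"
    [ -d "$array_side" ] && find "$array_side" -type f
    return 0
}
```

If files show up, run the mover (with the share set to Prefer) or move them back manually before relying on cache speed.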

 

8 hours ago, ChatNoir said:

Simply click on the first drive of the pool and you will have the Minimum free space field. It can only be adjusted with the Array stopped.

 

Got it. Thanks. I couldn't find anything that said "Global Share Settings" and I found a few different places to set minimum free space so I wasn't sure. Plus what I see on that screen doesn't look like his screenshots so I wanted to make sure. Thanks!

On 10/28/2020 at 1:49 AM, mgutt said:

Overflow is only valid for shared paths. A direct path (/mnt/cache) does not overflow to the array. This means that if the cache becomes full and you use /mnt/cache, the Plex container will produce errors because it is out of storage. That is the reason why I suggest in my guide to set "Min. free space" in your Global Share Settings.


I did all this when we first discussed /mnt/cache, and after some time you really get used to having all the appdata and VMs on your fast cache drive.

 

My 2TB NVMe drive is reaching the 100GB limit (now defined on the cache drive and not in the Global Share Settings).

If money wasn't an issue I would upgrade to a 4TB NVMe, but they are incredibly expensive (and my rig doesn't allow for a second NVMe).

 

So my option is to split the VMs: some on the cache drive and others on the array.

How would this best be done in practice? I guess it isn't enough to remove the /mnt/cache/ path when the "domains" share is set to Prefer.

 

Would the best option, performance-wise, be to create a new domains share on the array and just split them (fast and slow)?

[Screenshot: domains share settings]

 

@mgutt I am also still using many of your great scripts (also in this thread), like the one you did for Plex appdata backups. Having everything on cache for speed requires good backups 🙂

I was wondering if you have a similar solution for doing VM backups? (I found the solution in the app store to be buggy and causing too many problems.) Again, thanks for all your help and especially all the great posts you share with everyone in the forum!

7 hours ago, casperse said:

Would the best option keeping performance be to create a new domains share on the array? and just split them (Fast & Slow)

There is no rule that all vdisks must go into the "domains" share. So feel free to create as many shares as you like and set their caching rules as you prefer. For example, add the share "vmarray" and set its caching to "no". Now move the vdisk file (as long as the VM isn't running, of course) into this new share, edit the VM to update the vdisk path, and you're done. PS: you don't need to use /mnt/diskX/sharename as the path in your VM config. You can use /mnt/user/sharename. Unraid automatically replaces /mnt/user with /mnt/diskX in the background before starting the VM.
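The steps above can be sketched in shell form. Assumptions: the VM is already shut down, and the helper name, share and file names are examples, not fixed Unraid paths:

```shell
#!/bin/bash
# Relocate a vdisk to another share (e.g. an array-only "vmarray" share).
# Shut the VM down first - moving a vdisk under a running VM corrupts it.
move_vdisk() {
    local vdisk="$1" dest_dir="$2"
    mkdir -p "$dest_dir" && mv "$vdisk" "$dest_dir/"
}

# Example on Unraid (paths are illustrative):
#   move_vdisk /mnt/user/domains/Win10/vdisk1.img /mnt/user/vmarray/Win10
# Afterwards edit the VM template so the vdisk points at the new share;
# /mnt/user/... is fine, as Unraid resolves it to the real disk on start.
```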

 

Another solution could be to add multiple HDDs to a second pool. It's not as fast as the SSD cache pool, but it would be a lot faster than your array. Some Unraid users even create ZFS pools (through the command line) and use those paths for their VMs (or even for all their data).


Hi all, I think the topic below is relevant under our "Tweaks for media server and performance".

 

Currently there is a lot of talk about the video done by IBRACORP

and the guide made by TRaSH on setting up media shares on Unraid.

 

I followed another recommendation and created an Unassigned Devices (UAD) drive for my downloads and then move files along:

downloads on the unassigned drive --> cache --> array, with separate shared folders for each media type

 

I would like to hear what you have done?

My setup is the "old school" one with a different share for each media type and, importantly for me, the correct split level!

 

Music
     Albums
TV Shows
     TV shows kids
     TV shows
Movies
     Animation
     Stand-up
     Concerts
     Movies 4K
     Movies
     Movies Kids

 

My setup requires different split levels. (Mainly because, in case of a crashed drive, it's much easier to see what's lost if data is grouped at least by season and a movie folder is kept on one drive. Not to mention spinning up drives if you are binge-watching something.)

 

The structure recommended in the video and on the webpage is all under one parent share called data:

(I actually like the simplicity of this directory setup; I'm just not sure it's the best on Unraid.)

[Screenshot: recommended /data folder structure]

 

The media folder at the bottom would then have subfolders for each of your media types if you, like me, split them up...

 

QUESTION: Do you use this new setup? (I can see how this would simplify the paths between all the Docker containers and the *arr apps and make atomic moves/hardlinks possible.)
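Whether atomic moves/hardlinks can work between two folders is easy to probe: a hard link only succeeds when both paths sit on the same filesystem, which is why the single-share /data layout enables them. A small sketch (the helper name is mine):

```shell
#!/bin/bash
# Print "yes" if a hard link can be created from dir $1 into dir $2,
# i.e. both directories sit on the same filesystem.
can_hardlink() {
    local probe="$1/.hlprobe.$$" link="$2/.hlprobe-link.$$"
    touch "$probe"
    if ln "$probe" "$link" 2>/dev/null; then
        rm -f "$probe" "$link"
        echo yes
    else
        rm -f "$probe"
        echo no
    fi
}

# Example: can_hardlink /mnt/user/data/torrents /mnt/user/data/media
```

When the link fails, the *arr apps fall back to copy+delete across shares, which doubles the writes.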

If you are running this setup then what split level are you using?

What would be the optimal settings?

Not sure how this works (I don't understand the new split level headers; the old one with levels was much easier to understand!)

 

I can see the recommended share settings for data are these:

[Screenshot: recommended share settings for the data share]

 

One more interesting tweak I found on the webpage, for the new Plex scanner, is the IMDb ID support in folder names:

(Could be handy if you need to rescan your collection or move to a file structure like the above?)

[Screenshot: folder naming with IMDb ID]
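For reference, the naming Plex's newer scanner understands embeds the ID in braces, e.g. "The Matrix (1999) {imdb-tt0133093}". A sketch of tagging an existing movie folder this way (the helper name and paths are my own examples; the ID is looked up manually):

```shell
#!/bin/bash
# Append a Plex-style IMDb tag to a movie folder name.
tag_movie_folder() {
    local dir="$1" imdb_id="$2"
    mv "$dir" "${dir} {imdb-${imdb_id}}"
}

# Example:
#   tag_movie_folder "/mnt/user/Movies/The Matrix (1999)" tt0133093
# renames the folder to "The Matrix (1999) {imdb-tt0133093}"
```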

 

As always if you think this is off topic I will move my post 🙂


