Do I need an SSD?


smush

Recommended Posts

11 hours ago, smush said:

How would having an SSD benefit?

Assuming you intend to run Plex as a Docker container, yes, you would benefit from an SSD.

 

The appdata share, which holds all the config files and databases for Plex (and other Docker containers), functions much faster on an SSD than on a spinning array drive.  Having appdata and all docker containers on the SSD also means array drives do not need to be spun up unless you are accessing media from them.  No need to spin up a drive to access/update a docker container or database.
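As a hedged illustration of that layout (the paths below are the common Unraid defaults, not taken verbatim from this thread), a typical Plex container maps its config/database into appdata on the SSD-backed cache while the media itself stays on the spinning array:

```shell
# Illustrative container layout (paths are common Unraid defaults).
# /config (Plex database) lands in appdata on the SSD-backed cache,
# so routine database access never spins up an array disk; the media
# share lives on the array and only spins a disk up on playback.
docker run -d --name=plex \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/media:ro \
  plexinc/pms-docker
```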

 

If you are only running Plex or a handful of docker containers and not doing any write caching on the cache drive/pool, a 120GB SSD should be plenty. You could go even smaller, but 120GB SSDs are so cheap that the extra headroom is worth having as your needs grow.

 

You usually set your SSD up as a cache drive, although using it as an unassigned device is also possible.

 

Typically, the docker.img file is set to 20GB and it rarely needs to be bigger than that unless you have misconfigured containers which are writing to the image when they should not be.
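A quick sketch of the sizing math (the 20 GiB default and the appdata/image paths mentioned above are the usual Unraid conventions; the helper function below is just an illustration, not an Unraid tool):

```shell
#!/bin/sh
# Sketch: how full is a docker.img of the default 20 GiB size?
# On Unraid the image conventionally lives under the system share;
# a container writing inside the image (instead of to a mapped
# /config or /mnt/user path) is the usual cause of unexpected growth.

pct_used() {
    # pct_used USED_GIB TOTAL_GIB -> integer percent full
    echo $(( $1 * 100 / $2 ))
}

# A 20 GiB image with 15 GiB used is 75% full - time to check
# which container is writing where:
pct_used 15 20
```

On a live system, `docker system df` summarizes what is actually consuming space inside the image.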

Edited by Hoopster
  • Like 1
Link to comment

The general rule is that an NVMe SSD tends to be faster than a SATA SSD, so it is a trade-off of cost vs speed. For many purposes the difference may not be that noticeable.

 

It is worth noting that even an HDD that is not part of the array (i.e. used as a cache drive) will still provide a performance advantage for docker/VM use over hosting them on the main array, as the process of updating parity on the main array significantly slows down write speeds.  It is just that for most people an SSD is a better trade-off of cost vs performance, and many modern motherboards have built-in slots for M.2 SSDs.

Link to comment
16 hours ago, smush said:

Should I be looking at NVME or will a regular SSD do the job?

You will not notice any difference between an NVMe and a SATA SSD for typical Unraid uses, except in three cases.

  • Under heavy IO load, you should notice less lag with NVMe (e.g. lower CPU load). That's because NVMe is built with parallelism in mind, so IOWAIT tends to be lower compared to SATA (and M.2 AHCI to a lesser extent). Part of it is also that NVMe is inherently faster.
  • If you regularly copy very large files then 1GB/s is perceivably faster than 500MB/s.
  • If you have a gaming VM, it is highly beneficial to pass through the NVMe to your VM as a PCIe device (i.e. the vfio binding method and NOT the ata-id method). That effectively isolates the NVMe IO from the host (Unraid) IO, which manifests as visibly less lag when the host is under heavy IO load.
    • If you have a spare PCIe SATA controller, you can do something similar, i.e. pass through the controller to the VM for a similar effect, but most people don't have a spare PCIe SATA card lying around plus a free PCIe slot to do that.
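The second point above is easy to put numbers on. A rough sketch (the speeds are illustrative round figures for SATA vs NVMe, not benchmarks from this thread):

```python
# Rough copy-time comparison for a large transfer such as 1 TB of media.
# ~500 MB/s approximates a SATA SSD; ~1000 MB/s a modest NVMe drive.

def copy_minutes(size_gb: float, speed_mb_s: float) -> float:
    """Approximate transfer time in minutes for size_gb at speed_mb_s."""
    return size_gb * 1000 / speed_mb_s / 60

sata = copy_minutes(1000, 500)    # 1 TB over a SATA SSD
nvme = copy_minutes(1000, 1000)   # 1 TB over NVMe
print(f"SATA: ~{sata:.0f} min, NVMe: ~{nvme:.0f} min")
```

So for a 1 TB copy the difference is roughly half an hour vs a quarter of an hour; for everyday small writes it is imperceptible.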

 

Link to comment
5 hours ago, testdasi said:

If you regularly copy very large files then 1GB/s is perceivably faster than 500MB/s.

Thanks. Could you just clarify your point above as for example copying over 1TB of video content to the Plex server would go to the spinning disk or is the SSD involved somewhere in this process?

Link to comment

It would depend.

In regular operation, it is generally good to have a cache drive (SSD) for quicker transfers to the server, then let the mover do its thing at night and move the data to the spinning part of the array.

On the initial loading of data to the array, it is best not to use cache as the amount of data is usually larger than the size of the cache drive.

Link to comment
Thanks. Could you just clarify your point above as for example copying over 1TB of video content to the Plex server would go to the spinning disk or is the SSD involved somewhere in this process?

The SSD is only involved with the video content if it is truly a cache drive (assuming it is large enough) for the media share. In this case, writes to the array go first to the cache drive and the content is later moved to the spinning disks by the Mover. This makes initial writes to the “array” much faster although not parity protected until moved to the actual media share.

 

Of course, the Plex database is updated in appdata anytime media is written to the array, with or without cache enabled for the media share. However, these are relatively small writes that are not affected by the size of the media content. You are not likely to see much difference between a SATA and an NVMe SSD for database reads/writes.

 

Some have a Cache-only media share for frequently accessed Plex content. This content always lives on the cache drive and the read/write speeds of NVMe would be noticeable in this case.

 

 

Sent from my iPhone using Tapatalk

  • Like 1
Link to comment
2 hours ago, smush said:

Given the amount of media (TB's) It would work out very expensive to use SSDs for caching I'm thinking.

The idea behind a cache drive is not to have cache capacity equal to the media capacity on the array.  It is a temporary landing place for initial writes so the mover can run later (usually at a low-demand time such as 3am) to move the media to the spinning disk array and update parity, which is a much slower process than writing to cache.  Media on the cache drive is still accessible as if it were on the spinning disk array.
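For reference, the Mover schedule is configured in the Unraid GUI (Settings → Scheduler), and under the hood it is driven by a cron entry along these lines (illustrative fragment; the exact time and path come from your own settings, and Unraid regenerates this from the GUI rather than expecting hand edits):

```shell
# Illustrative nightly mover entry (run at 03:40, output discarded).
40 3 * * * /usr/local/sbin/mover &> /dev/null
```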

 

The cache drive, if you plan to do write caching, only needs to be as large as the largest amount of data you expect to write to the array in a day (assuming the mover runs daily).  You could have a 60TB array but only a 500GB cache drive, for example, as long as you plan on moving 500GB or less to the array each day.
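That sizing rule can be sketched in a few lines (the 25% headroom factor is my illustrative assumption to leave room for appdata and the docker image, not a figure from this thread):

```python
# Cache sizing rule of thumb: the cache only needs to hold one day's
# worth of writes (the mover drains it nightly), plus some headroom
# for appdata and docker.img. The 1.25 headroom factor is illustrative.

def cache_needed_gb(daily_write_gb: float, headroom: float = 1.25) -> float:
    """Smallest cache size (GB) covering one day of writes, with headroom."""
    return daily_write_gb * headroom

# Writing ~400 GB/day? A 500 GB cache drive covers it, even on a 60 TB array.
print(cache_needed_gb(400))
```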

 

When initially moving data to the array on a new system, it is a good idea either to have no cache or parity drive enabled, or to use Turbo Write during this initial seeding.  After the data is moved, you can introduce parity and caching as needed based on your anticipated caching needs.

 

In your case, set up the array initially without parity and move all the media to it.  Your "cache" drive will be enabled since it is the home for your docker appdata share, but you won't use it for write caching.

 

After moving the data, enable a parity drive and let it update parity.  You can then decide if you like the concept of write caching and, if so, for which shares you want it enabled.

  • Like 1
Link to comment
