Nelson

SSD or HDD in array


I'm currently running an NVMe drive as cache and a Seagate Barracuda 6TB in the array. Right now I've filled about 4TB, and expect it to fill up completely in the next 6 months or less. I'm wondering if I should just keep adding more HDDs to the storage array, or whether an SSD storage array would be viable.

 

By viable I mean safe and unlikely to fail. I've been looking around, and while most of what I've read seems to lean towards HDDs, I've also seen some people running an SSD array, apparently without any problems.

 

The specific parts I would run in the array would either be a Samsung 860 QVO 4TB (probably starting with 3 to 4 of these) or another 3 to 4 Seagate Barracuda 5400rpm drives.

 

Not sure if this makes a difference, but it is on a 10-gig network. The machine is only powered on on an as-needed basis, so if I don't need it at the time, it stays off.

 

Would anyone who hopefully knows more than me be able to tell me whether I should run SSDs or HDDs in the array, and why?


Most SSDs should work fine as array devices, though TRIM won't work for now. Also, the 860 QVO won't be much faster than spinners for writes; reads will be much faster, though.

6 hours ago, Nelson said:

The specific parts I would run in the array would either be a Samsung 860 QVO 4TB (probably starting with 3 to 4 of these) or another 3 to 4 Seagate Barracuda 5400rpm drives.

The limitation, I think, is cost. QVO pricing is still too high to justify an all-SSD array. At maybe half the current price per GB, they'd be a viable option for home use.

8 hours ago, johnnie.black said:

Most SSDs should work fine as array devices, though TRIM won't work for now. Also, the 860 QVO won't be much faster than spinners for writes; reads will be much faster, though.

What do you mean by the 860 not being much faster than spinners? Doesn't the 860 have something like 3x the write speed of an HDD? I'm pulling numbers from UserBenchmark, and it looks like the 860 does about 380MB/s compared to a Barracuda's roughly 100 to 130MB/s.

5 hours ago, Nelson said:

What do you mean by the 860 not being much faster than spinners? Doesn't the 860 have something like 3x the write speed of an HDD?

The 860 QVO is QLC and can only sustain 160MB/s writes after filling its small SLC cache; the 860 EVO is a different story and can write much faster than an HDD.

8 hours ago, Nelson said:

What do you mean by the 860 not being much faster than spinners? Doesn't the 860 have something like 3x the write speed of an HDD? I'm pulling numbers from UserBenchmark, and it looks like the 860 does about 380MB/s compared to a Barracuda's roughly 100 to 130MB/s.

The 860 QVO uses an "adaptive" SLC cache, which effectively means the cache capacity shrinks as free space runs out. Apparently it ranges from 6GB (full drive) to 78GB (empty drive). While writing within the SLC cache (it's a firmware feature; the drive doesn't have true SLC cells), it performs at about the same level as the 860 EVO. When the cache runs out, it drops to 160MB/s.
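To put those cache figures in perspective, here's a toy Python model of a large sequential write. All numbers are assumptions pulled from the figures above; the 42GB cache size is just a mid-range guess between the 6GB and 78GB extremes, not a measured value:

```python
# Rough model of a large sequential write to an 860 QVO-style drive:
# the first `cache_gb` go at SLC-cache speed, the remainder at the
# sustained QLC rate. All figures are illustrative assumptions.

def write_seconds(total_gb, cache_gb=42, cache_mb_s=520, sustained_mb_s=160):
    """Estimated seconds to write total_gb sequentially."""
    cached = min(total_gb, cache_gb)
    rest = total_gb - cached
    return (cached * 1000) / cache_mb_s + (rest * 1000) / sustained_mb_s

# A 200GB transfer to a half-full QVO vs. a ~130MB/s HDD:
qvo = write_seconds(200)
hdd = (200 * 1000) / 130
print(f"QVO ~{qvo / 60:.1f} min, HDD ~{hdd / 60:.1f} min")
```

The takeaway: for transfers much bigger than the remaining cache, the QVO only narrowly beats a spinner.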

 

The headline sequential write numbers flatter the QVO a little, e.g. once you start comparing its sustained rate to a slow HDD at 130MB/s (and that's slow even for an HDD; my 7200rpm drives are still faster than that towards the very end of the platter).

  • Random I/O is still faster than an HDD, cache or no cache.
  • Read speed is still consistently an order of magnitude faster than an HDD.

 

Nevertheless, as I said, QLC is still too expensive to be viable home mass storage. At half its current price, I'd have no problem switching to all-SSD.

 

Edited by testdasi

On 2/15/2020 at 3:24 AM, testdasi said:

The 860 QVO uses an "adaptive" SLC cache, which effectively means the cache capacity shrinks as free space runs out. Apparently it ranges from 6GB (full drive) to 78GB (empty drive). While writing within the SLC cache (it's a firmware feature; the drive doesn't have true SLC cells), it performs at about the same level as the 860 EVO. When the cache runs out, it drops to 160MB/s.

 

The headline sequential write numbers flatter the QVO a little, e.g. once you start comparing its sustained rate to a slow HDD at 130MB/s (and that's slow even for an HDD; my 7200rpm drives are still faster than that towards the very end of the platter).

  • Random I/O is still faster than an HDD, cache or no cache.
  • Read speed is still consistently an order of magnitude faster than an HDD.

 

Nevertheless, as I said, QLC is still too expensive to be viable home mass storage. At half its current price, I'd have no problem switching to all-SSD.

 

Dang... I didn't know that about the 860. If I have to factor in that extra tidbit about the adaptive cache, it makes it a lot less appealing.

 

I did originally think about going the all-SSD array route, mostly for the noise, but single-drive capacity as well as cost always held me back. Energy use would probably end up similar as well, since I'd be running more drives.

 

I guess if I really think about it, write speed doesn't matter as much as read speed, and I can always throw another SSD into the array for specific files just for the read speed, with everything else on the HDDs. The only time I've noticed any lag during streaming was when the server was moving files anyway, so streaming over a wire sees little lag. I'll have to try wireless sometime later just to see the results.
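For what it's worth, a quick back-of-the-envelope check suggests the wire is rarely the bottleneck for one or two streams (all figures below are ballpark assumptions, not measurements):

```python
# Even a high-bitrate 4K remux (~80 Mbps) is far below what a single
# 5400rpm drive (~130 MB/s sequential) can deliver, which is why
# streaming reads rarely stutter unless the disk is busy with other I/O.

stream_mbps = 80                 # assumed 4K remux bitrate, megabits/s
drive_mb_s = 130                 # assumed HDD sequential read, megabytes/s
link_mb_s = 10_000 / 8           # 10GbE expressed in megabytes/s

streams_per_drive = (drive_mb_s * 8) // stream_mbps
print(f"One HDD can feed ~{int(streams_per_drive)} such streams; "
      f"the 10GbE link tops out at {link_mb_s:.0f} MB/s")
```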

 

Out of curiosity, what drives do you use? I'm currently using 6-8TB Barracudas at 5400rpm; at about $25/TB (CAD), that seems the most cost effective. Should I go for 7200rpm drives instead? They seem to cost quite a bit more, about $33-$50/TB, which might not be so bad on its own, but it starts to add up across larger drives and multiple drives. I am looking at the SkyHawk, IronWolf, and Barracuda Pro for 7200rpm drives, though, as those seem to be the cheapest.
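The price gap is easy to sketch. These are the CAD $/TB figures quoted above, purely illustrative (the $40/TB is just a midpoint of the $33-$50 range):

```python
# Back-of-envelope array cost at the quoted CAD $/TB figures.

def array_cost(drives, tb_per_drive, price_per_tb):
    """Total cost of an array of identical drives."""
    return drives * tb_per_drive * price_per_tb

cost_5400 = array_cost(4, 6, 25)   # 4x 6TB Barracuda @ $25/TB
cost_7200 = array_cost(4, 6, 40)   # 4x 6TB 7200rpm @ ~$40/TB midpoint
print(f"5400rpm: ${cost_5400}, 7200rpm: ${cost_7200}, "
      f"premium: ${cost_7200 - cost_5400}")
```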

7 hours ago, Nelson said:

Dang... I didn't know that about the 860. If I have to factor in that extra tidbit about the adaptive cache, it makes it a lot less appealing.

Out of curiosity, what drives do you use? I'm currently using 6-8TB Barracudas at 5400rpm; at about $25/TB (CAD), that seems the most cost effective. Should I go for 7200rpm drives instead? They seem to cost quite a bit more, about $33-$50/TB, which might not be so bad on its own, but it starts to add up across larger drives and multiple drives. I am looking at the SkyHawk, IronWolf, and Barracuda Pro for 7200rpm drives, though, as those seem to be the cheapest.

Whether you should go for 7200rpm (or even an SSD array) depends on the number of streamers and how much you dislike buffering.

Just based on my own anecdotal quick tests:

  • With just 1-2 concurrent streamers accessing the same drive, 5400rpm is more than good enough.
  • At about 4 concurrent streamers hitting the same HDD, you may start to see some buffering with 5400rpm. But they have to be accessing the SAME disk for it to be a problem, so I reckon 5400rpm is still OK.
  • Around 6+ streams is where 7200rpm starts to make sense.
  • 10+ streams is where an SSD array starts to become justifiable.

So if cost is important and you don't have that many streamers, I would say stick with 5400rpm.
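Those thresholds line up with a toy model of why concurrent readers punish a single spinner: every extra reader forces head seeks, so effective throughput falls faster than simple division suggests. The 30% per-extra-stream seek penalty below is an illustrative assumption, not a measurement:

```python
# Toy model: concurrent readers force seeks, so per-stream throughput
# on one HDD degrades faster than drive_speed / n_streams.

def per_stream_mb_s(seq_mb_s, n_streams, seek_penalty=0.3):
    """Effective MB/s each stream sees from one shared HDD."""
    effective_total = seq_mb_s / (1 + seek_penalty * (n_streams - 1))
    return effective_total / n_streams

for n in (1, 2, 4, 6):
    mb = per_stream_mb_s(130, n)   # assumed 5400rpm, ~130 MB/s sequential
    print(f"{n} streams: ~{mb:.0f} MB/s each")
```

By around 6 streams each reader is down in the single-digit MB/s range under this model, which is roughly where high-bitrate streams would start to buffer.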

 

In terms of 7200rpm drives, I just buy whatever I need that is cheapest from a reputable dealer whenever I need it.

Recently, those have happened to be IronWolf drives.

 

The reason I mainly use 7200rpm is that I use the Unraid array mostly as my backup server, so there are a lot of small files, which benefit from 7200rpm.

The QVO would be even better than 7200rpm for my backup jobs, but the cost is still too high to justify the benefit.

 

Edited by testdasi

1 hour ago, testdasi said:

Whether you should go for 7200rpm (or even an SSD array) depends on the number of streamers and how much you dislike buffering.

Just based on my own anecdotal quick tests:

  • With just 1-2 concurrent streamers accessing the same drive, 5400rpm is more than good enough.
  • At about 4 concurrent streamers hitting the same HDD, you may start to see some buffering with 5400rpm. But they have to be accessing the SAME disk for it to be a problem, so I reckon 5400rpm is still OK.
  • Around 6+ streams is where 7200rpm starts to make sense.
  • 10+ streams is where an SSD array starts to become justifiable.

So if cost is important and you don't have that many streamers, I would say stick with 5400rpm.

 

In terms of 7200rpm drives, I just buy whatever I need that is cheapest from a reputable dealer whenever I need it.

Recently, those have happened to be IronWolf drives.

 

The reason I mainly use 7200rpm is that I use the Unraid array mostly as my backup server, so there are a lot of small files, which benefit from 7200rpm.

The QVO would be even better than 7200rpm for my backup jobs, but the cost is still too high to justify the benefit.

 

It's just 1 or 2 streamers for now. Most of the files on the array are just an archive of videos that I go through every so often. I guess I'll stick to the 5400rpm drives.

 

Is it possible to put a couple of SSDs into the array for dedicated files/folders? I.e. a specific user gets an entire SSD to themselves, with its own dedicated parity just for that drive, and files would go directly to it without going through the cache.

15 minutes ago, Nelson said:

Is it possible to put a couple of SSDs into the array for dedicated files/folders? I.e. a specific user gets an entire SSD to themselves, with its own dedicated parity just for that drive, and files would go directly to it without going through the cache.

You can add SSDs to the array, with the caveat that it's not officially supported (even though they would work just like a normal HDD).

However, note that some SSDs have been reported here to cause parity errors due to how their firmware does garbage collection / wear leveling.

So if you ever do that, I recommend watching your parity checks for errors, even just 1 or 2 (because any error means corrupt data if rebuilding).

 

Your "with its own dedicated parity" part isn't possible, however. That's multi-array functionality, which is presumably still in development.

Edited by testdasi

On 2/17/2020 at 5:39 AM, testdasi said:

You can add SSDs to the array, with the caveat that it's not officially supported (even though they would work just like a normal HDD).

However, note that some SSDs have been reported here to cause parity errors due to how their firmware does garbage collection / wear leveling.

So if you ever do that, I recommend watching your parity checks for errors, even just 1 or 2 (because any error means corrupt data if rebuilding).

 

Your "with its own dedicated parity" part isn't possible, however. That's multi-array functionality, which is presumably still in development.

I'll keep that in mind then. Thanks for all the info; it helps greatly.

