Need help choosing a controller



I currently have an X11SSH-LN4F lying around that I want to repurpose for Unraid.

I'm going to stuff the NAS with 20x WD Red SATA 6Gb/s drives for data and 2-4x Samsung 860 EVO 500GB SATA 6Gb/s SSDs.

 

Since the board doesn't have 24 SATA or SAS ports, I have to get a controller.

I'm also planning on getting a 10Gb network with the goal of a whopping 1GB/s transfer speed. But this sacrifices one PCIe x8 slot for the NIC, leaving me one x8 and one x4 slot for a controller that Unraid can use.
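For reference, the rough arithmetic behind the 1GB/s figure looks like this; the ~10% protocol overhead is just an assumption, not a measurement:

```python
# Back-of-the-envelope numbers for the 10GbE goal (rough assumptions).
link_gbit = 10                 # 10 Gb/s link speed
raw_gbyte = link_gbit / 8      # 1.25 GB/s raw line rate
overhead = 0.90                # assume ~10% lost to TCP/IP and SMB overhead
print(f"raw: {raw_gbyte:.2f} GB/s, realistic ceiling: ~{raw_gbyte * overhead:.2f} GB/s")
```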

 

So my questions are:

1. Looking at the LSI SAS 9305-24i (PCIe x8, 12Gb/s SAS), is it enough for my setup without a bottleneck?

2. Is it better to use two controllers rather than one controller for all the drives if I want 10GB/s speed?

3. Any other recommendations on controllers that are better suited to my situation?

 

 

 

 

Link to comment
23 minutes ago, r4ptor said:

1. Looking at the LSI SAS 9305-24i (PCIe x8, 12Gb/s SAS), is it enough for my setup without a bottleneck?

Yes

 

23 minutes ago, r4ptor said:

Is it better to use two controllers rather than one controller for all the drives if I want 10GB/s speed?

Assuming you mean 1GB/s, you'll never get that with that hardware; unRAID doesn't stripe disks. You can still get a decent speed writing to the SSDs if they're in a cache pool, but for 1GB/s you'd need NVMe. Writing to the array will max out at around 150MB/s.

Link to comment
1 hour ago, johnnie.black said:

Assuming you mean 1GB/s, you'll never get that with that hardware; unRAID doesn't stripe disks. You can still get a decent speed writing to the SSDs if they're in a cache pool, but for 1GB/s you'd need NVMe. Writing to the array will max out at around 150MB/s.

Of course I meant 1 gigabyte per second transfer speed between PC and NAS on a 10 gigabit network. :D

 

I read somewhere on this forum that it's possible to stripe the cache drives but not the data drives. If this works, then I won't be needing an NVMe drive but rather two regular SSDs.

 

That 150MB/s you speak of would be the bottleneck in a 1Gb network, but internally from cache to array it would be limited to up to 600MB/s as per SATA3.

Link to comment
8 minutes ago, r4ptor said:

That 150MB/s you speak of would be the bottleneck in a 1Gb network, but internally from cache to array it would be limited to up to 600MB/s as per SATA3.

No, cache to array can only go as fast as the array can be written. There's no striping in the array, as mentioned, and there's also some write speed penalty to update parity, even with turbo write. And even without the parity update, HDDs can't saturate SATA3 anyway.
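To put rough numbers on that, here's a minimal sketch assuming a ~160MB/s WD Red (an assumed ballpark, not a measurement); the turbo write thread linked below has the real details:

```python
# Very rough model of unRAID array write speed (assumed disk figure).
hdd_seq = 160   # MB/s, approximate sequential speed of a single WD Red

# Default read/modify/write: each write first reads the data disk and the
# parity disk, then writes both, so throughput falls well below one disk.
rmw_estimate = hdd_seq / 2

# Turbo (reconstruct) write: reads all the other data disks and writes data
# plus parity in one pass, so it tops out near the slowest spindle.
turbo_estimate = hdd_seq

print(f"read/modify/write: ~{rmw_estimate:.0f} MB/s")
print(f"turbo write:       ~{turbo_estimate:.0f} MB/s")
```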

 

https://lime-technology.com/forums/topic/50397-turbo-write/

 

Link to comment
11 hours ago, r4ptor said:

I read somewhere on this forum that it's possible to stripe the cache drives but not the data drives. If this works, then I won't be needing an NVMe drive but rather two regular SSDs.

You can, but with two SATA SSDs you won't get close to 1GB/s; more like 600MB/s.
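As a rough sanity check (the ~520MB/s figure is an assumed spec-sheet number for an 860 EVO, not a measurement):

```python
# Rough estimate for a two-drive striped cache pool (assumed figures).
ssd_seq_write = 520                    # MB/s, approx. sequential write of one 860 EVO
drives = 2

theoretical = ssd_seq_write * drives   # ~1040 MB/s if striping scaled perfectly
realistic = 600                        # MB/s, the ballpark quoted above

print(f"theoretical: ~{theoretical} MB/s, realistic: ~{realistic} MB/s")
```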

 

11 hours ago, r4ptor said:

That 150MB/s you speak of would be the bottleneck in a 1Gb network, but internally from cache to array it would be limited to up to 600MB/s as per SATA3.

No, that's the max speed the array will be able to write, from anywhere.

Link to comment
19 hours ago, trurl said:

No, cache to array can only go as fast as the array can be written. There's no striping in the array, as mentioned, and there's also some write speed penalty to update parity, even with turbo write. And even without the parity update, HDDs can't saturate SATA3 anyway.

 

8 hours ago, johnnie.black said:

No, that's the max speed the array will be able to write, from anywhere.

Sure, from the cache to the array of data disks, this will be slow, as johnnie.black stated earlier.
But I want 1GB/s from my PC to the cache; if it then takes 10 min for 1GB to go from the cache to the data disks, that isn't so important, right now at least :)

 

8 hours ago, johnnie.black said:

You can, but with two SATA SSDs you won't get close to 1GB/s; more like 600MB/s.

Are you suggesting that I go for 4x SSDs then?

Link to comment
8 minutes ago, r4ptor said:

Are you suggesting that I go for 4x SSDs then?

Best would be a couple of NVMe devices (or just one if no redundancy is needed), but test with just those two SSDs first; using RAID0, they might get closer to 1GB/s than I think.
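If you do test it, a minimal sequential-write sketch along these lines would give you the number; the /mnt/cache path and the 8GiB size are just assumptions, adjust them for your pool:

```python
# Minimal sequential-write test for the cache pool. fsync keeps the page
# cache from inflating the result.
import os
import time

TARGET = "/mnt/cache/write_test.bin"   # hypothetical path on the raid0 cache pool
BLOCK_MB = 1
TOTAL_MB = 8192                        # 8 GiB, large enough to be meaningful

buf = os.urandom(BLOCK_MB * 1024 * 1024)
start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL_MB // BLOCK_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.monotonic() - start
os.remove(TARGET)

print(f"wrote {TOTAL_MB} MiB in {elapsed:.1f}s -> {TOTAL_MB / elapsed:.0f} MiB/s")
```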

 

 

 

Edited by johnnie.black
Link to comment
  • 2 weeks later...

Thanks for your tip, johnnie.black. I've been reading up on this topic and you seem to be right.

 

I decided to go with a Norco 4324 case, 24-bay with 6 SAS backplanes (1 per row of 4 drives), but I might need some suggestions on a new controller; the 9305-24i is a huge hole in my wallet.

Is there a good SAS expander + controller combo that can handle 24 drives with no bottleneck? (Preferably cheaper too.)

Does the SAS expander need a PCIe slot for data, or could I simply use a riser to power it, since I'm running out of slots on my motherboard?

Link to comment

Can't find a RES2CV360 for sale anyway :)

 

How about the Intel RES3TV360 with an LSI 9300-8i?

Both seem to support 12Gb/s, and combined they go for $400-420 instead of $620 for a 9305-24i.

 

Is it possible to use one of these just to power the expander?
[image: PCIe riser card]

Link to comment

If using dual link you'll have at least 4400MB/s for all the disks, so with 24 disks that's 183MB/s per disk, which is more than a WD Red's max speed, and possibly around 7000MB/s with LSI's DataBolt technology, assuming the RES3TV360 uses an LSI chipset*.

 

* Edit: It doesn't; it uses a PMC chipset, so no DataBolt.
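Putting the dual-link numbers side by side (the WD Red figure is an assumed ballpark, not a measurement):

```python
# Per-disk bandwidth estimate for a dual-linked expander, using the figures
# from the post above (approximations).
uplink_mb = 4400     # MB/s usable over 2x 4-lane links at SATA 6Gb/s rates (8 x ~550 MB/s)
disks = 24
wd_red_max = 180     # MB/s, roughly the fastest a WD Red manages on outer tracks

per_disk = uplink_mb / disks
print(f"~{per_disk:.0f} MB/s per disk vs ~{wd_red_max} MB/s drive max")
# ~183 MB/s per disk, so even with all 24 drives going flat out (e.g. during
# a parity check) the expander uplink shouldn't be the bottleneck.
```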

Edited by johnnie.black
Link to comment
