Next Big Upgrade



Could anyone please tell me if this will work?

 

I've finally come to terms with the fact that my old Intel G630 isn't cutting it anymore and decided on a rather large upgrade. After it lasted 10+ years, I want to spec out an upgrade that'll last for the foreseeable future (perhaps not 10 years, but that would be nice).

 

My budget is very loose. I've already made peace with the fact that I'll probably be spending $3-4k+ on these upgrades. My current build is as follows:

Case: Norco 4224

PSU: EVGA 850W 80+ Gold (can't remember the model but it's about 3 years old)

HDD: Mix and match, ranging from 3TB to 16TB (21 disks total).

Cache: 600GB WD Velociraptor

Mid-wall fans: 3x Cougar 120mm HDB

MB/RAM/CPU are all going to be replaced, as well as my 3x LSI 9211 HBAs.

 

The parts I have selected so far are:

https://pcpartpicker.com/list/sP9MFg

2x LSI 9305-16i

I expect to upgrade eventually to all Seagate Exos 16TB or larger unless recommended otherwise.

Unknown fans (see below)

 

What I wanted to know is if anyone sees anything they'd recommend changing, or notices anything that doesn't match up. Specifically, I wanted to pick people's brains about my choice of HBAs. I chose the motherboard in the PCPartPicker list because I wanted a 12th-gen Intel CPU (for its iGPU). That, combined with a desire for a 10GbE port and 2 or more M.2 slots, led me to that board. It only has 3x PCIe slots, however (1x PCIe 5.0, 2x PCIe 3.0).

 

I'm going to get the 2x Seagate FireCudas for cache and VMs, and I'm going to use this server primarily to run Plex/Jellyfin. I pretty much don't want a bottleneck anywhere, besides maybe the number of concurrent streams.

 

Also, if anyone can recommend 80mm or 120mm fans for the 4224 that would pull air through the hard drives, that would be great, as my HDD temps are getting up to 52°C+ during parity checks, and that seems kind of warm to me.

 

 

 

Edited by SergeantCC4

It would all work, but I probably wouldn't build it this way:

- Two LSI 9305-16i are really expensive just to connect 24 hard disks

- Six SAS-to-SATA cables

- DDR5 is almost double the price of DDR4

 

On 7/2/2022 at 1:55 AM, SergeantCC4 said:

I expect to upgrade eventually to all Seagate Exos 16TB or larger unless recommended otherwise.

Could you estimate whether 16 bays would already be enough, say 14x 16TB = 224TB of data capacity? If you limit yourself to 16 disks, you can avoid the expensive high-port-count HBAs.

 

On 7/2/2022 at 1:55 AM, SergeantCC4 said:

(perhaps not 10 years but that would be nice).

The best way would be to go to an HEDT platform: more PCIe lanes and memory slots. Only that kind of platform has room for upgrades. In your case, I would keep using the 9211-8i; the 9305-16i just frees up one PCIe slot with no performance gain. You should also consider a SAS expander backplane, which needs fewer cables (cables cost money too).

 

My main build is as follows:

X299 with a 9800X and 256GB of memory, no cache pool, 10GbE NIC (the X299 mobo was less than $240 and the 9800X less than $400 when I bought them), same cooler: Noctua NH-D9L

LSI 9300-4i4e: the 4i side connects to a 16-bay SAS backplane and the 4e side to an external 12-bay enclosure with an Adaptec 82885T expander.

 

https://www.amazon.com/dp/B07YP69HTM

https://www.amazon.com/dp/B07YD6SXF7

https://www.amazon.com/dp/B07B79CF9R

 

 

Edited by Vr2Io
9 hours ago, Vr2Io said:

It would all work, but I probably wouldn't build it this way:

- Two LSI 9305-16i are really expensive just to connect 24 hard disks

- Six SAS-to-SATA cables

- DDR5 is almost double the price of DDR4

The two LSI 9305-16i are an idea I had for two reasons. The first is that I want to get another case down the road. With Norco being extinct at the moment, I wanted to look at other options, which may have me splitting the two 9305-16i's between two chassis, or getting one with 30+ slots, in which case I would need the additional capability of the two 9305s. I don't want to go much past 30, because that seems like a lot of disks to have protected by only two drives.

 

9 hours ago, Vr2Io said:

In your case, I would keep using the 9211-8i; the 9305-16i just frees up one PCIe slot with no performance gain.

Second, and correct me if I'm wrong, but the 9305 is a PCIe 3.0 card vs. the 9211 being PCIe 2.0. So whether or not I eventually use all of the slots, the 9305 has 2x the bandwidth, so once I upgrade to all 7200rpm drives, the overall system speed could be faster for reads (I know writes are still parity-limited). That would let me do larger multi-disk reads without worrying about bandwidth issues.

 

The DDR5 thing I agree with 100%. It is crazy expensive right now, which is why I was also considering waiting until Raptor Lake or Zen 4 to see if the prices come down.

 

9 hours ago, Vr2Io said:

Could you estimate whether 16 bays would already be enough, say 14x 16TB = 224TB of data capacity? If you limit yourself to 16 disks, you can avoid the expensive high-port-count HBAs.

Could you clarify this?

The Norco 4224 chassis that I have has 24 slots, so (subtracting two disks for dual parity) 22x 16TB would give me ~350TB. As this is a build I hope not to have to change for the foreseeable future, I wanted as much expandability as possible.
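The capacity figures being thrown around here all come from the same arithmetic (bays minus parity disks, times disk size); a quick sketch, using the bay counts and 16TB disks discussed in this thread:

```python
def usable_tb(bays: int, parity_disks: int, disk_tb: int) -> int:
    """Usable array capacity: every bay filled, minus the parity disks."""
    return (bays - parity_disks) * disk_tb

# Norco 4224 (24 bays), dual parity, 16TB disks -> the ~350TB figure
print(usable_tb(24, 2, 16))  # 352

# 16-bay build, dual parity -> matches the 14x 16TB = 224TB suggestion
print(usable_tb(16, 2, 16))  # 224
```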

Edited by SergeantCC4
2 hours ago, SergeantCC4 said:

The two LSI 9305-16i are an idea I had for two reasons. The first is that I want to get another case down the road. With Norco being extinct at the moment, I wanted to look at other options, which may have me splitting the two 9305-16i's between two chassis, or getting one with 30+ slots, in which case I would need the additional capability of the two 9305s. I don't want to go much past 30, because that seems like a lot of disks to have protected by only two drives.

That's fine, but the cost is really high; other than that it should be OK (of course you'll need two x8 slots).

 

2 hours ago, SergeantCC4 said:

bandwidth issues.

If each 9211-8i connects 8 disks, then ~4GB/s / 8 = 500MB/s per disk, which is far more than a 7200rpm spinner's real-world speed. The 9305 has double the bandwidth but also double the disk count, so the actual bandwidth per disk won't change.
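The per-disk argument above can be sanity-checked with a couple of lines, assuming roughly 4GB/s usable for a PCIe 2.0 x8 card (9211-8i) and 8GB/s for PCIe 3.0 x8 (9305-16i); these are ballpark figures, not measured numbers:

```python
def per_disk_mb(link_gb_per_s: float, disks: int) -> float:
    """Rough usable bandwidth per disk in MB/s, splitting the link evenly."""
    return link_gb_per_s * 1000 / disks

print(per_disk_mb(4.0, 8))   # 500.0 -> 9211-8i with 8 disks
print(per_disk_mb(8.0, 16))  # 500.0 -> 9305-16i with 16 disks: no per-disk gain
```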

 

2 hours ago, SergeantCC4 said:

Could you clarify this?

I mean limiting the build to fewer than 16 bays in a 3U/4U case. With my 9300 + 82885T combination, say connected in dual link (4e+4e), you can still connect 28 internal disks without a bandwidth bottleneck, and the cost drops a lot (depending on the prices you can get for the hardware). The expander can be placed anywhere to free up a PCIe slot; you just need to provide external power to it.

Edited by Vr2Io

@Vr2Io First off thanks again for your quick responses. This is exactly the kind of answers/suggestions I'm looking for.

 

8 hours ago, Vr2Io said:

That's fine, but the cost is really high; other than that it should be OK (of course you'll need two x8 slots).

Yeah, the motherboards I'm looking at all have x16 slots that are either x4 or x8 electrical (the one in my PCPartPicker list is x4 for the second two slots, at PCIe 3.0), which leads me to my next point.

 

8 hours ago, Vr2Io said:

If each 9211-8i connects 8 disks, then ~4GB/s / 8 = 500MB/s per disk, which is far more than a 7200rpm spinner's real-world speed. The 9305 has double the bandwidth but also double the disk count, so the actual bandwidth per disk won't change.

I'm not sure why, especially since I've had the three 9211s for 6+ years, I'm only now realizing that they are x8 cards. For some reason I thought they were x4 cards and as a result believed they needed upgrading. So I think I'll keep the 3x 9211s for the short term and rethink the motherboard configuration to get more x8-mode slots.

 

8 hours ago, Vr2Io said:

I mean limiting the build to fewer than 16 bays in a 3U/4U case. With my 9300 + 82885T combination, say connected in dual link (4e+4e), you can still connect 28 internal disks without a bandwidth bottleneck, and the cost drops a lot (depending on the prices you can get for the hardware). The expander can be placed anywhere to free up a PCIe slot; you just need to provide external power to it.

This is where my expertise is limited. I'm not sure which HBA or expander does what, as I'm not caught up in that field. I basically have to read forums until someone mentions a card, then research whether it'll work for my use case, and it's really slowing me down. Is the expander meant to let me have two cases share hard drives on the same unRAID array?

3 hours ago, SergeantCC4 said:

Is the expander meant to let me have two cases share hard drives on the same unRAID array?

An expander is like a network switch: it fans out more ports that share the uplink bandwidth. You can use it internally or externally, and the interconnect cable can be several meters long.

 

With an expander built into the backplane, you only need one or two cables for 24 disks.

 

You can even use one 9211 (6Gb generation) to connect an expander (also 6Gb generation) for 24 disks, with some bottleneck: when all disks are accessed at once, each drops to ~166MB/s.
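The ~166MB/s figure follows from the 9211's PCIe 2.0 x8 link (~4GB/s usable, a ballpark assumption) being shared by all 24 disks behind the expander; a rough sketch:

```python
# Assumed ceiling for a 9211-8i (PCIe 2.0 x8), shared by everything
# behind the expander when all disks are read simultaneously.
hba_mb_per_s = 4000
disks = 24
print(hba_mb_per_s // disks)  # 166 -> ~166MB/s per disk
```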

Edited by Vr2Io

Thanks again. This cleared up a lot of stuff.

 

Do you or anyone else have any good fan recommendations for a Norco 4224? I swapped out the mid wall for a 3x 120mm fan wall, and I previously swapped out the two 80mm fans in the back, but the temps are getting higher than I'd like, and I wanted to see people's recommendations for replacements.


My 3U case came with three 1238 Superred fans (rated 1.5A) in the middle fan wall; I never used them due to their noise. I changed to common 1225-type fans (0.12A), i.e. Arctic. I like Arctic overall: long life and a reasonable price.

 

As those are silent-type fans, I live with the disks running hot.

2 hours ago, ryujin921 said:

About the NVMes, you're better off with the SK hynix P41. Faster (and currently cheaper) than those FireCudas.

I'm having some difficulty finding the 2TB version of the P41 at the moment. I did read some reviews, though, and I think the reason I chose the FireCuda was its drastically higher TBW rating. I know I'll practically never hit it, but I don't want to take any chances down the road over a slight increase in cost.

 

2 hours ago, ryujin921 said:

About the few PCIe slots: you can always use M.2-to-PCIe riser cables. One of them is already shared with the Gen3 M.2 slot, by the way.

Yeah, I know I'm trying to be both a beggar and a chooser, but I'm about to say screw it, get something stupid like a -24i HBA, slap that into the first slot, and call it a day... The lack of PCIe lanes on processors (or of multiple M.2 slots in conjunction with PCIe slots) is killing me.

