HYEPYC | High End EPYC Milan Build



In early June I started acquiring hardware for this build and I only just completed it this week, with many parts taking an extremely long time to arrive. Now that it's complete I'm excited to share it with you, after having enjoyed looking at your systems here on the forum and on the Unraid Discord.

 

Full Specifications

  • CPU: AMD EPYC Milan 7443P 24 Core / 48 Thread, 2.85 GHz base, 4 GHz boost
  • Heatsink: SuperMicro 4U Active SP3 cooler (Since replaced by Arctic 4U-M in Early 2024)
  • RAM: Samsung 256GB (8x32GB) DDR4 RDIMM ECC 3200MHz CL22
  • MOBO: SuperMicro H12SSL-NT EPYC 7002/7003 support with dual 10Gb
  • PSU: Corsair HX1200 1200 Watt (Platinum Rated)
  • HBA: Broadcom 9500-8i HBA (PCIe 4.0 NVMe/SAS/SATA)
  • M.2 Card: Generic PH44 4 x M.2 PCIe 4.0 card
  • M.2 Card2: Asus Hyper M.2 PCIe 4.0 card (Added Mid 2023)
  • SSD1: Samsung PCIe 3.0 2TB 970 Evo Plus NVMe Drive
  • SSD2 and 3: Western Digital PCIe 4.0 2TB SN850 NVMe Drive (Heatsink Sku)
  • SSD4, 5, 6 and 7: Samsung PCIe 4.0 2TB 980 Pro NVMe Drives (Heatsink Sku) (Added Early 2023)
  • NIC: Intel PCIe 3.0 X550-T2 2x10Gb/s Ethernet Network Card (Added Mid 2023)
  • NIC: Intel PCIe 3.0 X710-DA4 4x10Gb/s SFP+ Ethernet Network Card (Added Mid 2023)
  • GPU: NVIDIA RTX A4000 (Added Feb 2024)
  • HDD1: Western Digital 4x18TB SATA Hard Drives (Shucks)
  • HDD2: Seagate 7x10TB SATA Hard Drives (NAS Editions)
  • CASE: Gooxi 24-bay Chassis with 12Gb SAS3 Expander chip by PMC-Sierra
  • CABLES: Custom Molex Cables from CableMod (Backplane<->PSU)
  • FANS: Noctua, 3 x 120mm (A12x25), 2 x 60mm (A6x25)
  • UPS: APC BR1600SI (960 Watts / 1.6kVA, Pure Sinewave)

 

Early 2023 Edit: A few months on, some changes have been made to the build: 4 x 2TB 980 Pro SSDs were added. You can view photos and information about that in this update post.

 

Early 2024 Edit: Some more changes: new network cards, a new graphics card and a new CPU heatsink. You can view more about those, including photos, in this update post.

 

Originally some of this hardware was going to be different: the motherboard was going to be the ASRock Rack ROMED8-2T, the 18TB hard drives were going to be Toshiba enterprise drives, there was only going to be one SN850 SSD, and the UPS was going to be a CyberPower model. But as prices changed and the availability of certain parts became problematic, the build had to change.

 

I wouldn't normally buy external hard drives and shuck them, just due to the time and effort, but they were simply too cheap to ignore at £222.99 each on Amazon Prime Day. At this price I was able to purchase 4 x 18TB externals from WD for less than the price of 3 x 18TB bare drives from Toshiba, WD or Seagate. An extra drive at a lower cost? Hard to pass up, and as I show later they work perfectly. I linked to each individual part above in case you want to see more information about a specific component used in the build.

 

Individual Component Photos

  • Almost everything was photographed, but I can't link the photos as small thumbnails in the thread, so I'll spare you having to scroll past all of them. We'll begin with the completed build photos instead, and I'll include a few of the more interesting component shots after those.

 

The completed build

t6VVwkVYa.jpg

 

hZMjkxD8P.jpg

 

RPx9HQUQR.jpg

 

The below photo was taken just after I initially built it with the stock fans; I hadn't attempted any cable management yet.

 

Sdi7k62p9.jpg

 

The below photo is after the Noctua fans were installed and I had done a little bit of cable management. This is why the fan cables have switched from the tomato-ketchup kind to the black, fully sleeved kind. I did have to give up the hot-swap system that the case came with when changing fans, but the Noctuas I purchased come with a 1cm PWM lead from the fan and a 30cm extension cable, so it serves much the same purpose.

 

I could have cut the cables on the original fans and soldered the 30cm Noctua extensions to the hot-swap PCB to maintain this functionality, but I preferred to keep the original fans as-is in case the case's backplane had a fault requiring the case to be sent back to the retailer. Still, the Noctuas work great in the original holders and the sliding mechanism is maintained.

 

y8xn5jHV7.jpg

 

k4VG4UPLG.jpg

 

zYhWR74ik.jpg

 

bbxKMtW6d.jpg

 

VjXsgCJXe.jpg

 

iXhQbs6Bn.jpg

 

Individual Component Photos

 

The case came with three of the 40mm-thick industrial hot-swap fans shown below, which each consume 20 watts at peak output. They are extremely loud and move an incredible amount of air. I quickly swapped these out for Noctua Chromax Black A12x25s, which are essentially silent and move around 40% of the air of the stock fans. I'm happy to say the temperatures are still great with the slower Noctua fans.

 

K4t9sk4D5.jpg

 

I did get two of the 2TB SN850s with heatsinks, but unfortunately the motherboard's M.2 slots are too close together and only one can be installed. Thankfully I have a 4 x NVMe PCIe card with wider spacing, so I'll be placing these two and my Samsung NVMe drive into that once the parity build on my server is complete and I can power it down to add the card.

 

NnFAEW3gD.jpg

 

UPDATE: The SN850s are now in their PCIe card, which I put into the server today. Below is a photo of them installed in the card. As you can see, with these heatsinks the spacing is quite tight; SuperMicro unfortunately didn't take that into account when they put their M.2 slots right next to each other with barely a sheet of paper's gap between them.

 

The card I'm using here is just a cheap generic model for £36; it comes with no fan or heatsink and requires bifurcation support (x4/x4/x4/x4) on your motherboard. I chose it because I knew all the SSDs I'd be installing would have heatsinks on them and my server case has quite high airflow.

 

SWPdsQDq9.jpg

 

Samsung likes to ship their memory individually wrapped. I'd had this memory for about two months when I took this photo in June, but I left the modules sealed to keep them in pristine condition; you can see a better photo of them installed in the motherboard above.

 

FMTgT3aek.jpg

 

The 7443P CPU below was chosen not just for its 24 cores / 48 threads and Zen 3 architecture but for its high clock speeds. It has a 2.85GHz base and 4GHz boost clock, which is perfect for my use case.

 

K3aBdc9EL.jpg

 

It sure has a lot of pins, 4094 to be exact. If you know these EPYC chips, they feature 129 PCIe 4.0 lanes (128 usually accessible to the user through slots and connectors, with 1 reserved for the motherboard vendor's BMC implementation) and eight memory channels that take 3200MHz RDIMMs, which is what I installed to guarantee I get the maximum Infinity Fabric bandwidth.
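As a rough back-of-the-envelope figure (my own arithmetic, not an official AMD number), eight channels of DDR4-3200 give a theoretical peak of around 200GB/s of memory bandwidth:

```python
# Back-of-the-envelope peak memory bandwidth for 8-channel DDR4-3200.
# Theoretical maximum only; real-world throughput will be lower.
channels = 8
transfers_per_second = 3200e6  # DDR4-3200 = 3200 MT/s
bytes_per_transfer = 8         # each channel is 64 bits wide

peak_gb_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"Theoretical peak: {peak_gb_s:.1f} GB/s")  # ~204.8 GB/s
```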

 

CehRW5Dsw.jpg

 

The Broadcom 9500-8i below was chosen for a few reasons. I wanted to be able to use all 24 slots on my chassis at near line speed, and this card can do that at 96Gb/s (12GB/s), which works out to about 500MB/s per slot. It is a PCIe 4.0 x8 card and features signed firmware. I could have obtained a 9400-8i or even a 9300-8i on eBay for 1/3rd to 1/5th the cost of this card, but I wanted a brand new card at retail with a warranty and secure code signing for its firmware, so this is a retail Broadcom card purchased from a reputable etailer.
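For anyone wondering where that 500MB/s per slot figure comes from, it's just this arithmetic (ignoring SAS framing and expander overhead, so treat it as an upper bound):

```python
# Sanity check on the per-bay bandwidth for the 9500-8i feeding a 24-bay expander.
sas_lanes = 8        # the 9500-8i exposes 8 x SAS3 lanes
lane_gbit = 12       # SAS3 lane speed in Gb/s
drive_bays = 24

total_gbit = sas_lanes * lane_gbit       # 96 Gb/s
total_gbyte = total_gbit / 8             # ~12 GB/s
per_bay_mbyte = total_gbyte * 1000 / drive_bays
print(f"{total_gbit} Gb/s ~= {total_gbyte:.0f} GB/s -> ~{per_bay_mbyte:.0f} MB/s per bay")
```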

 

bE8dUfQXT.jpg

 

V5uaeJgdP.jpg

 

Below is my Intel X540-T2 Ethernet card. I actually have several of these and also an X710-T4, which is a newer model with four ports. The reason I'm installing this into the server, even though the motherboard already features 2 x 10Gb Ethernet, is that I'm going to be running pfSense in a VM on this server as a backup in the event my physical pfSense box has a hardware failure (which it did last year, and it was very annoying).

 

Early 2024 Edit: This X540-T2 card was subsequently removed from the server in mid-2023 and replaced with an X550-T2, which looks essentially identical. I made this change because the X550-T2 supports 2.5GbE and 5GbE in addition to 1GbE and 10GbE, and I needed the 2.5GbE for a cable modem, thus the swap.

 

So I'll be setting up a pfSense VM on Unraid and passing this card through, just so I have a backup; we all need internet, and I work from home, which makes it extra important.

 

r3EJAPnUy.jpg

 

Some may be surprised that I went with an ATX power supply instead of a dual-redundant setup. This chassis actually comes with several brackets which support single (ATX), dual and triple (server) redundant power supplies. I went with the HX1200 because, while it's more power than I'll need in this server, it has a 10-year warranty and Platinum-rated efficiency, and it's silent up to 450 watts and very quiet beyond that.

 

In fact, the main reason I chose to build this server myself, as opposed to buying a prebuilt system, was that I wanted an ATX power supply so the system could be as quiet as possible.

 

HjGWUsywt.jpg

 

QPCY4tV5L.jpg

 

And finally I thought I'd show the APC UPS. Not really that exciting, but it is a newer model, at least in the UK. It's line-interactive with pure sine wave output, which is important to me for stability.

 

dKr6hAMmG.jpg

 

RqzPYPqqx.jpg

 

So what has been the damage for all of this? Everything listed in the spec sheet was purchased specifically for this build except for the NVIDIA GTX 1080 Ti, the Intel X540-T2 and the 7 x 10TB hard drives, which I already had and will be moving over from my previous build. Not including those parts, the total cost has been £7,100 GBP, or $8,604.13 USD. I imagine with those parts included it would be around £8,500-£9,000, making it roughly $11,000 USD.

 

I did set out to build a very high-end server, and that certainly came with a price, especially considering I wasn't prepared to compromise much by going with slower or less memory, an older-generation CPU, a used chassis, an older HBA, and so on. If you're looking to build something similar, you can probably get 75-80% of the same capability and performance for half the cost by buying one-generation-older hardware and used parts where it makes sense, and I would certainly advise you to do so!

 

Feel free to ask any questions and I'll leave you with a couple system screenshots and benchmarks.

 

eJwDNUfWD.jpg

 

Above: Some benchmarks run from Windows before Unraid was installed on the system.

Below: So many threads!

 

NU83kwDJj.png

 

Below: Building Parity before I insert the rest of my 7 x 10TB drives (I still need to copy files from that old server to this one before moving those drives over).

 

JG3QCwzWe.png

Edited by Pri
Updates to changes in the build.
  • Like 4
  • Thanks 2
Link to comment

What will you do with this hardware and the 1080 Ti?

Curious if you need to do anything with this HBA, like flashing it to IT mode?

I was thinking about the same card, as I want to create a cool SSD downloads share so my drives can sleep most of the time.

Link to comment
14 minutes ago, J05u said:

What will you do with this hardware and the 1080 Ti?

Curious if you need to do anything with this HBA, like flashing it to IT mode?

I was thinking about the same card, as I want to create a cool SSD downloads share so my drives can sleep most of the time.

 

It'll be used for storing media, running virtual machines and running Docker containers.

 

The VMs will consist of stuff for my work: a mirror cluster of our live infrastructure made up of multiple VMs. That's mainly why it has so much memory and so many CPU cores. I'll of course also be running things like Plex, and it'll be our home server running Homebridge and all our home automation stuff, CCTV recording and so on.

 

The 1080 Ti will be used for machine learning applications with software that I write. I also have an RTX 3090 which is in my desktop system but I may put that into the server when I upgrade my desktop GPU. I really like the 24GB of VRAM for machine learning.

 

The HBA comes already flashed to IT mode, but I did flash it again to upgrade the firmware, as mine came with v14 and v23 was out. It was very easy to flash: just download the file, run one command in Windows from the StorCLI software, and it was up to date. I'm very happy with the card; it runs very cool and uses little energy (5.9 watts peak vs 11 watts for the previous-generation card, for instance).

 

The only downside, apart from the price of the card, is the availability of compatible cables. I had to use a 1-meter-long SAS cable because I literally couldn't acquire a 0.5-0.6m one, which would have fit better. This is because it uses the newer SlimSAS x8 port instead of MiniSAS HD, so the cable choices are a bit more limited.

  • Thanks 1
Link to comment

@Pri, your post could not have come at a better time for me. I'm looking to build a similar kind of server and have been given quotes by 45Drives and TrueNAS which are just insane (I was going for quotes because, while I've built several PCs and HEDT servers, I've not dabbled in real server hardware, so all the numbers and compatibilities were throwing me off).

 

I'm wondering if I could pick your brain via direct message as I have several questions! (I've sent you a message)

With that said, I'm keen to share knowledge too, so for anyone else reading, once I have my new server built (it'll be a few months) I will make a post to share the specs and decisions we made.

Link to comment

Today I hooked up the server to my UPS (the same APC model shown in the above post). And so now I have some power consumption numbers.

 

The server consumes 150 watts at idle; under a parity rebuild with all hard disks involved it consumes 200 watts, and with a high CPU load (75% or so, so not completely maxed out) it consumes 260 watts.

 

This is quite a solid showing for energy efficiency. I was surprised, as my old server, a dual Xeon E5-2667 v2 with 16 DIMMs, consumes about 360 watts at idle and can hit 500 watts under a sustained CPU load. So this is less than half of that, which is excellent.

 

Connected to my UPS I have a whole bunch of devices, including two switches (10Gb and 1Gb, both 16 ports), a WiFi access point, a Cloud Key, a pfSense router with a quad-core i5, and a CCTV camera. All of those devices combined consume 96 watts, and I made sure to monitor that over a full day before I connected the EPYC server so I could gauge exactly how much the server uses on its own.

 

As a result, at idle the total power consumption is about 250 watts (150 for the server plus roughly 100 for the other devices), which gives me a 25-minute runtime on the UPS (based on its own calculations), which is pretty good. Below I've included a screenshot of the UPS power draw as displayed in Unraid; remember this is the server plus all those other devices. The load during this screenshot was a second parity drive being built while two VMs were running some medium loads.
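If you want to sanity-check a runtime figure like that yourself, the arithmetic is roughly the following. The battery energy, efficiency and derating values below are assumptions I've plugged in for illustration, not the BR1600SI's published specs, so substitute your own UPS's numbers:

```python
# Very rough UPS runtime estimate. All battery figures are assumptions for
# illustration only - check your UPS's actual specifications.
battery_wh = 216.0          # e.g. 2 x 12V 9Ah lead-acid (assumed)
inverter_efficiency = 0.85  # assumed DC-to-AC conversion efficiency
high_rate_derating = 0.6    # lead-acid delivers less capacity at high loads (assumed)
load_watts = 250.0          # measured: server (~150 W) + network gear (~100 W)

runtime_min = battery_wh * inverter_efficiency * high_rate_derating / load_watts * 60
print(f"~{runtime_min:.0f} minutes at {load_watts:.0f} W")  # in the same ballpark as the UPS's own 25-minute estimate
```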

 

bmfBwemcA.png

Link to comment
1 hour ago, Pri said:

which gives me a 25 minute runtime on the UPS (based on its own calculations) which is pretty good.

I'd call that barely adequate. Applying several assumptions and rules of thumb - the first being that you don't want to drain a typical UPS below 50% to prolong eventual battery replacement - that means less than 15 minutes of usable runtime. Depending on how long it takes your typically running VMs to shut down properly when asked, you could be running on the ragged edge of getting everything shut down even if you call for shutdown at power-out plus 5 minutes.

 

I'm not saying you necessarily need more capacity, just that you need to manage what you have, and make sure your VMs get the message to start shutting down pretty much immediately on power out, via a properly configured apcupsd network client in the VM. If using Windows VMs, make sure they are set to hibernate on power loss, as a shutdown may trigger a lengthy queued Windows update.

Link to comment

My virtual machines shut down in less than 30 seconds (really it's more like 10 seconds), and I have things configured to shut down when the server has been without mains power for more than 1 minute, so there will be ample time for everything to shut down gracefully. I would expect the UPS to remain above 85%.

 

This UPS is perfectly adequate for my usage. I understand you have to make assumptions, as people do all kinds of silly things and you've seen a lot of it, but I already know not to run the battery down; I know it ages the battery faster, and I'm not yet using a lithium-ion-equipped UPS that can handle 500+ cycles. I never intended to run this UPS for 25 minutes in a power outage; it was just a statement about the capacity. I've often run servers with runtime projections of 17 minutes or less without issue. It's important to know your hardware and how long it takes to shut down, and all of that I know.

 

I'll be moving to Network UPS Tools (NUT) soon. I have other devices connected to the UPS that will act as shutdown clients as well, like my pfSense router, and some things I write myself can also act as clients.
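For the "things I write myself" part, a client really can be as simple as polling the NUT server and shutting down after a grace period. Below is a minimal sketch of the idea; the UPS name (apc), the NUT host (tower.local) and the grace period are placeholders/assumptions, and it assumes the NUT upsc client utility is installed on the machine running it:

```python
#!/usr/bin/env python3
"""Minimal NUT shutdown-client sketch (illustrative, not production code).

Assumes the NUT 'upsc' utility is installed and that a UPS named 'apc' is
exported by a NUT server on 'tower.local' (both names are placeholders).
Shuts the machine down once the UPS has been on battery for GRACE_SECONDS.
"""
import subprocess
import time

UPS = "apc@tower.local"  # placeholder: <upsname>@<nut-server-host>
GRACE_SECONDS = 60       # mirrors the "no mains power for 1 minute" rule
POLL_SECONDS = 5

def ups_status() -> str:
    # Returns e.g. "OL" (online) or "OB DISCHRG" (on battery, discharging)
    result = subprocess.run(
        ["upsc", UPS, "ups.status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

on_battery_since = None
while True:
    try:
        status = ups_status()
    except subprocess.CalledProcessError:
        status = ""  # failed poll: treat as unknown rather than shutting down
    if "OB" in status:
        on_battery_since = on_battery_since or time.monotonic()
        if time.monotonic() - on_battery_since >= GRACE_SECONDS:
            subprocess.run(["shutdown", "-h", "now"])  # needs root privileges
            break
    else:
        on_battery_since = None
    time.sleep(POLL_SECONDS)
```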

 

So don't worry, all covered, perfectly fine.

Edited by Pri
Link to comment
11 minutes ago, Pri said:

My virtual machines shut down in less than 30 seconds. I have things configured to shut down when the server has no power for more than 1 minute. So there is ample time for everything to shut down.

 

This is perfectly adequate, definitely not "barely" adequate.

Cool, I just keyed on the 25 minute runtime being good, which if you shut everything down promptly is fine, but if you are expecting your IT closet to stay functional for 25 minutes that would be an issue. Many folks assume a battery backup is meant to continue running as long as possible during an outage, which only applies to multi kilobuck setups that typically have their own rack, and usually have a diesel generator that steps in after a few minutes.

 

Consumer type battery backups are meant to get things shut down safely if the power is out more than a minute or two.

Link to comment
1 minute ago, JonathanM said:

Cool, I just keyed on the 25 minute runtime being good, which if you shut everything down promptly is fine, but if you are expecting your IT closet to stay functional for 25 minutes that would be an issue. Many folks assume a battery backup is meant to continue running as long as possible during an outage, which only applies to multi kilobuck setups that typically have their own rack, and usually have a diesel generator that steps in after a few minutes.

 

Consumer type battery backups are meant to get things shut down safely if the power is out more than a minute or two.

 

Indeed, there are a lot of people who don't understand all that, I am not one of those people.

Link to comment
35 minutes ago, Pri said:

I'll be moving to Network UPS Tools (NUT) soon. I have other devices connected to the UPS that will act as shutdown clients as well, like my pfSense router, and some things I write myself can also act as clients.

Have you had issues using apcupsd in client mode? I've been running with my Unraid server as the apcupsd master and all my other physical machines and VMs using apcupsd slaved to the Unraid server. It's worked great for me; everything starts an orderly shutdown staggered between 1 and 5 minutes after the power goes out, and everything has been properly put to bed 15 minutes into the outage.

Link to comment
Just now, JonathanM said:

Have you had issues using apcupsd in client mode? I've been running with my Unraid server as the apcupsd master and all my other physical machines and VMs using apcupsd slaved to the Unraid server. It's worked great for me; everything starts an orderly shutdown staggered between 1 and 5 minutes after the power goes out, and everything has been properly put to bed 15 minutes into the outage.

 

I've actually not tried it; I always just used NUT in the past, out of habit, as I used to have a CyberPower UPS and it just worked. But I'll check out the APC stuff, maybe it's better.

Link to comment
Just now, Pri said:

I'll check out the APC stuff, maybe it's better.

I wouldn't call it better, but it works for me and has for many many years, and inertia typically wins. apcupsd is NOT affiliated with the APC company, so YMMV.

 

Since you are familiar with and use NUT, you are probably better off sticking with it. Open source support of proprietary hardware is always a little hit or miss.

  • Upvote 1
Link to comment
Just now, JonathanM said:

I wouldn't call it better, but it works for me and has for many many years, and inertia typically wins. apcupsd is NOT affiliated with the APC company, so YMMV.

 

Since you are familiar with and use NUT, you are probably better off sticking with it. Open source support of proprietary hardware is always a little hit or miss.

Mmhm, very true. When I initially hooked up my server to the UPS I was getting communication errors; it would disconnect every few minutes and Unraid would bring up a notification about losing communication.

 

I thought okay, maybe it doesn't fully support my UPS, but before checking that I looked at the cables, and it seems there's little to no shielding on the included APC data cable - it doesn't even have a ferrite choke. I noticed it was running close to the AC power input on my server (though not actually touching it), moved it further away, and the communication problems were resolved.

 

Just thought I'd add it to the topic in case someone else Googles this and comes across the same problem :)

  • Thanks 1
Link to comment

Today I finally got my Unraid storage configured. It took a lot of time shuffling data around to migrate from my old server, but today it all finished.

 

Which means I now have things set up as follows:

2 x 18TB = Parity

2 x 18TB = Data

7 x 10TB = Data

2 x 2TB WD SN850 NVMe = Cache & VMs

1 x 2TB Samsung 970 EVO Plus = Probably a scratch drive for VMs, not sure yet.

 

FMkNPApCD.png

 

I'm quite happy with Unraid so far. The array performance has actually been better than I expected. Admittedly I had low expectations, and I had run it as a test on some other hardware to see how it worked and performed, but even so it surpassed what I thought it would do.

 

The 9500-8i HBA I bought has been a really good purchase, able to run every disk maxed out without any performance issues, which is as expected; I think I've only hit 1/5th of its total available bandwidth so far.

 

That 2TB NVMe drive I've put in the server (the Samsung 970 Evo Plus) was originally going to be my cache drive, but with the SN850s being so cheap I bought two of those and used one as my cache drive. I'm now not really sure what to do with the 2TB Evo Plus, but perhaps it'll come in handy as a VM scratch drive or something else in the future.

  • Like 1
Link to comment

Unless you are planning to mostly fill those data drives with the initial data load, I would advise not using so many data slots. Each empty drive accrues power-on hours, and it adds unnecessary risk if you do have a drive failure: all drives, even ones with totally empty filesystems, participate end to end in the parity equation for rebuilding a failed drive.

I typically recommend limiting the parity array's free space to twice the capacity of your largest drive, and adding space only after you fall below one largest-drive's worth of free space. So, in your case, I recommend reducing free space down to 36TB at most, and adding back data slots when the free space falls below 18TB.

Excess capacity is better sitting on the shelf waiting to replace the inevitable drive failure instead of sitting in the array and potentially BEING the next drive failure. Even better is limiting the time it spends on your shelf and using shelf time at the manufacturer's end, so when you get the drive you have a longer warranty period and possibly a lower cost per TB.

I typically keep one spare tested drive equal to or larger than the biggest drive in service in any of my Unraid servers. If I have a drive failure, I either replace the failed drive, or replace whichever drive makes sense and use the pulled good drive to replace the failure in another server. My drives migrate down the line from server to server; I have a backup server that still has some ancient 2TB drives that are still running fine. If one of those fails, my primary server gets an upgrade, and the good drive I replaced in the main server goes into the backup server to replace the failed drive.
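To make that rule of thumb concrete, here is a rough sketch of the heuristic above in Python; the function and the example numbers are purely illustrative, not anything Unraid itself calculates:

```python
# Sketch of the "keep parity-array free space near 1-2x your largest drive"
# rule of thumb from the post above. Sizes are in TB.
def array_slot_advice(free_tb: float, largest_drive_tb: float) -> str:
    if free_tb < largest_drive_tb:
        return "Add a data slot: free space is below one largest-drive's worth."
    if free_tb > 2 * largest_drive_tb:
        return "More slots than needed: keep spare drives on the shelf instead."
    return "Free space is in the recommended range; no change needed."

# Example with the numbers from this thread (largest drive = 18TB):
print(array_slot_advice(free_tb=36, largest_drive_tb=18))  # at the upper bound
print(array_slot_advice(free_tb=15, largest_drive_tb=18))  # time to add a slot
```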

 

Tech happiness is a well executed backup system.

  • Like 2
Link to comment

This will get filled before the end of the year, and more 18TB drives will be added at that time. I may even replace some of the 10TB drives already present with 18TB drives; those 10TB drives have been running 24/7 for 3 years in my previous server, which could not spin drives down at all due to the striping it did, so they've seen a lot of miles.

Link to comment
  • 2 weeks later...

This is an incredible build.

 

I will probably copy-cat a lot of things from you :)

 

 

One question: the case you are using has a SAS expander with SFF-8643 connectors... couldn't you go for a mainboard like this: THIS and save some money on the HBA card?

Maybe I am missing something and this board's controller is horrible?

Link to comment
9 minutes ago, chrissi5120 said:

This is an incredible build.

 

I will probably copy-cat a lot of things from you :)

 

 

One question: the case you are using has a SAS expander with SFF-8643 connectors... couldn't you go for a mainboard like this: THIS and save some money on the HBA card?

Maybe I am missing something and this board's controller is horrible?

 

You can indeed go with that motherboard and it will work just fine with this chassis and backplane.

 

The reason I ended up with the 9500-8i is really down to logistics; there's no reason you would need to buy such a card, and the Supermicro board you linked to would work fine.

 

To explain my circumstances: at the time I was planning out this build I was going to use the ASRock Rack ROMED8-2T, which doesn't include a SAS-capable chip on board. It just has two SFF-8643 connectors that can only be used with SATA drives via included breakout cables. So because I was planning to buy that motherboard, I had to get an HBA to pair with it.

 

My choices were to buy a used LSI/Broadcom 9200, 9300 or 9400 series card from eBay sellers, or buy a 9500 brand new from a reputable local retailer with a 3-year warranty. There was a local store that had a Supermicro-branded 9300-8i equivalent using the same SAS3008 chip as the motherboard you linked to, but it was about £250, and I thought for another £150 I could get the 9500-8i instead, which was the latest HBA model. So I essentially talked myself into getting the 9500-8i instead of a 9300-8i HBA.

 

At the time, because I was so locked into the ROMED8-2T (I had it on pre-order), I never even considered the Supermicro range of boards until I found out the board I had ordered wasn't going to arrive (and in fact it's still not available even now; retailers are saying September 30th as of today).

 

So by the time I realised I needed to order a different motherboard, I already had the Broadcom 9500-8i in my possession, and I thought, well, I like the card, so I might as well order the Supermicro SKU (they have 4 SKUs of the same board with varying feature levels) that doesn't include the onboard Broadcom SAS controller, since I already had the 9500-8i and it's two generations ahead of what Supermicro was including on their motherboard. It also meant I could choose the SKU that has the 2 x PCIe x8 connectors, which I could use for more SSDs later on.

 

So that's why I ended up with this card; it's totally not necessary. In hindsight I think I would still go with the 9500-8i, just because it's newer, faster and runs cooler, and it only costs £150 more than going with the Supermicro board that has the SAS3008 chip on board over the one that doesn't. But that's my thinking now, in hindsight, for my own situation; I'd still probably recommend others save the £150 and get the Supermicro, haha - more of a "do as I say, not as I do" kind of situation, I guess.

  • Like 1
Link to comment
32 minutes ago, Pri said:

 

You can indeed go with that motherboard and it will work just fine with this chassis and backplane.

 

The reason I ended up with the 9500-8i is really down to logistics; there's no reason you would need to buy such a card, and the Supermicro board you linked to would work fine.

 

To explain my circumstances: at the time I was planning out this build I was going to use the ASRock Rack ROMED8-2T, which doesn't include a SAS-capable chip on board. It just has two SFF-8643 connectors that can only be used with SATA drives via included breakout cables. So because I was planning to buy that motherboard, I had to get an HBA to pair with it.

 

My choices were to buy a used LSI/Broadcom 9200, 9300 or 9400 series card from eBay sellers, or buy a 9500 brand new from a reputable local retailer with a 3-year warranty. There was a local store that had a Supermicro-branded 9300-8i equivalent using the same SAS3008 chip as the motherboard you linked to, but it was about £250, and I thought for another £150 I could get the 9500-8i instead, which was the latest HBA model. So I essentially talked myself into getting the 9500-8i instead of a 9300-8i HBA.

 

At the time, because I was so locked into the ROMED8-2T (I had it on pre-order), I never even considered the Supermicro range of boards until I found out the board I had ordered wasn't going to arrive (and in fact it's still not available even now; retailers are saying September 30th as of today).

 

So by the time I realised I needed to order a different motherboard, I already had the Broadcom 9500-8i in my possession, and I thought, well, I like the card, so I might as well order the Supermicro SKU (they have 4 SKUs of the same board with varying feature levels) that doesn't include the onboard Broadcom SAS controller, since I already had the 9500-8i and it's two generations ahead of what Supermicro was including on their motherboard. It also meant I could choose the SKU that has the 2 x PCIe x8 connectors, which I could use for more SSDs later on.

 

So that's why I ended up with this card; it's totally not necessary. In hindsight I think I would still go with the 9500-8i, just because it's newer, faster and runs cooler, and it only costs £150 more than going with the Supermicro board that has the SAS3008 chip on board over the one that doesn't. But that's my thinking now, in hindsight, for my own situation; I'd still probably recommend others save the £150 and get the Supermicro, haha - more of a "do as I say, not as I do" kind of situation, I guess.

Thank you for sharing the reasoning behind your build. I totally get your point; running the newer chipset and keeping SAS slots available is also a nice upside in my eyes.

 

Your case is pretty hard to come by in Germany. I would really love to see a comparable offer with an integrated SAS expander; all the cases I can find require at least two connections just to run the backplanes, and they are also not exactly cheap... ah man, hard choices.

Link to comment

Indeed, you would need to import it either directly from the manufacturer in China or from the UK. Keep in mind only the UK distributor (XCase) includes the ATX PSU bracket; they appear to machine them locally and add them to the package themselves. Otherwise the case comes with two brackets for server power supplies only.

 

Another option may be buying Supermicro cases, either used or new. They're quite popular and decent.

  • Like 1
Link to comment
