Posted July 28, 2022 by Pri (edited)

In early June I started acquiring hardware for this build and I only just completed it this week, with many parts taking an extremely long time to arrive. Now that it's complete I'm excited to share it with you, after having enjoyed looking at your systems here on the forum and on the Unraid Discord.

Full Specifications

CPU: AMD EPYC Milan 7443P, 24 cores / 48 threads, 2.85 GHz base, 4 GHz boost
Heatsink: Supermicro 4U active SP3 cooler
RAM: Samsung 256GB (8 x 32GB) DDR4 RDIMM ECC 3200MHz CL22
MOBO: Supermicro H12SSL-NT (EPYC 7002/7003 support, dual 10Gb)
PSU: Corsair HX1200, 1200 Watt (Platinum rated)
HBA: Broadcom 9500-8i (PCIe 4.0, NVMe/SAS/SATA)
M.2 Card: Generic PH44 4 x M.2 PCIe 4.0 card
SSD1: Samsung 970 Evo Plus 2TB NVMe (PCIe 3.0)
SSD2: Western Digital SN850 2TB NVMe (PCIe 4.0, heatsink SKU)
SSD3: Western Digital SN850 2TB NVMe (PCIe 4.0, heatsink SKU)
NIC: Intel X540-T2, 2 x 10Gb/s Ethernet (PCIe 2.1)
GPU: NVIDIA GTX 1080 Ti FE (not yet installed, still in my old server)
HDD1: 4 x 18TB Western Digital SATA hard drives (shucked)
HDD2: 7 x 10TB Seagate SATA hard drives (NAS editions)
CASE: Gooxi 24-bay chassis with 12Gb SAS3 expander chip by PMC-Sierra
CABLES: Custom Molex cables from CableMod (backplane to PSU)
FANS: Noctua, 3 x 120mm (A12x25), 2 x 60mm (A6x25)
UPS: APC BR1600SI (960 Watts / 1.6kVA, pure sine wave)

EDIT:// A few months later some changes have happened with the build. 4 x 2TB 980 Pro SSDs were added. You can view photos and information about that in this update post.

Originally some of this hardware was going to be different: the motherboard was going to be the ASRock Rack ROMED8-2T, the 18TB hard drives were going to be Toshiba enterprise drives, there was only going to be one SN850 SSD, and the UPS was going to be a CyberPower model. But as prices changed and availability of certain parts became problematic, the build had to change with them.

I wouldn't normally buy external hard drives and shuck them, just due to the time and effort, but they were simply too cheap to ignore at £222.99 each on Amazon Prime Day. At that price I was able to buy 4 x 18TB externals from WD for less than the cost of 3 x 18TB bare drives from Toshiba, WD or Seagate. An extra drive at a lower cost? Hard to pass up, and as I show later, they work perfectly.

I linked to each individual part above in case you want to see more information about a specific component used in the build.

Individual Component Photos

Almost everything was photographed, but I can't link the photos as small thumbnails in the thread, so I'll spare you having to scroll past them all: we'll begin with the completed build photos instead, and I'll include a few of the more interesting component shots after those.

The completed build

The photo below was taken just after I initially built it with the stock fans, before I had attempted any cable management.

The next photo is after the Noctua fans were installed and I had done a little bit of cable management. This is why the fan cables have switched from the tomato-ketchup kind to the black, fully sleeved kind. I did have to give up the hotswap system the case came with when changing fans, but the Noctuas I purchased come with a short (roughly 1cm) PWM lead on the fan plus a 30cm extension cable, so it serves the same purpose.
I could have cut the cables on the original fans and soldered the 30cm Noctua extensions to the hotswap PCB to keep that functionality, but I didn't want to do that, preferring to keep the original fans as-is in case the case's backplane had a fault that required the case to be sent back to the retailer. Still, the Noctuas work great in the original holders and the sliding mechanism is maintained.

Individual Component Photos

The case came with three of the 40mm-thick industrial hotswap fans shown below, each of which consumes 20 watts at peak output. They are extremely loud and move an incredible amount of air. I quickly swapped them out for Noctua Black Chromax A12x25s, which are essentially silent and move around 40% of the air of the fans that came with the case. I'm happy to say the temperatures are still great with the slower Noctua fans.

I did get two of the SN850 2TBs with heatsinks, but unfortunately the motherboard's M.2 slots are too close together and only one can be installed there. Thankfully I have a 4 x NVMe PCIe card with wider spacing, so I'll be placing those two and my Samsung NVMe drive into that once the parity build on my server is complete and I can power it down to add the card.

UPDATE: The SN850s are now in their PCIe card, which I put into the server today. Below is a photo of them installed in the card. As you can see, with these heatsinks the spacing is quite tight, and Supermicro unfortunately didn't take that into account when they put their slots right next to each other with only a sheet of paper's worth of gap between them.

The card I'm using here is just a cheap generic model for £36; it comes with no fan or heatsink and requires bifurcation support on your motherboard (the x16 slot has to be split into x4/x4/x4/x4, since the card has no PCIe switch of its own). I chose it because I knew all the SSDs I'd be installing would have heatsinks on them and my server case has quite high airflow.

Samsung likes to ship their memory individually wrapped. I'd had this memory for about two months when I took the photo in June but hadn't opened the packs, so as to keep them in pristine condition; you can see a better photo of them installed in the motherboard above.

The 7443P CPU below was chosen not just for its 24 cores / 48 threads and Zen 3 architecture but for its high clock speeds. It has a 2.85GHz base and 4GHz boost clock, which is perfect for my use case. It sure has a lot of contacts, 4,094 to be exact. If you know these EPYC chips, they feature 129 PCIe 4.0 lanes (128 usually accessible to the user through slots and connectors, and 1 reserved for the motherboard vendor to use for a BMC implementation) and eight memory channels that take 3200MHz RDIMMs, which is what I installed to guarantee I get the maximum Infinity Fabric bandwidth.

The Broadcom 9500-8i below was chosen for a few reasons. I wanted to be able to use all 24 slots on my chassis at near line speed, and this card can do that: it provides 96Gb/s (12GB/s), which works out to roughly 500MB/s per slot. It is a PCIe 4.0 x8 card and features signed firmware. I could have picked up a 9400-8i or even a 9300-8i on eBay for a third to a fifth of the cost of this card, but I wanted a brand-new card at retail with a warranty and secure code signing for its firmware, so this is a retail Broadcom card purchased from a reputable e-tailer.
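As a quick sanity check on that per-slot figure, here's the back-of-the-envelope math as a small Python sketch. It assumes the expander shares the HBA's eight 12Gb/s SAS3 lanes evenly across all 24 bays and ignores encoding and protocol overhead, so real-world numbers will land a bit lower:

```python
# Back-of-the-envelope per-bay bandwidth for a 9500-8i feeding a 24-bay expander backplane.
# Assumptions: eight SAS3 lanes at 12 Gb/s each, all bays busy at once, overhead ignored.
sas3_lane_gbps = 12   # Gb/s per SAS3 lane
hba_lanes = 8         # lanes the 9500-8i presents to the expander
bays = 24             # drive slots on the Gooxi backplane

total_gbps = sas3_lane_gbps * hba_lanes            # 96 Gb/s aggregate to the expander
total_gb_per_s = total_gbps / 8                    # ~12 GB/s
per_bay_mb_per_s = total_gb_per_s * 1000 / bays    # ~500 MB/s per bay

print(f"{total_gbps} Gb/s total, ~{total_gb_per_s:.0f} GB/s, ~{per_bay_mb_per_s:.0f} MB/s per bay")
```

Even after 8b/10b encoding and SAS protocol overhead take their cut, that's still comfortably more than 24 spinning disks will sustain at once.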
Below is my Intel X540-T2 Ethernet card. I actually have several of these, plus an X710-T4, which is a newer model with four ports.

The reason I'm installing this into the server even though the motherboard already has 2 x 10Gb Ethernet is that I'm going to run pfSense in a VM on this server as a backup in case my physical pfSense box has another hardware failure (which it did last year, and it was very annoying). So I'll be setting up a pfSense VM on Unraid and passing this card through to it, just so I have a backup router; we all need internet, and I work from home, which makes it extra important. It's funny: I have two internet connections but not two routers. That is, until now.

Some may be surprised that I went with an ATX power supply instead of a dual-redundant setup. This chassis actually comes with several brackets which support single (ATX), dual and triple (server) redundant power supplies. I went with the HX1200 because, while it's more power than I'll need in this server, it has a 10-year warranty and Platinum-rated efficiency, and it's silent up to 450 watts and very quiet beyond that. In fact, the main reason I chose to build this server myself rather than buying a prebuilt system was that I wanted an ATX power supply so the system could be as quiet as possible.

And finally I thought I'd show the APC UPS. Not that exciting, but it is a newer model, at least here in the UK. It's line-interactive and outputs a pure sine wave, which is important to me for stability.

So what has been the damage for all of this? Everything listed in the spec sheet was purchased specifically for this build except for the NVIDIA GTX 1080 Ti, the Intel X540-T2 and the 7 x 10TB hard drives, which I already had and will be moving over from my previous build. Not including those parts, the total cost has been £7,100 GBP, or $8,604.13 USD. With those parts included I imagine it would come to around £8,500-£9,000, making it roughly $11,000 USD.

I did set out to build a very high-end server, and that certainly came with a price, especially since I wasn't prepared to compromise much by going with slower or less memory, an older-generation CPU, a used chassis, an older HBA and so on. If you're looking to build something similar, you can probably get 75-80% of the capability and performance for half the cost by buying one-generation-older hardware and used parts where it makes sense, and I would certainly advise you to do so!

Feel free to ask any questions, and I'll leave you with a couple of system screenshots and benchmarks.

Above: Some benchmarks run from Windows before Unraid was installed on the system.
Below: So many threads!
Below: Building parity before I insert the rest of my 7 x 10TB drives (I still need to copy files from the old server to this one before moving those drives over).
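A side note for anyone wondering what that parity build is actually doing: Unraid's single parity is a bitwise XOR across all the data disks, which is why any one failed drive can be rebuilt from the others. Here's a tiny toy illustration in Python (nothing like Unraid's real implementation, just the principle):

```python
from functools import reduce

# Toy illustration of single (XOR) parity across data disks.
# Each "disk" here is just a few bytes; real parity is computed sector by sector.
disks = [b"\x10\x22\x3a", b"\x05\xff\x00", b"\x9c\x01\x77"]

def xor_blocks(blocks):
    """Bytewise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

parity = xor_blocks(disks)

# If disk 1 dies, XOR-ing the parity with the surviving disks recovers its contents.
rebuilt = xor_blocks([parity, disks[0], disks[2]])
assert rebuilt == disks[1]
print("rebuilt disk 1:", rebuilt.hex())
```

That same XOR relationship is also why the parity drive has to be at least as large as the largest data drive in the array.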