The next generation - M12SWA, Threadripper Pro


As my earlier post noted, my main server was due for an upgrade. It'll probably keep running for now so as not to overload the backup server with all the VMs I have running for work-related testing and such, but the replacement is finally becoming a reality:




It's a Supermicro CSE-743-1200B-SQ: a 1200 watt Platinum power supply with 8 bays built in, plus a 5-bay CSE-M35TQB in place of the three 5.25" bays, all designed to run at less than 27 dB, and able to be either run as a tower or rack mounted (it'll spend the next 3 months in tower form... seems getting rails for this thing requires first sending a carrier pigeon to Hermes, Hermes then tasks Zeus with forging them in the fires of the gods from unobtainium, who then ships them when he's done doing... well, Greek stuff). My first 700-series chassis is still doing work, still with its original X8SAX motherboard, and I see no reason to fix something that isn't broken!


While having a bunch of drives is great, the idea here is to have two gaming VMs, run plex, nextcloud, homeassistant, frigate, and numerous others. All of that takes a ton of IO. Enter the motherboard:





This motherboard is a friggin monster - but importantly, at least to me, its design syncs up perfectly with the chassis, so all the power monitoring, fan modulation, and LED/management functions can be controlled via built-in out-of-band management. The M12SWA is currently paired with a 3955WX; given how close we are to the next-gen Threadripper release, I'm going to wait that out for now, and then decide whether to upgrade to next gen's mid-range (whether it be a 5975WX, or whatever the case may be), or otherwise.


For now, the VMs will be 4 core / 8 thread to match the CCDs, leaving the rest to Docker. Down the line, they'll likely be either 8 cores each, or one 8 and one 4, depending on what the need is. The lighter of the two is going to house an always-on emulation VM with a 1650S, which will play all our games on screens throughout the house (or wherever) via Moonlight/Parsec/whatever.
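For anyone curious how the "4 core / 8 thread to match the CCDs" carve-up works in practice, here's a minimal sketch. It assumes a 3955WX-style layout (16 cores, with the SMT sibling of core N exposed as core N + 16, and 4-core complexes); check `lscpu -e` on your own host before pinning, as topology enumeration varies.

```python
# Carve CPU pin sets along 4-core complex boundaries, keeping each VM's
# vCPUs (and their SMT siblings) on one complex. Topology values below are
# assumptions for a 3955WX-style host - verify with `lscpu -e`.

def complex_pinsets(total_cores=16, per_complex=4, smt_offset=16):
    """Return one pin set (physical cores plus SMT siblings) per complex."""
    sets = []
    for start in range(0, total_cores, per_complex):
        cores = list(range(start, start + per_complex))
        sets.append(cores + [c + smt_offset for c in cores])
    return sets

pins = complex_pinsets()
# First complex -> gaming VM 1, second -> gaming VM 2, rest stays with Docker.
vm1, vm2 = pins[0], pins[1]
docker = [c for s in pins[2:] for c in s]
print(vm1)  # [0, 1, 2, 3, 16, 17, 18, 19]
```

Those lists translate directly into libvirt `<vcpupin>` entries or a Docker `--cpuset-cpus` string.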


It slots perfectly in the chassis:




But cable management is going to be a meeessssss:





That ketchup and mustard is hurting my friggin eyes. I'm going to have to wrap those with something :| 


More to come on this one - the plan for now is to throw in 128GB of ECC 3200, 4 NVMe drives, an RTX 2070, a GTX 1650S, a quad 10Gb NIC (Chelsio, since this thing comes with the stupid Aquantia NIC, which has no SR-IOV support), and a quad 1Gb NIC (since the Intel NIC they included ALSO doesn't support SR-IOV... ugh), leaving one slot for potentially adding either tinker-type toys or an external SAS HBA if I somehow eventually run out of room. There are custom boards out there that combine the X540 and i350 chipsets onto one board, but I may instead consolidate this down to a single X550 or one of those fancy X700-series Intel boards... We'll see.
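If you want to verify SR-IOV support yourself before buying replacement NICs, the kernel exposes the maximum VF count per device at `/sys/class/net/<iface>/device/sriov_totalvfs` (file missing or 0 means no SR-IOV). A small sketch of that check, with the decision logic split out so it's testable; interface names in the example are made up:

```python
# SR-IOV capability check via sysfs. A NIC that can't spawn VFs either has
# no sriov_totalvfs file or reports 0 in it.
from pathlib import Path

def sriov_capable(totalvfs_by_iface):
    """Given {iface: contents of sriov_totalvfs, or None if the file is
    missing}, return the interfaces that can actually spawn VFs."""
    ok = []
    for iface, raw in sorted(totalvfs_by_iface.items()):
        if raw is not None and int(raw) > 0:
            ok.append(iface)
    return ok

def read_host_vfs(sysfs="/sys/class/net"):
    """Collect sriov_totalvfs for every interface on a live host."""
    out = {}
    for dev in Path(sysfs).iterdir():
        f = dev / "device" / "sriov_totalvfs"
        out[dev.name] = f.read_text().strip() if f.is_file() else None
    return out

# Hypothetical host: onboard ports report nothing or 0, a Chelsio port a count.
print(sriov_capable({"eth0": None, "eth1": "0", "eth2": "16"}))  # ['eth2']
```

On a real box you'd call `sriov_capable(read_host_vfs())` as root.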

  • 1 month later...

Finally found a few free minutes to update! The server's been running now since Sept 5th without so much as a hiccup! However, it took a little planning to get it that way...


The problem area starts with this:



Both GPUs are dual slot, and stacking them means the 2070's intake is about 40% covered by the backplate of the 1650S. I then thought to use the Intel NIC as the in-between, but it still covers a bit:



And if I can avoid covering it at all, all the better - as this is *right* on top of the NVMe drives, any additional heat the 2070 radiates means heat added to them. In the end, I went ahead and put the HBA here instead:



It's not perfect (nothing ever is for me, I guess), but after running a temperature probe to the HBA and finding it's well within spec, it's about as good as it gets, and it'll do for now!


Here's (almost) what it looks like as of today:



The 32GB DIMMs I ordered didn't show up in time, and I really needed to get this thing up and running before start of business Monday morning so everyone could access their files and backups could pick back up - so this is where we're at till probably Thanksgiving or so. Running through the cards, from the top:


1. Intel 1.2TB NVMe - a hold-over from the last server setup, which only exists for caching writes to the Unraid array; it seems the md driver is modified as part of Unraid's base deployment, or this would be removed in lieu of LVM with an XFS volume over the 4 onboard SN750s. BTRFS just doesn't have the performance needed (not to mention other areas of concern :( ) and I'm too cheap to buy 4 more M.2 drives just to up the capacity lol

2. Intel NIC - pfSense, etc

3. RTX 2070 - this serves two purposes: it's either running my gaming VM whenever I find time to play, or serving an Unraid VM as an additional Tdarr node or for testing out new things prior to implementing them on the main hypervisor

4. LSI 2308 based HBA - just connecting any of the drives that I don't have onboard connectors for

5. GTX 1650S - the main hypervisor's GPU for Plex, Tdarr, and facial recognition in both Nextcloud and Frigate (well, until I can convince myself that I need a Coral accelerator anyway)


Hope to update again sometime after Thanksgiving!


Nice build. I like these 5- or 6-slot PCIe mobo platforms for server applications, rather than mixing in too many M.2 drives - cooling is always a headache.


My first set of four 32GB non-ECC memory modules was burned out by an in-service mobo. It really wasn't fun. After the RMA, I confirmed they work in other mobos. I still don't know why, and I won't take the risk of retesting (the modules got abnormally hot when sitting in the problem mobo).



Edited by Vr2Io



It's more so a dedicated workstation motherboard than a true server platform, but I get where you're coming from. Going workstation is the only simple way I've found to get high quality audio playback without resorting to some separate card/device, and that just makes other things needlessly complicated imo. Workstation boards seem to offer the best of both worlds when it comes to home servers - IPMI, gobs of PCIe, system stability testing, double or more the RAM channels - all without losing the benefits of a consumer board (audio, plenty of USB ports, etc). Only downside... they charge a friggin arm and a leg. This was a purchase a little over a year in the making, mostly paid for with side-hustles, or I'd never have gotten up the nerve to pull the trigger on it lol.


As to the M.2 - I've honestly been quite happy with it! It peaks at about 53°C during sustained heavy IO now that I've got the card arrangement optimized a bit, which is basically ideal for NAND/NVMe, and I intentionally went with PCIe 3.0 initially as part of the overall plan to limit unnecessary power consumption. Best of all (as a cheapskate), M.2 is far easier to find great deals on than its U.2/2.5" counterparts.


If you can find a board that has enough onboard NVMe connections to satisfy your needs, I personally say "go for it" - it beats the snot out of throwing in an add-in bifurcation card, which not only takes up another slot but, more importantly, adds a single point of failure for all connected devices.

  • 4 months later...

With rumors regarding Zen 3 TR Pro seemingly evaporating overnight (nothing new since December), I've been getting antsy. Sincerely hoping that all the March 2022 release date rumors leading up to the drought turn out to be true...


Because in preparation for its release, I've been deploying more and more of the workloads I'd had planned for the server, and I've now stacked on enough that the 16c 3955WX has been redlining somewhat regularly the last few weeks. Images of a super buff 4'5" bodybuilder trying to backpack a Volkswagen come to mind. Given this, I've had to disable Tdarr's CPU transcodes and just stick with GPU acceleration only. It's helped quite a bit, saving some headroom for those occasional spikes.


Some current results on that front:



Almost ready to add the last Plex library and queue it up. I figure the 1650S has pretty much paid for itself twice over at this point (at least at the price I'd paid for it back when, anyway): with 16TB drives at ~$320 each and the 1650S having cost me ~$170, I'm honestly pretty stoked!
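The "paid for itself twice" math above works out roughly like this. Drive and card prices are from the post; the terabytes reclaimed by transcoding is a placeholder of mine - plug in your own library numbers.

```python
# Back-of-napkin payback math for a transcoding GPU: space reclaimed by
# transcodes translates into drive purchases avoided. The tb_saved figure
# passed in below is an illustrative assumption, not from the post.
DRIVE_TB, DRIVE_COST = 16, 320   # ~$320 per 16TB drive (from the post)
CARD_COST = 170                  # ~$170 for the GTX 1650S (from the post)

def payback_ratio(tb_saved):
    """Dollars of drive capacity avoided, as a multiple of the card's cost."""
    return (tb_saved / DRIVE_TB) * DRIVE_COST / CARD_COST

# Reclaiming ~17TB of library space covers the card twice over:
print(round(payback_ratio(17), 2))  # 2.0
```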


The first 'new' addition since moving to the new hardware was Paperless-ng - I had no idea how much it'd change our lives, to be honest. We went from two banker boxes full of everything from tax paperwork to paid bills, work reviews, etc, down to about 3 inches of papers. No more having to go into the boxes each and every time a bill comes in the mail, no more time wasted trying to find 'that one bill we know we paid, but the stupid bank/lender/government/whatever says they never received payment for'... Just plop it on the scanner's feed tray, hit a button, and done.


It's been a hell of a process to get here, about 5 months in the making, as we're using this ancient HP business scanner and I'm too cheap to replace it - it's wicked slow at a decent quality setting, maybe a page every couple minutes... but we're in no hurry, the quality is perfect for Paperless's OCR (converting the scanned image to searchable text), it's able to scan directly to SMB, and it's got a 35 page feed tray. I just put another stack on a couple times a day and leave it at that. Paperless automatically tags them based on content, and we've not had to dig out a single page since setting it up.
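For the curious, the "tags based on content" idea boils down to rules applied against the OCR'd text. Paperless-ng's real matching is fancier (it can auto-learn from corrections), but a toy sketch of the simplest keyword-rule mode looks like this; the tag names and keywords here are made up:

```python
# Toy content-based tagger in the spirit of Paperless-ng's keyword matching.
# Naive substring matching - the real thing supports word-boundary, fuzzy,
# and learned matching. RULES below is entirely hypothetical.
RULES = {
    "taxes":   ["irs", "w-2", "1040"],
    "utility": ["kilowatt", "meter reading", "electric service"],
}

def tag_document(ocr_text, rules=RULES):
    """Return every tag whose keywords appear in the OCR'd text."""
    text = ocr_text.lower()
    return sorted(tag for tag, words in rules.items()
                  if any(w in text for w in words))

print(tag_document("Form 1040 - IRS Individual Income Tax Return"))  # ['taxes']
```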


Next, I opened up the media request website to my sister, now that my parents have gotten the whole "we can just submit it here and we don't have to change DVDs anymore!? *CLICK CLICK CLICK CLICK CLICK CLICK*" thing out of their system; that might've been a bit of a mistake:



Nearly everything in TV_SaturdayMorning is hers - she literally sat down for a full hour adding each and every one of her series... I wasn't expecting that kind of growth lol. That 'last library' to add? ...That's it.


There's more, but I think I gotta hit the sack - hoping to get more time to catch this thing up to date in the coming weeks!


Brief preview of what I've still got to detail out:


Another host - two hypervisors, routing, and a little room to spare


Network refresh - IoT takes a bite out of wifi



And a couple other odds and ends - several containers, storage crunch woes (thanks sis...), and repurposing the older Xeon Gold 6139 based system.



Edited by BVD
  • 1 month later...

Wow! Congratz on your upgrade!


I am also looking to do an upgrade! ;-)


Your setup looks like it covers all of my requirements and then some!

I finally found a seller of the MB:

I hope my PSU can provide enough power? (Corsair AX860i)


How much power does this setup use on average? (The "low" power usage and the onboard GPU of the Xeon E-2100G series was why I got into Unraid some years ago LOL)

My existing server runs 24x7, so it's only at night that it draws less power (my power consumption right now shows 763W - 26 drives)

Do you have ECC memory? It has saved my bacon a couple of times 🙂



