My SuperMicro Build



Either way I do have dual 10G sfp+ card that is in my current server that I could move over.

 

So, an update: by the sounds of it, SuperMicro doesn't know what's going on either, but they are looking into it. (Although personally I don't know how much help they are going to give me, due to it being an "old" board.)

Edited by demonmaestro

I agree that SuperMicro's support for older products leaves a bit to be desired. I suppose I understand that; but it's still frustrating when a board doesn't work per its specifications. [Or perhaps it does; but something that is "disabled by default" should certainly have a way to enable it!]

 

 


So they are saying that because I have it in a third-party case, they aren't going to support me. Such BS.

 

I had talked with an authorized reseller/support shop here near my hometown, and they said that the motherboard will work in any third-party case.

I told them what my issues are, and they said they would get back to me tomorrow, since it was "after working hours," so they could ask their support team what the issue might be. Something that got me wondering: did the hardware thermal monitor/controller go bad?

 

So I am going to wait to see if maybe they can "repair" the board without breaking the bank.

 

HOWEVER, I am trying to decide on what system I wanna do IF I have to get a new motherboard. What would y'all say?

 

System 10g:

  • 1 - SUPERMICRO MBD-X10DRI-T4+-O

  • 2 - Intel Xeon E5-2630 V4 Broadwell-EP 2.2 GHz

  • 4 - Kingston ValueRAM 32GB DDR4 2400 ECC KVR24R17D4/32MA

  • 2 - Noctua NH-U9S 92mm

System 1g:

  • 1 - SUPERMICRO MBD-X10DRC-LN4+-O

  • 2 - Intel Xeon E5-2630 V4 Broadwell-EP 2.2 GHz

  • 8 - Kingston ValueRAM 32GB DDR4 2133 ECC KVR21R15D4/32

  • 2 - Noctua NH-U9S 92mm

 

 

 

IF I do have to get a new system, would you wanna buy the "old" system? $400 + shipping.

Edited by demonmaestro

Why the difference in RAM between the two new systems? There's little difference in cost between the motherboards, so it's just a matter of whether you want the 10Gb network or the extra 8 disk connections. I'd probably go with the 10Gb board and just add a controller for additional SATA ports.

 

But either would be a STELLAR system!!

 

Your "old" system is tempting at such a low price -- but I'll pass.   I already have 4 UnRAID boxes; and if I WAS going to build another one, it'd be closer to the new systems you're looking at rather than using older technology.    I'll probably pop 10-12 8TB or 10TB drives in my next box, so the cost of the "other" hardware (motherboard/CPU/memory) really doesn't matter much in the great scheme of things.

 

I suspect, however, that you'll have NO problem getting $400 for that set of gear -- just list it on the forum and it'll be gone :D

 


Welp... I done did it.

 

After talking with the local shop, they were saying that those kinds of motherboards, due to age, don't have a temperature fan curve. It's either on or off for the most part.

 

So they are getting me this setup!

  • 1 - SUPERMICRO MBD-X10DRI-T4+-O

  • 2 - Intel Xeon E5-2630 V4 Broadwell-EP 2.2 GHz

  • 8 - Samsung 32GB DDR4 2400 O.o

  • 2 - Noctua NH-U12DXi4

Their prices are cheaper than Newegg or Amazon; however, after taxes it's a little bit more expensive. But now if there is an issue I can go to someone local, which makes RMAs and stuff like that easier to deal with. Also I get to support my local economy! :D

 

Although I wonder if my Corsair AX860 PSU will be enough???:|

Edited by demonmaestro
18 minutes ago, demonmaestro said:

Although I wonder if my Corsair AX860 PSU will be enough???:|

 

Shouldn't be an issue -- the motherboard is almost certainly more efficient than the older one, and the CPUs only have a modest bump in TDP (and will normally be running at much lower power). I assume everything else (disks, etc.) is the same as you already have on it. Just for grins, have you measured your power consumption with a Kill-a-Watt on the old system?

 

2 minutes ago, garycase said:

 

Shouldn't be an issue -- the motherboard is almost certainly more efficient than the older one, and the CPUs only have a modest bump in TDP (and will normally be running at much lower power). I assume everything else (disks, etc.) is the same as you already have on it. Just for grins, have you measured your power consumption with a Kill-a-Watt on the old system?

 

Not at full load. Idle it was around 190W-220W; it would jump around a lot.

I will test that later on. 

 

Although per the Cooler Master PSU calculator, I should only need a 534W PSU.
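For a sanity check alongside the calculator, you can just sum rough nameplate draws. The component wattages below are my ballpark assumptions for a build like this, not measured values:

```python
# Rough PSU headroom estimate -- every wattage here is a ballpark
# assumption, not a measurement of this exact hardware.
COMPONENTS = {
    "2x Xeon E5-2630 v4 (85W TDP each)": 2 * 85,
    "15x 3.5in HDD (~8W active each)":   15 * 8,
    "1x SSD":                             3,
    "motherboard + RAM":                  60,
    "2x HBA + NICs":                      40,
    "11x case/CPU fans (~2W each)":       22,
}

total = sum(COMPONENTS.values())
headroom = 860 - total  # Corsair AX860 is rated for 860W continuous
print(f"estimated steady-state draw: {total}W, AX860 headroom: {headroom}W")
```

Spin-up surge on a stack of drives can briefly add ~20W per disk, which is why calculators land higher than steady-state sums; either way, 860W leaves plenty of margin.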


I seriously doubt the new system will draw any more -- likely even less -- than the old system, simply due to the much more efficient power management of the newer chips. Even though your newer CPUs have slightly higher TDPs than your old ones, they almost certainly run at FAR lower power when idle than the older CPUs do. Same is true of the newer motherboard/chipset.

 

I'd do a parity check with the old system and see what the load is (should be fairly stable after it starts) ... and then do the same once you set up the new one. I'd be very surprised if the new one isn't lower than the old.

 

11 minutes ago, garycase said:

I'd do a parity check with the old system and see what the load is (should be fairly stable after it starts) ... and then do the same once you set up the new one.    I'd be very surprised if the new one isn't lower than the old.

 

I was going to do that, and also launch a VM with 22 out of 24 cores and do a CPU load test.

Basically hammer the system.

Then see how much power it takes.
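If the VM route gets fiddly, a minimal all-core load generator can do the same job. This is just a sketch (a real tool like stress-ng is more thorough); the `burn()` helper and its parameters are my own naming:

```python
import multiprocessing
import time

def spin(seconds: float) -> None:
    """Busy-loop one core for the given duration."""
    end = time.monotonic() + seconds
    while time.monotonic() < end:
        pass  # pure CPU burn, no I/O

def burn(seconds: float, workers: int) -> None:
    """Pin `workers` cores at ~100% for `seconds`, then return."""
    procs = [multiprocessing.Process(target=spin, args=(seconds,))
             for _ in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

if __name__ == "__main__":
    # Short demo run; bump to e.g. burn(60, 22) while reading the watt meter.
    burn(seconds=1, workers=2)
```

Run it alongside the parity check and watch the Kill-a-Watt settle.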

 

Although something I just thought of: am I going to have to redo the drive layout again and assign drives to their correct spots? Or should it know?

Edited by demonmaestro
10 minutes ago, demonmaestro said:

Although something I just thought of: am I going to have to redo the drive layout again and assign drives to their correct spots? Or should it know?

As long as the controllers identify the drives identically, Unraid will remember and put them back exactly as they are now. Before you shut down the array for the last time, be sure to set it to not autostart; that way you can verify everything is OK on the new system.

1 minute ago, garycase said:

You shouldn't have to do anything except connect the drives and boot to your USB flash drive.    As long as UnRAID "sees" all of the drives in the array, it will start just fine.

 

So Unraid doesn't go by the "drive letter"; it goes by the serial number?

 

I mean, I have a handful of drives connected to the mobo and then a handful connected to the 2 RAID cards.

Edited by demonmaestro

That's correct -- as long as it "sees" all of the drives in your config it should boot just fine. The earlier versions (v4.7 and earlier) required specific drive assignments; since v5 it has tracked drives by serial number. You can freely move them around to different SATA ports/controllers, as long as UnRAID can "see" the drive.
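That matches what Linux itself exposes: the names under `/dev/disk/by-id/` are built from the model and serial, independent of which port or controller the drive hangs off. A small sketch of pulling the serial back out of such a name (the example name is made up, not a real drive):

```python
from pathlib import Path

def serial_from_by_id(name: str) -> str:
    """Extract the serial from a /dev/disk/by-id style name.

    Names look like '<bus>-<MODEL>_<SERIAL>'; the serial is the
    final underscore-separated field.
    """
    return name.rsplit("_", 1)[-1]

def list_drive_serials(by_id_dir: str = "/dev/disk/by-id") -> dict:
    """Map whole-disk by-id entries to their serials.

    Skips partition entries ('-partN') and wwn- aliases; returns {}
    if the directory doesn't exist (e.g. on a non-Linux box).
    """
    base = Path(by_id_dir)
    if not base.is_dir():
        return {}
    return {
        p.name: serial_from_by_id(p.name)
        for p in base.iterdir()
        if not p.name.startswith("wwn-") and "-part" not in p.name
    }

# Hypothetical name for illustration only:
print(serial_from_by_id("ata-WDC_WD80EFAX-68KNBN0_VAG12345"))  # -> VAG12345
```

Running `list_drive_serials()` on the old and new boxes should show the same serials, just possibly under different bus prefixes.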

 

10 hours ago, garycase said:

I'd do a parity check with the old system and see what the load is (should be fairly stable after it starts) ... and then do the same once you set up the new one.    I'd be very surprised if the new one isn't lower than the old.

Well, the old setup is 375W at full load, doing a parity check and 99% on the CPUs.

13 minutes ago, garycase said:

That's more than I would have expected -- is this with both a parity check and the fully loaded VM you noted above? (Which is what I assume is loading the CPUs at 99%.)

 

Yep.

7 - 120mm fans

2 - 80mm fans

2 - CPU fans

13 - WD Reds

2 - WD RE

1 - PNY 960GB SSD

2 - LSI MegaRAID 9240-8i PCI-E 6Gb RAID Controller (IBM M1015)

1 - 4-port 1Gb NIC card

1 - 2-port 10Gb SFP+ card

2 - Intel Xeon L5640

1 - Supermicro X8DTL-iF

 

The only thing that I didn't load up was the RAM.

 

Jeez, when I go list it out like that, it sounds like there is A LOT of stuff in that case.. O.o

Edited by demonmaestro
had to add something.
  • 2 weeks later...

So an update on those wondering.

 

Got the BIG bad motherboard in and guess what: it didn't fit. I guess I should have paid attention to the Enhanced Extended ATX (EE-ATX) form factor. My case said, "Nope, you will not go in." My case will only accept E-ATX and ATX. Oops... Thankfully they ordered me the regular E-ATX version, the SUPERMICRO X10DRI-T. From the looks of it I only lose 2 NICs and some RAM slots.

That should be in by Thursday. We shall see..

Oh, and the Noctua NH-U12DXi4 coolers didn't fit either. They were too tall for the case. So I packed them up, slapped a return label on the box, and gave it to the front office for them to hand to UPS to return to Amazon. A couple of days have gone by on that, and guess what: no activity on the tracking, and the package is gone from the front office. So I think I just lost $140... On the other hand, I had ordered the Noctua NH-U9DX i4 coolers and got them in today. They will fit, but unless I turn one, the 2 fans will be butting up against each other. Or I might take one off. Idk. I'll tinker with it once I get it built and am able to check the thermals.

 

If it's not one thing it's another with this build. BUT I'll be damned if I will let it beat me!

  • 9 months later...

Current setup:

3 - 120mm fans on the drive bays

1 - 120mm fan on top of the SFP+ NIC and LSI card

2 - 80mm fans

2 - Noctua NH-U9DX i4

10 - WD Reds

2 - WD RE

1 - PNY 960GB SSD

1 - WD Blue 1TB SSD

1 - LSI MegaRAID 9240-8i PCI-E 6Gb RAID Controller (IBM M1015)

1 - 2-port 10Gb SFP+ card

2 - Intel Xeon E5-2630 v4

1 - Supermicro MBD-X10DRI-T

8 - Samsung 32GB DDR4 2400

 

Soooooo, I'm back with some more fun stuff.. :( Not really.

 

I had 2 hard drives fail on me, and I decided to put hot-swap bays in the case (Rosewill RSV-SATA-Cage-34 -- the ones that come with the RSV-L4412 version of the case). Basically, I converted the Rosewill RSV-L4500 over to a Rosewill RSV-L4412.

 

 

That all went fairly smoothly until I installed Plex and started to test how many streams I can run.

 

CPUs went to 100% and up went the temps. Next thing I know, my NICs are hitting 60°C and the CPUs were thermal throttling. Oh, and the fans were at 100% by then as well.

 

Bad part is, with the narrow CPU bridge I cannot rotate the coolers.

 

I am at a loss now of what to do.

 

I had taken this picture as I was swapping out the drive bays and HDDs.

IMG_20180212_050655.jpg

Edited by demonmaestro
added current setup

I'm surprised you're having heat issues with the CPUs in particular => dual fans on those Noctua heatsinks should provide PLENTY of cooling for your CPUs. According to Noctua's dissipation chart, the heatsinks you have, when used with a 2011 mount, should be able to dissipate 140W of heat from the CPUs ... and since your CPUs only have an 85W TDP, that should be PLENTY. I'm surprised you can get them to thermally throttle unless you're either significantly over-clocking them (seems unlikely) or there's an issue with the thermal compound not providing a good bond between the CPU and heatsink. I'd remove the heatsinks; thoroughly clean both the CPU and the heatsink [use Arctic's ArctiClean kit: https://www.amazon.com/ArctiClean-Thermal-Compound-Remover-Purifier/dp/B001JYQ9TM/ref=pd_lpo_vtph_23_tr_t_2?_encoding=UTF8&psc=1&refRID=KTYAKS51HYQCT94RK4BH]; and then reapply some good thermal compound [e.g. Arctic Silver] and remount the heatsinks.

 

How are your drive temps? Three 120mm fans should provide adequate cooling for the drive cages IF the fans have adequate static pressure and move a reasonable amount of air. There's a big difference between a low-noise fan that's only moving 40CFM, a moderate-speed fan that moves 80+ CFM, and a server-grade high-RPM (= noisy) fan that can move 120-150 CFM (e.g. Delta fans that run at 2900-3900 RPM). I'd want fans that move at least 80CFM.
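For a ballpark on what the drive cages actually need, there's a standard electronics-cooling rule of thumb: CFM ≈ 3.16 × watts / ΔT(°F) at sea level. The drive count and per-drive wattage below are my assumptions, and note this is a free-air minimum; restrictive cages and filters can eat half or more of a fan's rated CFM, which is why rated numbers need to be much higher:

```python
def required_cfm(watts: float, delta_t_f: float) -> float:
    """Free-air rule of thumb (sea level): CFM = 3.16 * W / dT(degF)."""
    return 3.16 * watts / delta_t_f

# Assume ~15 drives at ~8W each, letting the cage air rise ~18degF (10degC)
ideal = required_cfm(15 * 8, 18.0)
print(f"free-air minimum: {ideal:.0f} CFM")  # before cage/filter losses
```

The raw number is small, but after static-pressure losses through a packed hot-swap cage, fans rated well above it (80+ CFM) are the practical floor.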

 

I can't tell from your picture just how well ventilated the case is -- i.e. are there adequate air intakes and exhausts that your fans can move the air they're designed to move?

 

I suspect it's likely a combination of these factors -- reapplying good thermal compound and increasing the CFM will likely resolve it.


I had used Noctua's NT-H1 thermal compound. It came with the heatsinks.

I used the X method, and I usually seem to have a little thermal compound squish out.

The WD RE drives sit around 46°C when doing a parity check, and the others around 40°C.

 

Idle temps are 40°C on the WD RE

and 35°C on the WD Reds

 

The 120mm fans on the drive cages are the Rosewill 120mm fans that came with them.
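If you want to log those temps over time instead of spot-checking, one option is parsing `smartctl -A` output. The sample line below is illustrative, and attribute 194 (Temperature_Celsius) is the usual, though not universal, temperature attribute:

```python
def temp_from_smart(output: str) -> "int | None":
    """Pull the current temperature out of `smartctl -A` text.

    Looks for the Temperature_Celsius attribute row; the current
    reading is the first number in the RAW_VALUE (10th) column.
    Returns None if the attribute isn't present.
    """
    for line in output.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] == "Temperature_Celsius":
            return int(fields[9])
    return None

# Illustrative sample row, in the standard smartctl attribute layout:
SAMPLE = "194 Temperature_Celsius 0x0022 117 104 000 Old_age Always - 40"
print(temp_from_smart(SAMPLE))  # -> 40
```

In practice you'd feed it the output of `smartctl -A /dev/sdX` (needs root) on a cron or script loop and watch how parity checks move the numbers.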

 

 

 

Edited by demonmaestro
30 minutes ago, demonmaestro said:

Idle temps are 40°C on the WD RE

and 35°C on the WD Reds

These are a bit too high -- they should be around 30°C for Reds, if they are low-RPM drives.

What is the ambient temp for the location where the server is stored?

Sorry, didn't understand -- why can't you rotate a cooler?

