Intel Core i9-13900K with SuperMicro X13SAE-F or Gigabyte MW34-SP0?



On 10/15/2023 at 6:55 AM, mikeyosm said:

Great news, thanks for the update. Do you mind showing a screenshot of the complete IPMI sensors please? I would like to see what sensors are detected.

Also, sorry for the late reply. Here you go.

 

The IPMI card is really nothing special, and at the moment it is making me pull my hair out. I have major problems with video output, which is exactly what I need the IPMI card for (BIOS control - which btw exposes better sensor readings - and booting alternate OSes). Legacy boot is not possible with graphics card(s) installed -> the BIOS always grabs the other card regardless of any BIOS settings (iGPU, etc.). Secure Boot also messed up my setup, and now I have been fighting for a week to get video back even with CSM disabled. Remote CMOS reset sounds like an awesome feature... nope, it resets the IPMI too!! So someone has to restart the machine by hand, and I couldn't even confirm that an actual BIOS reset happened.

 

So if you only need the IPMI for sensors or for starting/stopping the machine, it is actually pretty cheap. I don't know whether the disabled sensors are disabled in software or simply not hooked up; a few cables were not shipped with the card :/. If you need to rely on solid remote control, skip the 100 bucks and put them into a PiKVM or something similar. (Opinion from an engineer - I am not an IT guy, but I have been working, or trying to work, with this IPMI for 3 months now and I hate it more every day...)
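For the sensors/power-control use case, a couple of standard `ipmitool` invocations cover most of it from any machine on the network. This is a generic sketch, not specific to this card; the BMC address and credentials below are placeholders:

```shell
# Generic IPMI queries via ipmitool (hostname/credentials are placeholders).
BMC_HOST="192.168.1.50"   # hypothetical BMC address
BMC_USER="admin"
BMC_PASS="changeme"

if command -v ipmitool >/dev/null 2>&1; then
    # List all sensor readings (the screenshots below show this same data in the web UI)
    ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" sensor list

    # Remote power control without walking to the machine
    ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" chassis power status
else
    echo "ipmitool not installed"
fi
```

Sensors that are physically not hooked up will typically show as "na" in the `sensor list` output, which doesn't tell you whether they are disabled in firmware or just missing a cable.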

Bildschirmfoto 2023-12-11 um 01.40.11.png

Bildschirmfoto 2023-12-11 um 01.40.07.png

Bildschirmfoto 2023-12-11 um 01.39.57.png

On 11/18/2023 at 7:09 PM, ByteNick said:

I am trying to build something very similar.

What kind of CPU cooler have you used?

I need to use all the PCIe slots, and I'm afraid a cooler that is too big would block the slot closest to the CPU.

I have a picture of my layout. I would definitely suggest water cooling; it leaves plenty of space around the VRM. This is the mentioned Asus Ryujin 2; directly underneath it sits the small IPMI card. Cable management is poor, but my goal was just to connect everything, and I couldn't bend the riser cable any further. So every PCIe slot (and every M.2 except the CPU one) is hooked up. I went from the M.2s to full-size PCIe with 2 different adapters. If you have the space, an M.2-to-miniSAS or U.2 adapter is more elegant, but if you need to hook up something other than storage, this would be the way to go in my opinion.

 

Every card is properly recognized, but I haven't performance-tested them yet, because I have problems with the LSI HBA, which btw gets extremely hot (that's why I put 3 fans next to it: one big one and 2 small switch fans).

IMG_5859.jpg

2 hours ago, blacklight said:

I have a picture of my layout. I would definitely suggest water cooling; it leaves plenty of space around the VRM. This is the mentioned Asus Ryujin 2; directly underneath it sits the small IPMI card. Cable management is poor, but my goal was just to connect everything, and I couldn't bend the riser cable any further. So every PCIe slot (and every M.2 except the CPU one) is hooked up. I went from the M.2s to full-size PCIe with 2 different adapters. If you have the space, an M.2-to-miniSAS or U.2 adapter is more elegant, but if you need to hook up something other than storage, this would be the way to go in my opinion.

 

Every card is properly recognized, but I haven't performance-tested them yet, because I have problems with the LSI HBA, which btw gets extremely hot (that's why I put 3 fans next to it: one big one and 2 small switch fans).

IMG_5859.jpg

I have plans to use the PCI slots for:
 

PCIe slot 1: x1 IPMI
PCIe slot 2: x8 SAS Controller: LSI SAS 9305-24i Host Bus Adapter (PCIe 3.0 x8, 8000 MB/s; 6 connectors supporting 24 internal 12Gb/s SATA+SAS ports, SATA link rates of 3Gb/s and 6Gb/s)

PCIe slot 3: x8 GIGABYTE - AORUS Gen4 AIC Adaptor, PCIe 4.0 GC-4XM2G4
PCIe slot 4: x4 NIC: Intel i350-T4 4x 1GbE Quad Port Network LAN Ethernet PCIe x4 OEM Controller card (I350T4V2BLK)
PCIe slot 5: x4 NVIDIA GeForce RTX 3060
 

The HBA is for the backplane in the 48-bay server case, and the AORUS is for 2 additional M.2 drives (+ the 3 on the board).
I only placed the RTX 3060 because, from my reading, I thought I would need a GPU to use the IPMI?
(I might replace it with an old NVIDIA Quadro P2000.)

21 hours ago, casperse said:

I have plans to use the PCI slots for:
 

PCIe slot 1: x1 IPMI
PCIe slot 2: x8 SAS Controller: LSI SAS 9305-24i Host Bus Adapter (PCIe 3.0 x8, 8000 MB/s; 6 connectors supporting 24 internal 12Gb/s SATA+SAS ports, SATA link rates of 3Gb/s and 6Gb/s)

PCIe slot 3: x8 GIGABYTE - AORUS Gen4 AIC Adaptor, PCIe 4.0 GC-4XM2G4
PCIe slot 4: x4 NIC: Intel i350-T4 4x 1GbE Quad Port Network LAN Ethernet PCIe x4 OEM Controller card (I350T4V2BLK)
PCIe slot 5: x4 NVIDIA GeForce RTX 3060
 

The HBA is for the backplane in the 48-bay server case, and the AORUS is for 2 additional M.2 drives (+ the 3 on the board).
I only placed the RTX 3060 because, from my reading, I thought I would need a GPU to use the IPMI?
(I might replace it with an old NVIDIA Quadro P2000.)

Also an interesting setup. Are you going for a media center (Plex etc.)? I am asking because of the many M.2/NVMe drives :P

 

You definitely don't need a GPU for the IPMI! As I mentioned, it could even make your day worse (in legacy boot mode). What you could do: hook a dedicated ASUS VGA-to-IPMI cable up to the GPU, but depending on the use case that is not necessary. My guess is you would only need it for virtualization -> passing through cards while still wanting to switch OSes on the fly for testing etc. Again: as long as you only boot UEFI, you can plug in whatever you want, and assuming you didn't screw up any settings, the IPMI should work with or without a dedicated GPU.

 

Keep in mind that the last two slots are only x4 and ONLY Gen3, so you are going to see worse performance with your 3060. Depending on the use case (AI, for example - that's what I planned, Stable Diffusion etc.), you can maybe live with the reduced bandwidth.
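To put rough numbers on that: one-direction PCIe throughput is approximately transfer rate × lanes × encoding efficiency. A quick back-of-the-envelope helper (the function name is mine; figures are nominal and ignore protocol overhead beyond the 128b/130b line coding):

```shell
# Approximate one-direction PCIe bandwidth in GB/s:
#   GT/s per lane * lanes * encoding efficiency / 8 bits-per-byte
pcie_gbps() {
    gen="$1"; lanes="$2"
    case "$gen" in
        3) rate=8;  eff="0.9846" ;;   # Gen3: 8 GT/s, 128b/130b encoding
        4) rate=16; eff="0.9846" ;;   # Gen4: 16 GT/s, 128b/130b encoding
        *) echo "unsupported gen" >&2; return 1 ;;
    esac
    awk -v r="$rate" -v l="$lanes" -v e="$eff" 'BEGIN { printf "%.1f", r * l * e / 8 }'
}

pcie_gbps 3 4    # Gen3 x4  -> ~3.9 GB/s (what the lower slots give the 3060)
echo
pcie_gbps 4 16   # Gen4 x16 -> ~31.5 GB/s (the card's full link)
echo
```

So the 3060 in a Gen3 x4 slot is down to roughly an eighth of its full link bandwidth; whether that matters depends entirely on the workload.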

 

Anyway, it would be nice if you could keep us up to date on your system; I would be interested in how it goes for your PCIe monster.

 

And one last question: why are you going for 1G LAN only? With that much storage, wouldn't at least 10G be nice? :D Just curious.

3 hours ago, blacklight said:

Also an interesting setup. Are you going for a media center (Plex etc.)? I am asking because of the many M.2/NVMe drives :P

 

You definitely don't need a GPU for the IPMI! As I mentioned, it could even make your day worse (in legacy boot mode). What you could do: hook a dedicated ASUS VGA-to-IPMI cable up to the GPU, but depending on the use case that is not necessary. My guess is you would only need it for virtualization -> passing through cards while still wanting to switch OSes on the fly for testing etc. Again: as long as you only boot UEFI, you can plug in whatever you want, and assuming you didn't screw up any settings, the IPMI should work with or without a dedicated GPU.

 

Keep in mind that the last two slots are only x4 and ONLY Gen3, so you are going to see worse performance with your 3060. Depending on the use case (AI, for example - that's what I planned, Stable Diffusion etc.), you can maybe live with the reduced bandwidth.

 

Anyway, it would be nice if you could keep us up to date on your system; I would be interested in how it goes for your PCIe monster.

 

And one last question: why are you going for 1G LAN only? With that much storage, wouldn't at least 10G be nice? :D Just curious.


Yes, my existing Unraid server (see configuration below) lacks the power to run everything.
Plex/Emby/Jellyfin server, yes! But all the great Dockers running on the side - NextCloud, Paperless, Synology DSM Docker (a NAS in a NAS, with apps), etc. - just need more power and NVMes. Plus a couple of VMs for gaming (AMP as a game-server VM is awesome).
I would also like enough NVMes for a ZFS mirror or RAID to protect the data and get snapshot support for appdata and the VMs.

I would need a GPU for Unmanic 🙂 but I guess the Quadro P2000 would be better in an x4 slot (and it is very energy efficient).
The very small 3060 was meant to get better quality when encoding with Unmanic.

Correct, 10G is on my list, but I really want a Unifi 10G switch, and I am waiting for a better product than the existing one - ideally one that also supports 2.5G networking in an enterprise rack setup.

I must admit that the lack of PCIe lanes almost made me go AMD, but the iGPU QuickSync performance made me choose Intel again - like many others.
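For the NVMe mirror idea: on recent Unraid releases with native ZFS, a mirrored pool with per-dataset snapshots looks roughly like this from the CLI. The pool name, device names, and dataset layout here are made up for illustration - on Unraid you would normally create the pool through the GUI, and `zpool create` destroys whatever is on the named devices:

```shell
# Hypothetical two-NVMe ZFS mirror for appdata/VMs (illustrative names, destructive!)
zpool create fastpool mirror /dev/nvme0n1 /dev/nvme1n1

# Separate datasets so appdata and VM images can be snapshotted independently
zfs create fastpool/appdata
zfs create fastpool/domains

# Manual snapshot before risky changes; a cron job or plugin can automate this
zfs snapshot fastpool/appdata@before-update
zfs rollback fastpool/appdata@before-update   # roll back if the update goes wrong
```

The mirror gives you redundancy against a single NVMe failure, and the snapshots give the appdata/VM protection mentioned above without a full backup cycle.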

On 12/12/2023 at 7:02 AM, ByteNick said:

I installed the X13SAE-F into a Supermicro 846 case.

I cannot take the PSU readings. Any idea why? 

Did you wire up the PSU, or is there an internal (dedicated) connector for the PSU in the rack case? I don't know that system, that's why I ask. My IPMI is also not showing any PSU information, because the headers are not attached to anything - mainly because I use a consumer Corsair PSU and not an industrial server one :P The Corsair has its own software to check wattage etc.; for that I have it hooked up to an internal USB header (not the IPMI though), so I can watch the stats in, for example, a Windows VM - but the IPMI can't handle the custom sensor data stream. My guess is you need all-dedicated server hardware for this. Probably better to open a new thread :)

  • 2 months later...

Hello Everyone,

 

Noob alert 🙂 I just built my Unraid server with the following configuration:

 

SUPERMICRO MBD-X13SAE-F-O Motherboard

Intel Core i5-13600K 

Supermicro (Micron) 32GB 288-Pin DDR5 4800 (PC5-38400) Server Memory (MEM-DR532L-CL01-EU48) X 2

Corsair RM750e Power supply.

Case: HL15 (I really liked this case; since I am going to keep it in an open rack, I spent the extra $)

 

My HL15 came with 4 Mini-SAS-HD cables from the backplane. I want to connect 10 HDDs, so I got this HBA: an LSI 9300-16i, 16 ports (flashed to IT mode). I was hoping I could run all drives with this single card.

 

This card needs x8 lanes at PCIe 3.0, so I connected it to SLOT4, which is a PCIe 5.0 x8 (in x16) slot. In the BIOS I can see the LSI card and the HDDs attached to it, but Unraid doesn't show the HDDs. I moved the LSI card to SLOT7, which is a PCIe 5.0 x16 slot, and there I can see the HDDs in Unraid.

 

I want to keep SLOT7 for a graphics card in the future. Am I making a basic mistake here? I even changed the SLOT4 speed to Gen3, and still no luck.

 

Can someone please help? What is the right way to use this LSI card? Should I go and get an 8i card and use SATA for the rest of the drives?

 

thank you
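When the card shows up in the BIOS but the drives don't appear in Unraid, a few generic checks from the Unraid console usually narrow down whether it is a link, driver, or passthrough problem. These are standard Linux tools, nothing card-specific:

```shell
# Is the HBA enumerated on the PCIe bus, and at what link width/speed?
lspci | grep -i lsi
lspci -vv -d 1000: | grep -i 'LnkSta:'   # 1000: is the LSI/Broadcom vendor ID

# Did the mpt3sas driver bind and find the drives?
dmesg | grep -i mpt3sas | tail -n 20

# What block devices does the kernel actually see?
ls -l /dev/disk/by-id/ | grep -v part

# Is the card accidentally bound to VFIO (which hides it from Unraid)?
cat /boot/config/vfio-pci.cfg 2>/dev/null
```

If `lspci` shows the card in SLOT4 but `dmesg` has no mpt3sas lines, it points at a slot/BIOS issue; if the driver loads but no disks appear, it points at cabling or the backplane.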

Edited by manny99
On 2/21/2023 at 8:03 PM, spamalam said:

Finally got round to buying this a few weeks back and just assembled it:

  • i9 13900k
  • 128GB ECC memory - 4x KSM48E40BD8KM-32HM (gets clocked down)
  • Supermicro X13SAE-F , Version 1.02
  • 3x WD 850X with heatsinks
  • I used my old SAS card: Broadcom / LSI SAS3224 PCI-Express Fusion-MPT SAS-3 - this only worked in Slot 5
  • Quad PCIe 3.0 NVMe card with 4x WD 850X - this worked fine in Slot 7. There is no bifurcation on this board, so I needed this card to use more M.2s; note it's limited to 3500 MB/s per drive.

The motherboard shipped with version 2.0 of the BIOS, so no need to flash or mess around:

Supermicro X13SAE-F , Version 1.02

American Megatrends International, LLC., Version 2.0
BIOS dated: Mon 17 Oct 2022 12:00:00 AM CEST

 

Everything worked out of the box. I enabled modprobe i915 in /boot/config/go and passed --device=/dev/dri through to Plex, and I can see it using HW transcoding now. Not much config needed:
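For reference, the whole iGPU-to-Plex setup described above is just two pieces: a line in the go file and a device mapping on the container. A minimal sketch (paths are the usual Unraid defaults; adjust to your container template):

```shell
# Appended to /boot/config/go - load the Intel iGPU driver at boot
modprobe i915

# Optional sanity check after boot: the render node should exist
ls -l /dev/dri
```

Then add `--device=/dev/dri` as an extra parameter on the Plex container (or map /dev/dri as a device in the Docker template) and enable hardware transcoding in Plex's settings.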

 

i9.thumb.png.61423f4361ff7d6f7f9ce43f1cc74503.png

i9-2.png.9ce229aa888448e8e826eea462056edb.pngi9-3.png.f9bd0dfc9fd54d641b9dd6dd6083a9c4.pngi9-4.thumb.png.99195c68a4e9cc7570b12a0ce3d4a2a8.png

 

Hello,

 

So I wanted to talk about this. For the life of me, I cannot POST with the SAS3224 and the quad M.2 card anymore. I can't explain how it booted before and not now. The M.2s drop out one by one, and when I tried to swap the M.2 card for another HBA, the motherboard won't POST. I tried upgrading firmware and playing with the BIOS, but nothing: as soon as I put another card in SLOT7, it seems to give up on running PCIe 3.0 x8 plus PCIe 3.0 x8 (over x16). I haven't gone all the way back to BIOS 2.0 yet.

 

I had tried an HBA 9600-24i, but it couldn't see any drives. I think it's the cabling: I thought I'd bought 2x 05-60005-00 cables for 4x U.2, but they seem to be non-Broadcom knock-offs. I can only find the 05-60005-00 listed as working with the HBA 95xx series; guess I'll email Broadcom :D

Edited by scs3jb
On 3/12/2024 at 1:08 AM, manny99 said:

Hello Everyone,

 

Noob alert 🙂 I just built my Unraid server with the following configuration:

 

SUPERMICRO MBD-X13SAE-F-O Motherboard

Intel Core i5-13600K 

Supermicro (Micron) 32GB 288-Pin DDR5 4800 (PC5-38400) Server Memory (MEM-DR532L-CL01-EU48) X 2

Corsair RM750e Power supply.

Case: HL15 (I really liked this case; since I am going to keep it in an open rack, I spent the extra $)

 

My HL15 came with 4 Mini-SAS-HD cables from the backplane. I want to connect 10 HDDs, so I got this HBA: an LSI 9300-16i, 16 ports (flashed to IT mode). I was hoping I could run all drives with this single card.

 

This card needs x8 lanes at PCIe 3.0, so I connected it to SLOT4, which is a PCIe 5.0 x8 (in x16) slot. In the BIOS I can see the LSI card and the HDDs attached to it, but Unraid doesn't show the HDDs. I moved the LSI card to SLOT7, which is a PCIe 5.0 x16 slot, and there I can see the HDDs in Unraid.

 

I want to keep SLOT7 for a graphics card in the future. Am I making a basic mistake here? I even changed the SLOT4 speed to Gen3, and still no luck.

 

Can someone please help? What is the right way to use this LSI card? Should I go and get an 8i card and use SATA for the rest of the drives?

 

thank you

I can just tell you from my experience that Unraid is a pain in the *** for virtualization with EXACTLY this card (see my other posts), but that was never the fault of the NAS component of Unraid itself. It always detected all drives attached to this HBA as long as I didn't bind it to VFIO, and it also detected all other drives attached to the mainboard. Did you figure this out? Otherwise, go through the BIOS and check every option you find regarding the LSI HBA and/or the attached HDDs. There should be a separate BIOS entry for the HBA - what information is in there? Did you flash both controllers? This HBA has two x8 SAS controllers! Did you flash them yourself, and how? Did you verify IT mode (I only managed that once, successfully, with a booted EFI version of sas3flash)? Did you check that the card is not bound to VFIO? Did you test the PCIe slot with another device? Is the slot maybe turned off in the BIOS, or by a bifurcation setting?
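On the "verify IT mode" point: the standard `sas3flash` utility can also check both controllers from a booted Linux system, not only from EFI. A hedged sketch (controller numbering may differ on your card, and `sas3flash` must be the SAS3-generation build):

```shell
# List all LSI SAS3 controllers sas3flash can see; a 9300-16i should show two
sas3flash -listall

# Show firmware details per controller; the firmware version string should end in "-IT"
sas3flash -c 0 -list
sas3flash -c 1 -list

# Check which kernel driver the HBA is bound to - mpt3sas is good, vfio-pci means
# the card is reserved for passthrough and invisible to the Unraid array
lspci -k -d 1000: | grep -iA2 sas
```

If only one of the two controllers reports IT firmware, that would explain seeing some drives and not others.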

On 3/19/2024 at 6:57 PM, scs3jb said:

 

Hello,

 

So I wanted to talk about this. For the life of me, I cannot POST with the SAS3224 and the quad M.2 card anymore. I can't explain how it booted before and not now. The M.2s drop out one by one, and when I tried to swap the M.2 card for another HBA, the motherboard won't POST. I tried upgrading firmware and playing with the BIOS, but nothing: as soon as I put another card in SLOT7, it seems to give up on running PCIe 3.0 x8 plus PCIe 3.0 x8 (over x16). I haven't gone all the way back to BIOS 2.0 yet.

 

I had tried an HBA 9600-24i, but it couldn't see any drives. I think it's the cabling: I thought I'd bought 2x 05-60005-00 cables for 4x U.2, but they seem to be non-Broadcom knock-offs. I can only find the 05-60005-00 listed as working with the HBA 95xx series; guess I'll email Broadcom :D

If you consider trying another mainboard (I know it's not ideal): I am close to using all the PCIe lanes on the Asus W680 ACE with the i9. It works like a charm, and the IOMMU layout is perfect for my use case. You can easily run risers from the M.2 slots for additional x4 PCIe slots, and I will even try a SlimSAS-to-PCIe adapter soon; that would give you up to 8 PCIe slots (2 x8 and 6 x4) on a workstation mainboard.

I can't speak for that HBA and mainboard combination in particular - never used them, sorry...

 
