ASRock Industrial IMB-X1714 (ATX, Intel W680)


Recommended Posts

One of the mobos on my list for an Unraid server is the Asus Pro WS W680-ACE IPMI for 12th & 13th generation Intel Core CPUs. (We have a topic on that here.)

 

But I just found out about the new IMB-X1714 by ASRock Industrial:

 

https://www.asrockind.com/en-gb/IMB-X1714

 

…and it kinda looks like it's a little better for building your own server. What do you think? Here's a quick comparison, but also see my question regarding the SIM card at the end of this post.

Price: I haven't found a lot on the ASRock yet, but it seems that it would be about 150 to 200 USD/EUR cheaper than the Asus

Network: the Asus has 2 x 2.5GbE ports, the ASRock has three, one of them with PoE

IPMI: only Asus offers an IPMI model (for a few dozen bucks more), so with the ASRock you'd need to set up your own KVM solution, e.g. with a TinyPilot Voyager 2a

Security: both have support for discrete TPM, while the ASRock also has onboard Intel PTT

Chipset & RAM: both use the Intel W680, both support DDR5 ECC memory, the Asus up to 192 GB, but the ASRock only up to 128 GB… maybe 192 GB will become possible with a future BIOS update

PCI Express (general): the Asus has five PCIe slots, while the ASRock has the full seven for an ATX build, i.e. it's a lot more versatile for building or expanding your server

PCIe x16: both have 2 x 5.0 x16 (x16/NA or x8/x8), while the ASRock also supports a dual x8 riser for the primary x16 slot

PCIe x4: the Asus has 2 x 3.0 x4 (in x16), while the ASRock has 2 x 4.0 x4 (open slots); note: see M.2 storage (PCH) below

PCIe x1: the Asus has one 3.0 x1, while the ASRock has three (open slots)

M.2 PCIe NVMe storage (CPU-direct): both have one gen4 x4 M.2 slot

M.2 PCIe NVMe storage (via PCH): the Asus has two gen4 x4 M.2 slots, while the ASRock has only one gen3 x4, but this allows the ASRock more versatility with regard to PCIe expansion, specifically with the two PCIe 4.0 x4 slots

Nota bene: the gen3 M.2 slot on the ASRock with 3.938 GB/s would imho be A-OK for an Unraid application cache, even if you use SATA SSDs instead of HDDs for your storage, or for a dedicated unassigned volume for macOS Time Machine backups

SATA storage: the Asus has four plug&play SATA ports and support for four more using the 4i SlimSAS connector (which can also be used in PCIe 4.0 x4 mode), i.e. you'll need a pricey adapter cable, while the ASRock has eight plug&play SATA connectors at no extra cost

M.2 Key E: both have one slot for WiFi/Bluetooth (PCIe 3.0 x1), which you can use with an adapter card for two more SATA ports

M.2 Key B: only the ASRock has such a slot (PCIe 3.0 x1), with a SIM card adapter built-in; note: see below

Video/Audio: both have DP + HDMI + VGA plus audio I/O (5 x audio on the Asus, 3 x audio on the ASRock)

External COM ports: the Asus has none, while the ASRock has two (RS-232/422/485); note: the ASRock also has a lot of internal RS-232 COM headers

USB rear I/O: the Asus has one Type C 3.2gen2 and one Type A 3.2gen2 plus four Type A 3.2gen1 and two Type A 2.0, while the ASRock has one Type C 3.2gen2x2 and five Type A 3.2gen2, i.e. no USB 3.2gen1 or USB 2.0 ports, only 3.2gen2

Internal USB Type A ports: the Asus has none, while the ASRock has two internal USB 2.0 Type A connectors, i.e. it can also be used for TrueNAS with two mirrored thumb drives for boot/system, or for Unraid with one boot drive and one dummy array drive, if you only want a ZFS storage pool

USB headers (2 ports each): both have one 3.2gen1 header, but the Asus has two 2.0 headers, while the ASRock has only one (pitch header)

Internal USB connector: the Asus has one 3.2gen2x2 connector with support for Type C, while the ASRock has none; note: its 3.2gen2x2 is already present as a rear I/O port (Type C)

Fan connectors etc.: both have one CPU fan header and three 4-pin chassis fan headers, but the Asus has two additional 4-pin headers (optional CPU fan and water pump)

Other internal connectors/headers: both have Thunderbolt etc.

Question about SIM cards: I haven't looked into this at all, especially with regard to Unraid, but maybe you could use a SIM card as a fallback WAN in case your copper/fibre WAN is down, i.e. you'd always be able to reach your server. For normal operation, you could use it to send server notifications via SMS, or maybe use the mobile carrier's time signal to run a local NTP server. (?) However, I don't know if all of that is possible with Unraid.
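
For the SMS idea, maybe something along these lines would work from a user script (completely untested; it assumes the modem shows up as a serial device, e.g. /dev/ttyUSB2, that the pyserial package is installed, and that the modem speaks the standard GSM AT commands — device path and phone number are just placeholders):

```python
# Untested sketch: send an Unraid notification as an SMS via a cellular modem,
# using standard GSM AT commands over a serial port (pyserial).
# /dev/ttyUSB2 and the phone number below are placeholders.
import time
import serial  # pip install pyserial

def send_sms(port: str, number: str, text: str) -> None:
    with serial.Serial(port, baudrate=115200, timeout=5) as modem:
        modem.write(b"AT+CMGF=1\r")                  # switch modem to SMS text mode
        time.sleep(0.5)
        modem.write(f'AT+CMGS="{number}"\r'.encode())
        time.sleep(0.5)
        modem.write(text.encode() + b"\x1a")         # Ctrl-Z terminates the message
        time.sleep(2)
        print(modem.read(modem.in_waiting or 64).decode(errors="replace"))

if __name__ == "__main__":
    send_sms("/dev/ttyUSB2", "+10000000000", "Unraid: array started")
```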
 

IMB-X1714_TOP.png

Edited by eicar
Link to comment

I love that the ASRock has more PCIe slots! Can they all be used at full bandwidth at the same time, though? Or will there be some sort of bottleneck when all PCIe slots and M.2 slots are in use? That would be my concern.
 

Also do we know how well the ASRock would do with virtualization? Are the IOMMU groups known?

Link to comment

In a home server setting, the NIC will usually be the bottleneck. (Case in point: I plan to have a RAIDz1 storage pool with seven SATA SSDs, but I'll only have a dual SFP+ NIC. I would have to upgrade to a 25G SFP28 network, but that's in the future.)
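
Rough numbers behind that claim (just a back-of-the-envelope sketch; the ~550 MB/s per SATA SSD and the ideal scaling are assumptions):

```python
# Back-of-the-envelope: is the NIC or the SSD pool the bottleneck?
# All figures are assumptions; real-world numbers will vary.
SATA_SSD_GBPS = 0.55        # ~550 MB/s sequential per SATA SSD
DATA_DRIVES = 7 - 1         # 7-wide RAIDz1 -> 6 data drives
SFP_PLUS_GBPS = 10 / 8      # one 10 Gb/s SFP+ link, in GB/s

pool = SATA_SSD_GBPS * DATA_DRIVES
print(f"pool (ideal sequential): ~{pool:.1f} GB/s")
print(f"one 10G SFP+ link:       ~{SFP_PLUS_GBPS:.2f} GB/s")
print("bottleneck:", "NIC" if SFP_PLUS_GBPS < pool else "pool")
```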

 

But putting that aside, the Intel W680 chipset is definitely able to handle all of that, namely 12 lanes of PCIe 4.0, and 16 lanes of PCIe 3.0:

 

https://ark.intel.com/content/www/us/en/ark/products/218834/intel-w680-chipset.html

 

(This obviously includes almost everything you see on the board: SATA, USB, networking, the gen3 M.2 slot etc.)

 

But there's always another potential bottleneck, namely the DMI, which in our case can handle data throughput of "only" 126 Gb/s = 15.754 GB/s between chipset and CPU, which corresponds to eight lanes of PCIe 4.0. And that's where you'd have to start calculating. One thing is easy: if you populate both PCIe 4.0 x4 slots on the ASRock with (for example) one gen4 M.2 NVMe SSD each, you will already have saturated the DMI.
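
For reference, this is where the 126 Gb/s / 15.754 GB/s figure comes from, and why two chipset-attached gen4 x4 NVMe SSDs are already enough to fill it (theoretical link rates only, not real drive throughput):

```python
# DMI 4.0 is equivalent to 8 lanes of PCIe 4.0:
# 16 GT/s per lane with 128b/130b encoding.
LANES = 8
GT_PER_S = 16e9
ENCODING = 128 / 130

dmi_bits = LANES * GT_PER_S * ENCODING            # bits per second
print(f"DMI 4.0 x8: {dmi_bits / 1e9:.1f} Gb/s = {dmi_bits / 8e9:.3f} GB/s")

# One PCIe 4.0 x4 device can move up to ~7.88 GB/s,
# so two of them on the PCH can saturate the DMI on their own.
x4 = 4 * GT_PER_S * ENCODING / 8e9
print(f"PCIe 4.0 x4 link: {x4:.2f} GB/s, two of them: {2 * x4:.2f} GB/s")
```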

 

So it's always better to handle the important stuff via the CPU-direct slots, e.g. a NIC (x8), or a PCIe-bifurcated dual or quad gen4 M.2 SSD carrier card, or something like the Broadcom HBA 9500-8i for two additional gen4 M.2 SSDs.

 

As for the DMI, looking at it from the PCIe 4.0 vantage point: eight SATA SSDs would need about 2 lanes, and the gen3 M.2 SSD another two, which would leave you with roughly 4 lanes of PCIe 4.0 (or 8 lanes of PCIe 3.0, or 16 lanes of PCIe 2.0), which is still enough for lots of SATA HBAs, the occasional USB connection etc.
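
The same arithmetic as a small lane budget (again a sketch, assuming ~550 MB/s per SATA SSD and ~1.97 GB/s per PCIe 4.0 lane):

```python
# DMI budget expressed in PCIe 4.0 lane equivalents.
LANE_GBPS = 16e9 * 128 / 130 / 8e9   # ~1.97 GB/s per PCIe 4.0 lane
DMI_LANES = 8

sata = 8 * 0.55                      # eight SATA SSDs at ~550 MB/s each
gen3_m2 = 3.938                      # the PCH gen3 x4 M.2 slot, link maximum

used = (sata + gen3_m2) / LANE_GBPS
left = DMI_LANES - used
print(f"SATA + gen3 M.2: ~{used:.1f} lane equivalents")
print(f"left on the DMI: ~{left:.1f} lanes of PCIe 4.0 "
      f"(~{2 * left:.0f} lanes of PCIe 3.0)")
```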

 

As for virtualization & IOMMU groups? Sorry, can't say. (Couldn't find anything either.)

Edited by eicar
  • Thanks 1
Link to comment

I’m building this for a Plex/gaming VM, so I’d put an RTX 4070 into the PCIe 5.0 x16 CPU slot.
 

Then for the other slots I’d really only need one 10GbE network card and one USB 3 expansion card (to pass through to my VM), and maybe in the future another SATA expansion card for spinning hard drives. So no SSD storage would go into any of the PCIe slots.
 

My biggest worry is the IOMMU configuration, though, as I can’t find anything on this and I need to pass a couple of things through to my VM (graphics card, USB 3 expansion card, and one of the M.2 drives).

Link to comment

The RTX 4070 is an x16 card, so you would not be able to use the second CPU-direct PCIe 5.0 slot. This means you'd have to run the 10G NIC via the PCH using one of the two PCIe 4.0 x4 slots. On paper, this would leave you with 4 lanes of PCIe 4.0 on the chipset, or 8 lanes of PCIe 3.0, until you've saturated the DMI. However, even if you use a 10G PCIe 4.0 x4 NIC, e.g. the one by OWC, the actual bandwidth used by a 10G connection will be below one lane of PCIe 4.0. So that would leave you with enough expansion options before reaching a DMI bottleneck. (IMHO.)
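
Quick sanity check on that last point (line rates only, protocol overhead ignored):

```python
# 10GbE line rate vs. one lane of PCIe 4.0.
nic = 10e9 / 8e9                    # 10 Gb/s Ethernet -> 1.25 GB/s
lane = 16e9 * 128 / 130 / 8e9       # one PCIe 4.0 lane -> ~1.97 GB/s
print(f"10GbE:       ~{nic:.2f} GB/s")
print(f"PCIe 4.0 x1: ~{lane:.2f} GB/s")
print("10GbE fits into a single gen4 lane:", nic < lane)
```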

Link to comment
16 hours ago, eicar said:

The RTX 4070 is an x16 card, so you would not be able to use the second CPU-direct PCIe 5.0 slot. This means you'd have to run the 10G NIC via the PCH using one of the two PCIe 4.0 x4 slots.

Addendum: since you still have the CPU-direct M.2 slot with PCIe 4.0 x4, and since (as you wrote) you don't seem to need any M.2 SSDs, you could also use an M.2 to PCIe riser/extender. Then you'd be able to leave the PCH alone and have the CPU deal with the NIC directly. But I can't say if this would work… and the cable would need to be long & flexible enough, and have the right angle.

 

Something like this (four lanes of PCIe 4.0), one of six variants:

 

https://www.amazon.com/dp/B09C1K62PN

 

Edited by eicar
Link to comment
On 9/18/2023 at 7:22 PM, eicar said:

Currently, this looks like my favorite, but I'll wait for comments and reviews. I probably won't buy my server parts anyway until early 2024.

In a recent video about ASRock's warranty practices (which he considers illegal), Louis Rossmann also excoriated ASRock hardware in general, though based solely on his own experience. (But he's a hardware repair professional, so his opinion matters, at least to me.) I'll keep surveying the market: the W680 is rather new, and I suspect we'll see a couple more ATX boards with that chipset in the future. (Currently we only have this one by ASRock Industrial, one by Asus, and one by Supermicro.)

Link to comment