UnRAID on Asus Pro WS W680-ACE IPMI


Recommended Posts

12 hours ago, Daniel15 said:

 

I bought it directly from the Asus website. I'm in the USA but I'm not sure which other regions their store operates in. Newegg has them in stock too.

 

For the RAM, I'm using two sticks of Kingston server DDR5 ECC RAM, model number KSM48E40BD8KM-32HM. I got them for $100 each using an employer discount with a supplier we use at work. It's also the only ECC RAM on Asus' compatibility list that seems to be easily obtainable - I couldn't figure out where to buy the others.

 

I don't see the RAM temperature anywhere, even though I did see it when running memtest86, so there's probably just some sensor configuration I need to do? I'm not sure, as sensors-detect didn't detect it...

 

For the CPU, I'm using a Core i5-13500. I usually stick to the ...500 CPUs since they're often the best balance of price and performance. My previous PC had an i5-6500 and the small form factor PC I'm replacing with this new server has an i5-9500. :)

 

If you want to do anything with the integrated graphics (Plex or Jellyfin server, security cameras, etc.), the iGPU in the 13500 (UHD 770) is more powerful than the one in the 13400 and below (UHD 730) and can support more concurrent video encoding/decoding jobs. Both support SR-IOV, which lets you use the one GPU in multiple VMs and Docker containers at the same time (you'll need the SR-IOV plugin).

Excellent info, thanks for sharing. Got the DDR5 sticks already but might opt for a 13700 or even 14700 if the BIOS permits.

Link to comment

By the way, you'll want to disable "fast boot" in the BIOS, otherwise booting from USB drives will fail unless you do it via the BIOS. Took me a while to figure that out.

 

On the positive side, having IPMI is great since I can get into the BIOS remotely, without having to plug a screen and keyboard into the system itself. It's sitting on a shelf in a narrow closet which makes it a pain to physically plug stuff into it. The IPMI web interface is pretty decent and has all the basic features you'd expect (power on/off, shut down, remote control, sensors, fan control). If you're having trouble logging into the IPMI (e.g. it says the password is wrong), ipmitool or ipmicfg can reset the password. In a way, I could justify the higher cost of the motherboard because it saved me having to buy something like a PiKVM for remote control.
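If you ever need to do that password reset from the Unraid shell, something along these lines should work with ipmitool (just a sketch; the user ID and password below are placeholders, and the BMC's user list varies by board):

```bash
# Load the in-band IPMI kernel drivers so ipmitool can talk to the BMC locally
modprobe ipmi_si
modprobe ipmi_devintf

# List the BMC users on channel 1 (usually the LAN channel) to find the right user ID
ipmitool user list 1

# Set a new password for that user ID (placeholder ID and password)
ipmitool user set password 2 'NewStrongPassword'
```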

 

The IPMI has its own basic GPU, so you can use both the IPMI remote control and the intel iGPU (e.g. for hardware accelerated encoding/decoding via QuickSync) at the same time. That's not the case on some boards.

Edited by Daniel15
  • Like 1
  • Thanks 3
Link to comment

I am using a Core i9-13900T - probably total overkill unless you run a lot of containers, have many users, etc., but I will also use it as a build server and for some other things, and since the build became quite expensive anyway, I wanted some future margin on capacity rather than saving a few bucks on the CPU. It runs cool and does not consume a lot of power, so it has worked out nicely. If you really intend to push the CPU at times and want to avoid thermal throttling, go for a good cooler (when I pushed it really hard, it went up to 200W+ according to the display on the UPS)...

  • Like 1
Link to comment

Thank you. I would in fact run a lot of Docker containers, probably a dozen, maybe more, with a maximum of 7 users but probably no more than 4 simultaneously… no VMs or build server etc. planned, though. So I guess the i5-13600T would be the better choice for me: ECC support too, a slightly higher base clock than the i9, and good value.

Edited by eicar
Link to comment
6 hours ago, eicar said:

Would any of you recommend the Core i9-13900T (35W TDP) with this board for Unraid?

I wouldn't buy a T variant, as the non-T variant is usually the same price. You can see that the i9-13900T and the i9-13900 both have a recommended retail price of $549:

https://www.intel.com/content/www/us/en/products/sku/230498/intel-core-i913900t-processor-36m-cache-up-to-5-30-ghz/specifications.html

https://www.intel.com/content/www/us/en/products/sku/230499/intel-core-i913900-processor-36m-cache-up-to-5-60-ghz/specifications.html

 

The T variant is essentially just a lower binned version with a lower power limit. If you want to reduce power usage, I'd buy a regular i9-13900 (not the T variant) and power limit it in the BIOS. Setting the PL1 limit to 35W and PL2 limit to 106W will make it equivalent to the T variant, except you can always increase the limits a bit if you want more performance in the future. These settings are labelled something different in the Asus BIOS, but I can't remember off the top of my head.
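If you want to sanity-check what the BIOS actually applied, the limits are usually visible from Linux via the intel_rapl powercap interface (rough sketch; assumes the intel_rapl driver is loaded and the package domain shows up as intel-rapl:0):

```bash
# Package power domain (usually reports "package-0")
cat /sys/class/powercap/intel-rapl:0/name

# PL1 (long duration) and PL2 (short duration) limits, in microwatts
cat /sys/class/powercap/intel-rapl:0/constraint_0_name
cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
cat /sys/class/powercap/intel-rapl:0/constraint_1_name
cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw
```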

 

You can also disable turbo and set the governor to powersave mode to save more power. The latter two can be done in Unraid using the "Tips and Tweaks" plugin.
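For reference, the plugin is toggling roughly the same knobs you can reach from a shell (sketch; assumes the intel_pstate driver):

```bash
# Disable turbo (intel_pstate driver)
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

# Set every core's governor to powersave
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo powersave > "$g"
done
```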

 

Intel doesn't officially sell the T variant to retail customers, so it's also a bit more difficult to find, and you'd receive it in just a little plastic container without any official Intel packaging.

 

Honestly, that CPU is overkill. I bought an i5-13500 for mine (paid $249 for it) and even that's overkill if you aren't running a lot of VMs. It supports ECC RAM too. Up to you though - both are good choices :)

 

Edited by Daniel15
  • Like 1
  • Upvote 1
Link to comment

That's really good information arguing against the tray models. Thank you. I had planned to tweak the board & CPU in the BIOS anyway to bring power down. Lots of stuff is possible: disable virtualization (don't need VMs), disable hyper-threading & turbo boost, etc. Not sure what I'll do in the end, but on my Intel MacBook Pro I even disable some cores when I'm away from wall power for longer. Good to know that you can also tweak this with an Unraid plugin. 👍

 

As for the board, the main topic of this thread: I have looked at a lot of other potential builds, but in the end it always seems to come down to the board's chipset and the DMI. Putting aside the NIC, that's where the bottlenecks would occur; maybe not immediately in build stage #1, but when I started (on paper) adding PCIe M.2 SSDs or other components and ran the numbers, the W680 (and 12th/13th gen CPUs) always looked best to me in terms of future expansion.

 

(And the ATX form factor was always plug-and-play; with smaller boards that only have one x16 CPU PCIe slot, I'd need a dual x8 riser/splitter for a future upgrade, e.g. with an additional x8 HBA. Here's looking at you, Broadcom HBA 9500-8i. 😉)

 

I just wish that Asus would include a Slim SAS to 4xSATA cable.

 

Ever since the Asustor FlashStor came out this year, I have also looked at a future-proof MB, one that you could (in principle) use for an all-gen3-M.2 server/NAS. With this board, you can. With the right setup, you could run your server/NAS storage pool on seven gen3 or five gen4 M.2 SSDs with (afaict) little to no speed restrictions. (But if I understand this correctly, for a gen3 build, you'd need a PCIe 4.0 x8 card that supplies 16 lanes to attach four gen3 M.2 SSDs.) At any rate, the Pro WS W680-ACE always looked damn great to me. Then all I'd need is a dual 100G fibre network. Ah, 💩. 😆

 

➡️ But jumping off from the idea of an all-M.2 build, I have a more general question about the PCH and the DMI, since I'm still quite new to all this: the DMI has 8 lanes at PCIe 4.0 speed. Does that mean you can get a theoretical throughput corresponding to 16 lanes at PCIe 3.0 speed if (for example) you're only using gen3 M.2 SSDs via the PCH? In other words: does the chipset "translate" or "bundle" the data stream for the DMI? Or does 8 lanes on the DMI mean a maximum of 8 lanes that the PCH can use for its components at any given time, whether gen3 or gen4?

Edited by eicar
Link to comment
8 hours ago, eicar said:

the DMI has 8 lanes at PCIe 4.0 speed. Does that mean you can get a theoretical throughput corresponding to 16 lanes at PCIe 3.0 speed,

That's my understanding - it's just the throughput, which is around 16 GB/s for PCIe 4.0 x8. It's a lot of bandwidth and you're extremely unlikely to be using every single PCIe device at 100% capacity at the same time.
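A quick back-of-the-envelope check of that equivalence (raw link rate times 128b/130b encoding only; real-world throughput is a bit lower due to protocol overhead):

```bash
awk 'BEGIN {
  g4 = 16e9 * 128/130 / 8;   # bytes/s per PCIe 4.0 lane (~1.97 GB/s)
  g3 =  8e9 * 128/130 / 8;   # bytes/s per PCIe 3.0 lane (~0.98 GB/s)
  printf "PCIe 4.0 x8  : %.1f GB/s\n",  8 * g4 / 1e9;
  printf "PCIe 3.0 x16 : %.1f GB/s\n", 16 * g3 / 1e9;
}'
```

Both come out to roughly 15.8 GB/s, so the x8 DMI link really does match sixteen gen3 lanes in aggregate bandwidth.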

 

Having said that, note that on the Pro WS W680-ACE IPMI, the two PCIe 5.0 x16 and the first M.2 slot are connected directly to the processor, so they're not subject to the DMI speed limit. You can look at the "Expansion Slots" and "Storage" sections on the specs page: https://www.asus.com/us/motherboards-components/motherboards/workstation/pro-ws-w680-ace/techspec/ . They have headers labelled "Intel® 13th & 12th Gen Processors" and "Intel® W680 Chipset" so you can tell how the slots are connected. Unfortunately they don't provide block diagrams, but this is good enough.
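You can also see how things are actually wired on a running system: anything hanging directly off the root complex is CPU-attached, while devices behind the chipset's bridges go through the DMI (sketch; lspci ships with Unraid as far as I know):

```bash
# Tree view of the PCIe topology - compare it against the slot list on the specs page
lspci -tv
```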

 

If you're ever worried about PCIe bandwidth, even some older AMD EPYC processors have 128 PCIe 4.0 lanes that all go directly to the processor. :)

  • Like 1
Link to comment

Yes, the CPU direct lanes are x16 at 5.0 + x4 at 4.0 or x8/x8 + x4, which is perfect for a NIC, a storage AIC/HBA, and a gen4 M.2 SSD cache, e.g. for a metadata-only L2ARC. (I'd put the Unraid application data cache on a gen4 PCH M.2 slot… if you really need both, which I'm still not sure about; see my topic here.)

 

It looks like all of the relevant CPUs in Intel's gen 12 & 13 have those 20 lanes, even the G6900 Celeron and the Pentium G7400. (Those were on my list for a more power-efficient build, but the PCH would be inferior, so I'd lack expansion options… and I would in principle be able to reduce power consumption of a 13th gen Core i5 too, maybe even get it down close to the Pentium.)

 

A side-note/question regarding this:

Quote

I'd put the Unraid application cache on a gen4 PCH M.2 slot.

This might be a false memory, but I think I read somewhere that using a cache SSD via the PCH, which also handles the SATA RAID storage pool, can lead to internal bottlenecks if the PCH isn't powerful enough. This would relate to read or write operations within the PCH, internally routed from the RAID pool to a cache drive or vice versa. I don't know if this is true, but if it is, it would probably be wiser for me to use additional cache drives via the second CPU-direct x8 slot with a carrier card or Slim SAS HBA.

 

PS: yes, I'd love to have those diagrams too… SuperMicro does a great job in this respect.

Link to comment
7 hours ago, eicar said:

e.g. for a metadata-only L2ARC

 

What is a "metadata-only L2ARC"? I'm very new to ZFS so I'm still learning - my only knowledge of it is what is shown in the Unraid UI when setting up a pool. My current configuration is that I have two 20TB hard drives in a ZFS mirror, two 2TB NVMe drives in a ZFS mirror, 64GB RAM and 32GB of it dedicated to ZFS.

Edited by Daniel15
Link to comment
On 9/5/2023 at 4:09 PM, Daniel15 said:

These settings are labelled something different in the Asus BIOS, but I can't remember off the top of my head.

I'm in the BIOS today and PL1 is labelled as "Long Duration Package Power Limit" and PL2 is labelled as "Short Duration Package Power Limit". Both are under AI Tweaker -> Internal CPU Power Management.

Link to comment
1 hour ago, Daniel15 said:

 

What is a "metadata-only L2ARC"? I'm very new to ZFS so I'm still learning - my only knowledge of it is what is shown in the Unraid UI when setting up a pool. My current configuration is that I have two 20TB hard drives in a ZFS mirror, two 2TB NVMe drives in a ZFS mirror, 64GB RAM and 32GB of it dedicated to ZFS.

I'm still learning, too. Off the top of my head:

The L2ARC (level 2 adaptive replacement cache) is kind of an extension of the default ARC, which is the ZFS read cache that resides in memory; in your case that's the 32GB, I assume. (Basically it holds copies of the most frequently accessed data from your storage pool or any other Unraid volume/pool.) Obviously, for an L2ARC to make any sense, it needs to be significantly faster than your main storage pool (and have better random read IOPS). A gen4 M.2 SSD in front of even a SATA SSD RAID should definitely suffice; a gen3 M.2 SSD would probably only be worthwhile in front of a SATA HDD RAID. (You'd have to check whether gen3 read speeds actually exceed your storage pool's read speeds.)

The size of the L2ARC should be at least 5 times as big as the maximum that your system will allocate to the standard ARC. Afaik the latter is currently capped at 50% of available RAM on Linux, which is a bummer, so it's not really as "adaptive" as on *BSD, e.g. on TrueNAS Core, but this might change in a future Linux/ZFS update. So if you have 128 GB of RAM, the maximum size of your ARC, assuming a later update to account for (let's say) 90% of RAM or more, would be about 115GB, meaning your L2ARC should be about 500GB. Since it's hard to find a fast gen4 SSD with 500GB, a 1TB M.2 SSD is probably the right choice.
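If you want to check or change that cap on a running system, it's exposed as a ZFS module parameter (rough sketch; a runtime change like this doesn't survive a reboot, and the 48 GiB value is just an example):

```bash
# Current ARC size and ceiling, in bytes
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# Current zfs_arc_max (0 means "use the built-in default", roughly half of RAM)
cat /sys/module/zfs/parameters/zfs_arc_max

# Example: raise the cap to 48 GiB for this boot
echo $((48 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max
```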

If your ARC is full or sized down as part of its "adaptive" scheme, ARC data is written out to the L2ARC. Then, if a client requests a file, the system first looks in RAM (ARC), then on the M.2 cache SSD (L2ARC), then on the slower storage volumes or other cache volumes.

Nota bene: the internet says that an L2ARC is useless & should be avoided if you have less than 32 GB of RAM.

Whether your system needs an L2ARC is another question: if you're only streaming videos & music in Plex or Emby, you don't need it. The L2ARC is usually meant for random reads of lots of data in the same dataset by many users. Then it can really shine, even if you have loads of RAM.

But in home server systems, it can be useful for metadata and auxiliary data used by applications, e.g. in their UI. This is a setting that can be applied to the L2ARC at initialization, if I remember correctly.
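For reference, on a plain ZFS command line this is just a cache vdev plus the secondarycache property (sketch; the pool and device names are placeholders, and on Unraid you'd currently have to do this outside the UI):

```bash
# Attach an NVMe device to an existing pool as an L2ARC (cache) vdev
zpool add tank cache /dev/nvme0n1

# Keep only metadata in the L2ARC for this pool (can also be set per dataset)
zfs set secondarycache=metadata tank

# Watch how the cache device fills up and gets hit
zpool iostat -v tank 5
```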

As for Unraid, the L2ARC isn't really implemented yet, let alone the metadata-only feature. (To my knowledge there is a workaround to get the L2ARC working in its basic form.) But it seems it will be implemented at some point. I still think it could be a nice companion to an M.2 SSD that's used by Unraid as a cache for application data, e.g. for Docker containers, because the L2ARC would cover the data on those cache volumes too.

Regarding the Asus board: it definitely has enough gen4 M.2 slots to use for several cache or scratch drives. Almost like a dream come true. 😉 EDIT: I would put the L2ARC in the Asus board's CPU-direct gen4 M.2 slot, because it's a deep cache for the file system, different from e.g. Unraid's app data cache, which is (to my knowledge) more like a software/library cache. I'd rather not have the system access the L2ARC through the PCH.

Edited by eicar
  • Like 1
Link to comment

Thanks for the info @eicar. Very useful! It sounds like I don't really need an L2ARC since I'm happy with the performance of the hard drives. I'm actually moving from a storage VPS "in the cloud", where dozens of people share the same hardware, so even just the hard drives with no caching are quite a bit faster than what I'm used to.

 

I've got the MicroATX version of the Asus board, so I've only got two M.2 slots. I definitely want to have two drives in a mirror just in case one of them dies. One of the M.2 slots is directly below the CPU cooler, so it's noticeably warmer (right now one M.2 drive is 46C while the other one is 41C). Both are well within the operating temperatures for the drives I'm using (70C max) but I wonder if they'll wear unevenly as a result of the temperature difference? I didn't think to install the supplied heatsink onto the drive, and now it's too hard to attach it since I'd have to remove the whole CPU cooler to be able to reach it. <_<
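For keeping an eye on both drives, the temperatures and wear counters are in the NVMe SMART data anyway (sketch; the device names are examples):

```bash
# Temperature, percentage used, and media errors for each NVMe drive
for d in /dev/nvme0n1 /dev/nvme1n1; do
    echo "== $d =="
    smartctl -A "$d" | grep -Ei 'temperature:|percentage used|media and data'
done
```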

 

I've currently got a 10Gbps Ethernet card in the single CPU-attached PCIe 5.0 slot, but that only needs PCIe 2.0 x4. I think I can somehow bifurcate the x16 slot so I can plug in two x8 devices (in case I want more NVMe drives in the future for example) but I'm not quite sure how that works.
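For what it's worth, the negotiated link confirms how little of that slot the NIC actually uses (sketch; replace the PCI address with whatever lspci reports for the 10Gb card):

```bash
# Find the NIC's PCI address, then compare the slot capability vs. the negotiated link
lspci | grep -i ethernet
lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
```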

Edited by Daniel15
Link to comment

For the ATX form factor, setting the bifurcation manually is possible in the BIOS: x16 -> x8/x8, set to auto by default. Maybe manual bifurcation is possible on the mATX form factor too? Then you could just add a passive x16 to dual x8 PCIe splitter/riser (with a cable, so you can place it next to the board). If the x16 slot on the mATX can't be bifurcated, however, this would only be possible with an active splitter, and those cost a lot of money. 😕

Edited by eicar
Link to comment
3 minutes ago, eicar said:

For the ATX form factor, setting the bifurcation manually is possible in the BIOS: x16 -> x8/x8, set to auto by default. Maybe manual bifurcation is possible on the mATX form factor too? Then you could just add a passive x16 to dual x8 PCIe splitter/riser (with a cable, so you can place it next to the board). If the x16 slot on the mATX can't be bifurcated, however, this would only be possible with an active splitter, and those cost a lot of money. 😕

I do see the bifurcation option in the BIOS!

What I don't understand is how I'd connect the network card to a riser and still have the port available at the back of the computer. Wouldn't the riser mean the card is oriented the wrong way?

  • Thanks 1
Link to comment
22 minutes ago, eicar said:

Found it… this is the one I was looking at for my potential low-power build using either mATX or Deep Mini-ITX:

 

https://c-payne.com/products/pcie-gen4-gen5-bifurcation-adpater-fpc-cable-x8x8-1w-1u-55mm

But when looking at the board view of the Pro WS W680M-ACE SE, the x16 slot isn't at the edge of the board, and a card in the x4 slot might be in the way, i.e. you might need an additional PCIe 4.0 x16 extension cable. And furthermore, I'm not sure if the splitter/riser by C-Payne has the right orientation. The fixing plate with the two x8 slots might end up flipped upside down, so I don't know if this specific splitter would work. Maybe you need two more x16 risers for the right angle. Seems hackier by the minute. 😉

Edited by eicar
Link to comment
4 hours ago, firstTimer said:

Hi guys, yesterday I became the owner of this MB (the full ATX one) and the IPMI card. I just added another GPU to my Unraid server, an old GTX 1050, but now I get a "no signal" error when trying to access it remotely. Has this ever happened to you?

 

This is common across most IPMI systems. The issue is that the IPMI only sees content rendered using its onboard graphics chip - there's physically no way for it to get the video output from the GPU. If the GPU is your primary video output, nothing will be rendered via the IPMI, and thus the remote console will be blank.

 

You need to either configure the BIOS to use the IPMI's display adapter as the default screen, or enable the dual screen mode so that it outputs to both the GPU and the onboard graphics at the same time. I can't remember exactly where this is in the BIOS, but search around and you should find it.

 

Note that if you're intending to use the graphics card for video transcoding, Intel QuickSync will do the same thing with much less power consumption.

Edited by Daniel15
Link to comment

@Daniel15 Thanks for your response; yeah, I guessed that... At the moment I've managed to have the IPMI working at least until the VGA is activated during Unraid bootup. Is it working on your system? Could you check your BIOS maybe? The path is: Advanced -> System Agent (SA) Configuration -> Graphics Configuration

Anyway, what I wanted to achieve is:

  1. full access (not limited like now) to the output through the IPMI
  2. iGPU passed to a Plex container
  3. GTX 1050 passed to a Windows 11 VM

The fact is that if I select either PEG or PCIE Graphics, the white LED on the MB lights up and I can't access the BIOS at all.


 

I mounted the GTX 1050 in the purple PCIe slot; the yellow one is for the IPMI card, of course.

[photo of the motherboard showing the PCIe slots]

Edited by firstTimer
Link to comment

Hi guys,

After a long afternoon of experimenting: if you have a discrete GPU along with the IPMI card and the iGPU, you can follow these steps to use the IPMI card as your main adapter:

Advanced -> System Agent (SA) Configuration -> Graphics Configuration --> Primary Display

You have to set Primary Display to PCIE in order to have the IPMI card act as the "main" display.

Also set IGPU-Multi-Monitor to Enabled

Explanation:

Before rebooting the first time, check that no cables or dummy plugs are attached to either the iGPU or the external GPU.

If you forget this step, one of the two will be set as the primary adapter. When Unraid starts up, it will show the first boot logs in the IPMI remote console window until a GPU driver is loaded; after that, any further output is redirected to either the iGPU or the discrete GPU. You'll still have output locally, but no refresh is sent to the IPMI card, i.e. once Unraid is up and running, any new output won't be displayed there.

Edited by firstTimer
  • Like 1
Link to comment
