UnRAID on Asus Pro WS W680-ACE IPMI



On 12/9/2023 at 3:12 PM, Omid said:

Resolved!

 

CSM and legacy mode PCIe did it.

Will post again later...

Just tested it out on my system with my 9207-8i in the top PCI-E x16 slot, and I didn't have to mess with CSM; just disabling fast boot was enough, and I can see all drives in unRAID.

 

Wonder why it's different for you?

Link to comment
On 9/13/2023 at 4:03 PM, firstTimer said:

Hi guys,

After a long afternoon of experimenting: if you have a discrete GPU along with the IPMI card and iGPU, you can follow these steps to use the IPMI card as your main adapter:

Advanced -> System Agent (SA) Configuration -> Graphics Configuration -> Primary Display

You have to set Primary Display to PCIE in order to have the IPMI card act as the "main" display.

Also set iGPU Multi-Monitor to Enabled.

Explanation:

Before rebooting the first time, check that no cables or dummy plugs are attached to either the iGPU or the discrete GPU.

If you forget this step, one of the two will be set as the primary adapter. When Unraid starts up, it will output the first boot logs (in the remote window of the IPMI software) until a GPU driver is loaded; after that, any further output is redirected to either the iGPU or the discrete GPU. You still get output on that GPU, but no refresh is sent to the IPMI card, e.g. once Unraid is up and running, any new output won't be displayed remotely.

So immediately after saving the BIOS change of Primary Display to PCIE, unplug all the display cables?

Link to comment
On 12/10/2023 at 7:02 PM, JimmyGerms said:

Just tested it out on my system with my 9207-8i in the top PCI-E x16 slot, and I didn't have to mess with CSM; just disabling fast boot was enough, and I can see all drives in unRAID.

 

Wonder why it's different for you?

 

I had to disable fast boot and add the 6-pin PCIe supplemental power, and then had no problems.

Link to comment

  

On 12/9/2023 at 9:06 PM, JimmyGerms said:

 

I'll hook up my LSI 9207-8i HBA a little later today and see if I get any abnormalities.

 

For now though, did you disable "fast boot" in the BIOS?

Also, try enabling CSM. I can't remember whether this helped on my current server or not. I need to play around and see.

On 12/11/2023 at 3:02 AM, JimmyGerms said:

Just tested it out on my system with my 9207-8i in the top PCI-E x16 slot, and I didn't have to mess with CSM; just disabling fast boot was enough, and I can see all drives in unRAID.

 

Wonder why it's different for you?

@JimmyGerms Thank you so much for trying this out for me. I'm super confused by the different result, but I appreciate the effort nonetheless.

 

On 12/9/2023 at 10:12 PM, maydaytek said:

Similar issue for me with the LSI 9205-8i.

 

If I disable "fast boot" it will boot fine from the USB drive but the HBA card won't be detected.

 

My next steps to try are:

- Install a discrete GPU to get CSM enabled. Source: https://www.asus.com/us/support/FAQ/1045467

- Enable Interrupt 19 Capture in the BIOS. Source: https://hardforum.com/threads/hba-cards-for-modern-motherboards-and-os.2030137/

 

For now, if I go into the BIOS and manually pick the USB drive, it will boot even with Fast Boot enabled, and strangely enough it then also picks up my HBA card.

I already had Fast Boot disabled 🙂 but thanks for the inputs, @maydaytek! I've actually shared all of my BIOS changes in an earlier post on this thread (in case anyone is interested).

 

On 12/9/2023 at 11:15 PM, maydaytek said:

Did you have to throw a GPU in to get to CSM? It's greyed out for me on the iGPU, so I wanted to confirm.

 

Nope, I've never put a dGPU in this build. Only using the 13700K's iGPU.

 

On 12/19/2023 at 5:22 PM, jlarmstr said:

 

I had to disable fast boot and add the 6-pin PCIe supplemental power, and then had no problems.

@jlarmstr Wait, what? Supplemental power for the 9207-8i?

 

Edited by Omid
Link to comment
On 12/9/2023 at 11:12 PM, Omid said:

Resolved!

 

CSM and legacy mode PCIe did it.

Will post again later...

 

Before:

[screenshot]

After:

[screenshot]

This was the solution and I got everything up and running back then, but I learned of a problem after a few days that was difficult to notice at first...

 

My iGPU isn't detected/active/working anymore. I haven't rebooted, checked BIOS settings, or tried anything else yet, but I know it's not working because the GPU Statistics plugin now just shows "Vendor command returned no data." and the output from the intel_gpu_top command aligns with this.

 

root@unRAID:~# intel_gpu_top
No device filter specified and no discrete/integrated i915 devices found

root@unRAID:~# ls -ltrah /dev/dri/
total 0
drwxrwxrwx  2 root root      60 Dec 16 02:15 by-path/
drwxrwxrwx  3 root root      80 Dec 16 02:15 ./
crwxrwxrwx  1 root video 226, 0 Dec 16 02:15 card0
drwxr-xr-x 16 root root    4.6K Dec 17 14:37 ../

 

I know it was working for many months until I installed the LSI HBA card, so it's either the presence of the card or enabling CSM (or both?).
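
One sanity check worth running before the reboot experiments (a sketch using standard Linux tools, nothing Unraid-specific): the card0 above might not even be the iGPU, since the IPMI expansion card exposes its own ASPEED VGA device. Listing the PCI devices and the i915 kernel messages shows whether the iGPU is still enumerated at all:

root@unRAID:~# lspci -nn | grep -iE 'vga|display'   # the iGPU should appear as an Intel device; the IPMI card shows up as ASPEED
root@unRAID:~# dmesg | grep -i i915                 # no output here means the i915 driver never bound to anything

If the Intel entry is missing from lspci entirely, the BIOS change (CSM/primary display) has disabled the iGPU, and no amount of cable shuffling on the Unraid side will bring /dev/dri back.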

 

I'm going to try the following:

  1. Just a simple reboot (not expecting this to fix it)
  2. Unplug VGA cable from the IPMI card and reboot
  3. Plug something in to the motherboard's VGA/HDMI port and reboot
  4. Play with BIOS settings (?)

Any thoughts/ideas from people here?

 

Edited by Omid
Link to comment
On 12/12/2023 at 3:36 AM, ParkerFlyGuy said:

So immediately after saving the BIOS change of Primary Display to PCIE, unplug all the display cables?

 

On 12/26/2023 at 6:01 PM, Omid said:

 

My iGPU isn't detected/active/working anymore. [...] I know it was working for many months until I installed the LSI HBA card, so it's either the presence of the card or enabling CSM (or both?). Any thoughts/ideas from people here?

 

 

@Omid Does your HBA show up under the Advanced tab in the BIOS? Try the following (use the remote window of the IPMI software for easy monitoring at reboot):

[screenshots of the relevant BIOS screens]

Before "Save configuration and reboot", remove any connection to all the graphics cards. Let Unraid boot, and plug the VGA into the IPMI card after Unraid has started; this solved the iGPU issue for me.

[screenshot]

Edited by Zerax
Link to comment

Hi guys, happy new year everyone. Does anyone have a similar issue to mine? Basically, when I start the NAS, if I don't enter the BIOS, the USB key (Unraid) does not boot (from the IPMI I see only a black screen). If I enter the BIOS, even if I don't change anything and just exit, Unraid starts as normal. Any idea?

Link to comment
6 hours ago, firstTimer said:

Hi guys, happy new year everyone. Does anyone have a similar issue to mine? Basically, when I start the NAS, if I don't enter the BIOS, the USB key (Unraid) does not boot (from the IPMI I see only a black screen). If I enter the BIOS, even if I don't change anything and just exit, Unraid starts as normal. Any idea?

 

That's caused by "fast boot". Disable that option and it'll work fine. For some reason, fast boot breaks booting from USB. This is a common problem across many different motherboards. 

  • Thanks 1
  • Upvote 1
Link to comment

For anyone interested: I just updated the Intel ME from 16.1.25.2020 to 16.1.30.2307 and the BIOS from 2305 to 3101 because of the LogoFAIL exploit. I had to reset the IPMI DHCP settings (they defaulted to static with no info), completely remove power to the server, and reboot for the card to be recognized. But everything has been running solid for a couple of days.

Link to comment

Hi,

 

Can you say something about the power consumption (idle mode) of this board? I'm planning my first Unraid server with two possible builds: one with ECC support (ASUS W680) and the other without ECC support (Gigabyte B760M).

 

kind regards

Link to comment
On 1/5/2024 at 7:45 AM, Sofcso said:

For anyone interested: I just updated the Intel ME from 16.1.25.2020 to 16.1.30.2307 and the BIOS from 2305 to 3101 because of the LogoFAIL exploit. I had to reset the IPMI DHCP settings (they defaulted to static with no info), completely remove power to the server, and reboot for the card to be recognized. But everything has been running solid for a couple of days.

 

@Sofcso How did you update the Intel ME version? Last time I had to boot into bare-metal Windows. Do you have another update method?

Link to comment
On 1/1/2024 at 6:02 AM, firstTimer said:

Hi guys, happy new year everyone. Does anyone have a similar issue to mine? Basically, when I start the NAS, if I don't enter the BIOS, the USB key (Unraid) does not boot (from the IPMI I see only a black screen). If I enter the BIOS, even if I don't change anything and just exit, Unraid starts as normal. Any idea?

 

I'll join you in the "Unraid drive won't boot" boat. I just completed a major server upgrade using this motherboard, and this is the only remaining hardware issue I have. My experience is similar to yours, @firstTimer, but I'll add that mine is more intermittent, and it's getting worse. When I first booted after assembly, the drive booted without any issue. But as I've been adjusting BIOS settings (based on previous posts in this thread, for example) and updating firmware, the issue seems to have gotten worse. It's been intermittent and gradual enough that I can't pinpoint what I did to break things.

 

But at this point, after a cold boot, it almost never boots the Unraid drive and instead falls back to the BIOS. I then need to cycle anywhere between roughly 1 and 10 times before the drive will properly boot. I think soft resets are better than hard resets (CTRL+ALT+DEL vs. the reset switch), but again, I can't be sure. "Discard and Exit" actions from the BIOS almost never work.

 

One trick that often works for me is to try and catch the boot menu, rather than letting it fall into the BIOS on a cold start or power cycle. This took me a little while to figure out, but F8 works on the logo screen to pull up the boot menu. Sometimes the Unraid disk will be in the menu and you can bypass the BIOS.

 

In terms of "things I've tried": I've fiddled quite a bit with the CSM (Compatibility Support Module). Turning this on gets Unraid to boot pretty reliably (maybe 100% of the time), but always in legacy mode, and I need UEFI for GPU pass-through, so it's a no-go for me. I've also fiddled with plenty of boot settings in the BIOS, such as extending the delay on the logo screen, turning USB Legacy Support on/off, and enabling/disabling Fast Boot (thanks @Daniel15 for your advice here; I know others in the thread have suggested this to make Unraid boot, but it's not working for me, unfortunately). I've also tried every USB port on the motherboard without any change in behavior. (See the quick check below for confirming which mode a given boot actually used.)
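
For anyone comparing CSM/legacy vs. UEFI behavior, this is the check I mean; it's standard Linux sysfs, nothing Unraid-specific (a sketch, assuming only that Unraid's kernel exposes the usual path, which any modern kernel does):

root@unRAID:~# [ -d /sys/firmware/efi ] && echo "booted via UEFI" || echo "booted legacy/CSM"   # the efi directory only exists when the kernel was started by UEFI firmware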

 

Let's compare notes a bit more on setups. For example, what USB drive are you using, @firstTimer? Maybe it's specific to the drive. I'm using a SanDisk Cruzer Blade 64GB (I started with an 8GB, but purchased a replacement and migrated as a troubleshooting step because I was worried the drive was failing). What BIOS version are you on? I've updated to the latest (3101).

Link to comment
18 minutes ago, FirbyKirby said:

The array is down and thus docker and VM services down as well. Nothing else significant is running.

 

Hi,

 

Thanks for the information! The utilization (9%, 88W) seems very high for idle mode (array, docker and VM services down ...).

 

Do you have any other PCIe cards installed (LSI controller, GPU card ...)?

Edited by Neo78
Link to comment

I have 63W (at 7%, with a T-series i9 CPU), but then I run two enterprise SSDs for cache (U.2 NVMe), 3x PCIe sticks (idle at this point), and 4x 20TB WD enterprise HDDs (powered down right now). No GPU, as I rely on the one in the CPU for my Plex needs.

[screenshot]

Edited by NAS-newbie
  • Thanks 1
Link to comment
2 hours ago, NAS-newbie said:

I have 63W (at 7%, with a T-series i9 CPU), but then I run two enterprise SSDs for cache (U.2 NVMe), 3x PCIe sticks (idle at this point), and 4x 20TB WD enterprise HDDs (powered down right now). No GPU, as I rely on the one in the CPU for my Plex needs.

Thanks for the information! It seems this mainboard consumes more power than others (for example, boards with the B760 chipset).

 

Link to comment
2 hours ago, Neo78 said:

It seems this mainboard consumes more power than others (for example, boards with the B760 chipset).

Yeah, and in general, if you're after idle efficiency, then stay away from any W680 board, not just the one discussed in this thread.

Link to comment
15 hours ago, Lolight said:

Yeah, and in general, if you're after idle efficiency, then stay away from any W680 board, not just the one discussed in this thread.

Not that I disagree, if power consumption is the primary objective, but apparently STH got their X13SAE to idle at 31W. It leads me to wonder if the difference observed here is partly due to the configurations as well.

Link to comment
1 hour ago, golfsands7 said:

Not that I disagree, if power consumption is the primary objective, but apparently STH got their X13SAE to idle at 31W. It leads me to wonder if the difference observed here is partly due to the configurations as well.

Another user (on a German forum) reported an idle of 25.1W, which is pretty good.

 

  • Upvote 1
Link to comment
1 hour ago, Neo78 said:

Another user (on a German forum) reported an idle of 25.1W, which is pretty good.

 

 

They're using a 400W PSU, which would help a bit. PSUs are pretty inefficient if you're only using a very small percentage of their capacity. 80 Plus Gold requires 87% efficiency at 20% load, but has no requirements at lower loads, so it's not uncommon to see only 50-60% efficiency at 5-10% load.
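
To put rough, illustrative numbers on it (assuming ~60% efficiency at very low load, which is plausible but not guaranteed for any specific unit): a build drawing 30W on the DC side from a 550W PSU sits at about 5.5% load, so the wall draw would be around 30 / 0.60 = 50W. If the same PSU managed its 87% rated efficiency down there, it would draw 30 / 0.87 ≈ 34.5W, so the oversizing alone can cost on the order of 15W at idle.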

 

Properly size your PSU based on how much power you expect to use. They perform best around 50-60% load. 

 

I've got a similar PSU (Be Quiet Pure Power 12 M) but the lowest wattage I could find here was 550W, which is really oversized for what I need.

 

I'll try to get some power measurements when I get some free time. Right now the CPU isn't properly entering idle states (powertop shows 0% for all the idle states) even though I have everything configured properly in the BIOS, so I have to figure that out too. I think my 10Gbps network card is causing issues with idle states.
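
For anyone wanting to check the same thing, these are the standard tools I'm referring to (the paths are the usual Linux sysfs ones; I'm assuming Unraid exposes them, which any recent kernel should):

root@unRAID:~# powertop                                               # the "Idle stats" tab shows per-C-state residency; all zeros below C0 means the CPU never idles
root@unRAID:~# cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name   # lists the C-states the kernel exposes for core 0

If the package never reaches the deeper states, a PCIe device without working ASPM (a 10Gbps NIC is a classic culprit) can hold the whole package at a shallow state.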

 

Edited by Daniel15
Link to comment
