Intel Socket 1151 Motherboards with IPMI AND Support for iGPU



On 8/10/2021 at 3:14 PM, kaiguy said:

I've been running my mobo with turbo disabled since March. Haven't had a single CPU_CATERR or mobo temperature warning since (which I was getting pretty frequently in the 6.9.x branch). Upgraded to 6.10.0-rc1 this morning and re-enabled turbo. Within an hour I got the mobo temp warning from IPMI for hitting 84 degrees. No CPU_CATERR yet... Rebooted and the temp warning happened not too long after.

 

Are people still consistently getting this erroneous motherboard temperature reading?

hey @kaiguy and others, how goes it on 6.10.0-rc1?

 

Also, for clarity: are you toggling turbo boost via the BIOS, or the Tips and Tweaks plugin?

 

I am having mysterious lockups as well, with turbo boost disabled via Tips and Tweaks. They happen once every 12-24 hours or so. I don't have CPU_CATERR in my IPMI log, but I did have a syslog readout sort of similar to yours. No motherboard temp issues.
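
(For reference, whether the plugin actually applied the setting can be verified from the Unraid console; a minimal sketch, assuming the intel_pstate driver is in use:)

# 1 = turbo disabled, 0 = turbo enabled (intel_pstate driver only)
cat /sys/devices/system/cpu/intel_pstate/no_turbo

# disable turbo by hand until the next reboot
echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo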

 

Running an E-2126G on an E3C246D4U with the L2.34 firmware and BMC version 1.80.

Link to comment
51 minutes ago, AlexHuang said:

hey @kaiguy and others, how goes it on 6.10.0-rc1?

 

Also, for clarity: are you toggling turbo boost via the BIOS, or the Tips and Tweaks plugin?

 

I am having mysterious lockups as well, with turbo boost disabled via Tips and Tweaks. They happen once every 12-24 hours or so. I don't have CPU_CATERR in my IPMI log, but I did have a syslog readout sort of similar to yours. No motherboard temp issues.

 

Running an E-2126G on an E3C246D4U with the L2.34 firmware and BMC version 1.80.

@AlexHuang still running with Turbo disabled via the Tips and Tweaks plugin. I do always get the mobo temp error, so I just raised the threshold for notifications via the IPMI plugin (it's showing 87 degrees right now, which is false).
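
(For anyone wanting to raise the threshold in the BMC itself rather than just in the plugin's notifications, ipmitool can do it; a minimal sketch, where "MB Temp" is an assumed sensor name, so check ipmitool sensor list for the real one:)

# find the exact name of the motherboard temperature sensor
ipmitool sensor list | grep -i temp

# raise the upper non-critical / critical / non-recoverable thresholds
ipmitool sensor thresh "MB Temp" upper 90 95 100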

 

I also switched the Docker custom network type to ipvlan, with host access to custom networks disabled.

 

No lockups with these settings, and no CPU_CATERR (which is absolutely due to turbo boost on my setup).

Link to comment
On 10/24/2021 at 1:34 PM, AlexHuang said:

Also, for clarity: are you toggling turbo boost via the BIOS, or the Tips and Tweaks plugin?

 

I am having mysterious lockups as well, with turbo boost disabled via Tips and Tweaks. They happen once every 12-24 hours or so. I don't have CPU_CATERR in my IPMI log, but I did have a syslog readout sort of similar to yours. No motherboard temp issues.

I can echo everything @kaiguy said. Turbo Boost disabled through Tips and Tweaks, MB reporting a bogus 88C temp in my case, Docker custom network type set to ipvlan.

 

I have had zero CPU_CATERR lockups since disabling Turbo Boost. I have not tried enabling it since moving to 6.10.0-RC1 a month or so ago, but I am fairly certain I would again get CPU_CATERR lockups if I did, as this appears to be related to Turbo Boost on this board and has nothing to do with the unRAID/Linux kernel version.

 

I had been experiencing server lockups every 5-15 days since July; however, I now think that was due to a flaky PSU. I swapped out the PSU 17 days ago and the server has been running without issue since then.

Edited by Hoopster
Link to comment
1 hour ago, Hoopster said:

I can echo everything @kaiguy said. Turbo Boost disabled through Tips and Tweaks, MB reporting a bogus 88C temp in my case, Docker custom network type set to ipvlan.

 

I have had zero CPU_CATERR lockups since disabling Turbo Boost. I have not tried enabling it since moving to 6.10.0-RC1 a month or so ago, but I am fairly certain I would again get CPU_CATERR lockups if I did, as this appears to be related to Turbo Boost on this board and has nothing to do with the unRAID/Linux kernel version.

 

I had been experiencing server lockups every 5-15 days since July; however, I now think that was due to a flaky PSU. I swapped out the PSU 17 days ago and the server has been running without issue since then.

 

2 hours ago, kaiguy said:

@AlexHuang still running with Turbo disabled via the Tips and Tweaks plugin. I do always get the mobo temp error, so I just raised the threshold for notifications via the IPMI plugin (it's showing 87 degrees right now, which is false).

 

I also switched the Docker custom network type to ipvlan, with host access to custom networks disabled.

 

No lockups with these settings, and no CPU_CATERR (which is absolutely due to turbo boost on my setup).

 

Thanks for the details. I will try to replicate the settings as much as possible. I've reset the BIOS and the IPMI to factory defaults (minus the GPU setting) and removed all non-critical hardware.

 

Pardon my lack of insight, but what is the advantage or concern regarding macvlan vs ipvlan? Mine is set to the former, but I can switch that over.

 

My PSU isn't the best (a Silver-rated 750W), but it never raised concerns before this (it powered 2-3 other systems), though I can swap it if need be.

 

Fingers crossed!

 

Link to comment
On 10/24/2021 at 5:11 PM, AlexHuang said:

Pardon my lack of insight, but what is the advantage or concern regarding macvlan vs ipvlan? Mine is set to the former, but I can switch that over.

Many people get call trace errors with macvlan, which eventually lock up the server. This usually occurs when you have docker containers to which custom IP addresses have been assigned on br0. The solution for me and many others was to create a VLAN on which docker containers get their IP addresses assigned. In my case, that is br0.3. This eliminated the macvlan call traces.

 

The new ipvlan custom network type in 6.10.0-rc1 is another potential solution to this problem. In one of my server crashes, I did get a call trace that appeared to involve macvlan (among several other things), so I went with ipvlan. It did not solve my crashes, which now appear to have been PSU-related, although I am still testing that theory.
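
(To make the distinction concrete, both drivers can be created by hand with the docker CLI; a minimal sketch, with the subnet, gateway, and parent interface as placeholders for your own network:)

# macvlan: every container gets its own MAC address on the parent NIC
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 macvlan_net

# ipvlan: containers share the parent NIC's MAC address, which avoids the
# macvlan code path that produces the call traces on affected setups
docker network create -d ipvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 ipvlan_net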

Edited by Hoopster
Link to comment
On 10/25/2021 at 10:37 AM, Hoopster said:

Many people get call trace errors with macvlan, which eventually lock up the server. This usually occurs when you have docker containers to which custom IP addresses have been assigned on br0. The solution for me and many others was to create a VLAN on which docker containers get their IP addresses assigned. In my case, that is br0.3. This eliminated the macvlan call traces.

 

The new ipvlan custom network type in 6.10.0-rc1 is another potential solution to this problem. In one of my server crashes, I did get a call trace that appeared to involve macvlan (among several other things), so I went with ipvlan. It did not solve my crashes, which now appear to have been PSU-related, although I am still testing that theory.

Whoa, I think you may have diagnosed my issue. I'm so glad I asked for clarification (though apologies, I should have done some legwork and found the post you linked first; I just read it over). Truthfully, I've made so many changes, both to diagnose/resolve my locking issue and just to add functionality to my UnRAID server, that it was a bit of a nightmare to diagnose. Just to close (hopefully) this topic and maybe help others:

  • In early September 2021, I moved from a Supermicro X10 + Xeon 2600 v3 platform to the ASRock E3C246D4U + Xeon E-2126G setup I have now. I did upgrade the BIOS to L2.34 at that point, but had no immediate issues with lockups. Unraid was 6.9.3.
  • In late September 2021, I installed the FileBrowser docker (testing out a potential Krusader alternative); it was and is the only docker using br0.
  • From late September to now, I was experiencing random lockups, first every few days, then eventually daily.
    • At this point, I started down the CPU_CATERR / "is my BIOS version doing this?" / "is it some hardware failure?" rabbit hole.
    • My review of the syslogs did show this call trace activity, but I thought very little of it since processes still seemed to run after these log entries, often for several hours before I would notice a lockup. It didn't help that most lockups seemed to happen overnight, or while I was at work.
  • This past weekend, I went into full "I need to fix this" mode after dealing with my kid wanting to watch Frozen II and my wife wanting to catch up on some old series in our watchlist. I then:
    • Posted on this thread
    • Gutted my UnRAID server, including: converting my mirrored cache pools to XFS out of fear of some newly developed BTRFS issues (dismantling my tiered cache setup in the process), removing a SAS expander, disabling Turbo Boost despite never having seen a CPU_CATERR message, and removing a host of plugins I had installed over the last month
    • Factory reset the BIOS and IPMI, and set my docker network to ipvlan

Presently, my system has been up and running for 1.5 days, FileBrowser remains installed and functional, and a scan of my syslog reveals no call trace errors over this period. Like you with your PSU situation, I consider this all still in testing, but if stability persists over the week/weekend I'll slowly rebuild my cache pools and reinstall my plugins.

 

Fingers crossed, and thanks!!!

Link to comment

Oddly enough, I woke up to all my Docker containers shut down and a bunch of call traces in my log (likely due to an appdata backup that crashed). Unraid wouldn't let me stop the array (it said the mover was active, but it was not--super odd), so I initiated a shutdown, and it hung along the way. The HTML5 IPMI interface wouldn't actually let me log in (it just returned to the login screen). I had to hard power down the server. I should have pulled the plug as well, since IPMI still isn't working, but I'm back up and running.
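
(For reference, a generic way to double-check whether the mover really is still running, independent of what the GUI claims; just a shell sketch:)

# the bracketed first letter keeps grep from matching its own process
ps aux | grep -i "[m]over"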

 

I did recently re-add a br0 container about a month ago, but it all seemed stable until this morning. So strange. Looks like I'm going into troubleshooting mode here as well.

Link to comment

Hello everybody. I just stumbled over this thread while researching for my NAS. Would be interested in what you think.

 

I've narrowed the choice for the mainboard down to two:

 

ASRock Rack E3C246D4I-2T

or the brand new

Supermicro X12STL-IF

 

I am not sure what to buy. The Supermicro is a great board, but has very limited CPU options. On the other side is the ASRock, which is a little older but has a great variety of 8th gen consumer and server CPUs to choose from. As far as I could research, both have iGPU support for en/decoding, IPMI, and an M.2 SSD slot. The ASRock has the bonus of 2x 10G LAN, whereas the Supermicro only has 2x 1G LAN.

 

My NAS will be mini-ITX with 4 hot-swap bays. The main purpose will be, besides cloud storage, a Plex server with HW acceleration, some virtualization, and dockers.

I would highly appreciate your opinion.

 

Thanks!

Link to comment

OK, this is a long shot, but this thread seems pretty active so I figured I'd ask here.

I just received a secondhand E3C246D4U and decided to see if I can boot from the BMC via IPMI while I wait for my CPU/RAM to arrive. Supplying power to the motherboard gives me link lights on the IPMI NIC port, as well as a blinking green BMC Health Indicator, but shorting the Power Switch header doesn't seem to do anything (no Dr Health or VGA output). I attached case fans as well, to see if they would spin up, but they did not. It seems like IPMI is getting some sort of IP, but it is not showing up in my router as receiving a DHCP address. My thought is that there are some settings preventing IPMI, which is fine, but my bigger concern is that nothing happens when I try to power it on. Is there a minimum set of hardware needed for this? I assumed I could boot from the BMC/onboard graphics, but perhaps not. Even if the BMC is not set up properly, shouldn't I get SOMETHING when I short the PWR switch header? I will try with CPU/RAM when it arrives, but for now it has me wanting to figure this out. Thanks!
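
(For reference, once the BMC does pull an address, it can be queried remotely; a minimal sketch, where the IP is a placeholder for whatever the BMC leases and admin/admin are the usual ASRock Rack factory defaults:)

# query and control power over the network (lanplus interface)
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power status
ipmitool -I lanplus -H 192.168.1.50 -U admin -P admin chassis power on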

Link to comment

I have a Windows VM with an NVIDIA 3070 passed through to it, and I want to add a 1660 SUPER.

 

When I try to start the VM I get this error:

 

internal error: qemu unexpectedly closed the monitor: 2021-11-02T22:10:45.645683Z qemu-system-x86_64: -device vfio-pci,host=0000:03:00.0,id=hostdev0,bus=pci.0,addr=0x6: vfio 0000:03:00.0: group 17 is not viable Please ensure all devices within the iommu_group are bound to their vfio bus driver.

 

In Tools > System Devices I can see this group (group 17):

 

[screenshot: IOMMU group 17 in Tools > System Devices]

 

And here is all the info from system devices:

 

PCI Devices and IOMMU Groups

IOMMU group 0:				[8086:3ec6] 00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
IOMMU group 1:				[8086:1901] 00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 07)
 	[10de:2484] 01:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070] (rev a1)
 	[10de:228b] 01:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)
IOMMU group 2:			 	[8086:3e96] 00:02.0 Display controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics P630]
IOMMU group 3:			 	[8086:1911] 00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
IOMMU group 4:			 	[8086:a379] 00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
IOMMU group 5:			 	[8086:a36d] 00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0781:5530 SanDisk Corp. Cruzer
Bus 001 Device 003: ID 046b:ff01 American Megatrends, Inc. Virtual Hub
Bus 001 Device 004: ID 046b:ffb0 American Megatrends, Inc. Virtual Ethernet
Bus 001 Device 005: ID 046b:ff10 American Megatrends, Inc. Virtual Keyboard and Mouse
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
 	[8086:a36f] 00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
IOMMU group 6:			 	[8086:a368] 00:15.0 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #0 (rev 10)
 	[8086:a369] 00:15.1 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH Serial IO I2C Controller #1 (rev 10)
IOMMU group 7:			 	[8086:a360] 00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
 	[8086:a361] 00:16.1 Communication controller: Intel Corporation Device a361 (rev 10)
 	[8086:a364] 00:16.4 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller #2 (rev 10)
IOMMU group 8:			 	[8086:a352] 00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
[3:0:0:0]    disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sdb   8.00TB
[4:0:0:0]    disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sdc   8.00TB
[5:0:0:0]    disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sdd   8.00TB
[6:0:0:0]    disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sde   8.00TB
[7:0:0:0]    disk    ATA      CT120BX500SSD1   R013  /dev/sdf    120GB
IOMMU group 9:				[8086:a340] 00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
IOMMU group 10:				[8086:a32c] 00:1b.4 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #21 (rev f0)
IOMMU group 11:				[8086:a338] 00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0)
IOMMU group 12:				[8086:a330] 00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
IOMMU group 13:				[8086:a331] 00:1d.1 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #10 (rev f0)
IOMMU group 14:				[8086:a332] 00:1d.2 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #11 (rev f0)
IOMMU group 15:			 	[8086:a328] 00:1e.0 Communication controller: Intel Corporation Cannon Lake PCH Serial IO UART Host Controller (rev 10)
IOMMU group 16:			 	[8086:a309] 00:1f.0 ISA bridge: Intel Corporation Cannon Point-LP LPC Controller (rev 10)
 	[8086:a323] 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
 	[8086:a324] 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
IOMMU group 17:			 	[10de:21c4] 03:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 SUPER] (rev a1)
 	[10de:1aeb] 03:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
 	[10de:1aec] 03:00.2 USB controller: NVIDIA Corporation TU116 USB 3.1 Host Controller (rev a1)
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
 	[10de:1aed] 03:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU116 USB Type-C UCSI Controller (rev a1)
IOMMU group 18:			 	[144d:a808] 04:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
[N:0:4:1]    disk    Samsung SSD 970 EVO Plus 1TB__1            /dev/nvme0n1  1.00TB
IOMMU group 19:			 	[8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 20:				[1a03:1150] 06:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
 	[1a03:2000] 07:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
IOMMU group 21:			 	[8086:1533] 08:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)



CPU Thread Pairings

Pair 1:	cpu 0 / cpu 6
Pair 2:	cpu 1 / cpu 7
Pair 3:	cpu 2 / cpu 8
Pair 4:	cpu 3 / cpu 9
Pair 5:	cpu 4 / cpu 10
Pair 6:	cpu 5 / cpu 11


USB Devices

Bus 001 Device 001:	ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002:	ID 0781:5530 SanDisk Corp. Cruzer
Bus 001 Device 003:	ID 046b:ff01 American Megatrends, Inc. Virtual Hub
Bus 001 Device 004:	ID 046b:ffb0 American Megatrends, Inc. Virtual Ethernet
Bus 001 Device 005:	ID 046b:ff10 American Megatrends, Inc. Virtual Keyboard and Mouse
Bus 002 Device 001:	ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001:	ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001:	ID 1d6b:0003 Linux Foundation 3.0 root hub


SCSI Devices

[0:0:0:0]	disk    SanDisk  Cruzer           1.26  /dev/sda   8.00GB
[3:0:0:0]	disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sdb   8.00TB
[4:0:0:0]	disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sdc   8.00TB
[5:0:0:0]	disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sdd   8.00TB
[6:0:0:0]	disk    ATA      WDC WD80EZAZ-11T 0A83  /dev/sde   8.00TB
[7:0:0:0]	disk    ATA      CT120BX500SSD1   R013  /dev/sdf    120GB
[N:0:4:1]	disk    Samsung SSD 970 EVO Plus 1TB__1            /dev/nvme0n1  1.00TB

 

What can I do to get the 1660 working?
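
(The error message itself points at the fix: every function in IOMMU group 17, meaning the VGA, audio, USB, and UCSI controllers at 03:00.0 through 03:00.3, has to be bound to vfio-pci, not just the GPU. A minimal sketch of the kernel-parameter approach, using the vendor:device IDs from the listing above; the vfio checkboxes in Tools > System Devices on recent Unraid builds should accomplish the same thing:)

# /boot/syslinux/syslinux.cfg - add the IDs to the append line, then reboot
append vfio-pci.ids=10de:21c4,10de:1aeb,10de:1aec,10de:1aed initrd=/bzroot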

Link to comment
  • 2 weeks later...
On 10/24/2021 at 12:34 PM, AlexHuang said:

hey @kaiguy and others, how goes it on 6.10.0-rc1?

 

Also, for clarity: are you toggling turbo boost via the BIOS, or the Tips and Tweaks plugin?

 

I am having mysterious lockups as well, with turbo boost disabled via Tips and Tweaks. They happen once every 12-24 hours or so. I don't have CPU_CATERR in my IPMI log, but I did have a syslog readout sort of similar to yours. No motherboard temp issues.

 

Running an E-2126G on an E3C246D4U with the L2.34 firmware and BMC version 1.80.

 

 

I have this MB with an E-2246G and haven't had any strange lockups, except for IPMI dying on me until I disabled inventory logging in the BIOS. BMC is 1.80 and I am using the L2.32 BIOS. Docker is macvlan.

Link to comment

I've been getting system lockups on my ASRock E3C246D4U with BIOS L2.34. I found this error and am not sure if it has been there since the BIOS mod for the iGPU was installed. Is anyone else having problems like this, or does anyone know what it is?

 

I ran lspci -vvnn

and it seems related to:

 

02:00.0 PCI bridge [0604]: Pericom Semiconductor PI7C9X2G304 EL/SL PCIe2 3-Port/4-Lane Packet Switch 

LnkCap: Port #0, Speed 5GT/s, Width x2, ASPM not supported

LnkSta: Speed 5GT/s (ok), Width x1 (downgraded)

 

00:1d.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 [8086:a330] (rev f0) (prog-if 00 [Normal decode])

LnkCap: Port #9, Speed 8GT/s, Width x1, ASPM not supported

LnkSta: Speed 2.5GT/s (downgraded), Width x1 (ok)

 

00:1d.1 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #10 [8086:a331] (rev f0) (prog-if 00 [Normal decode])

LnkCap: Port #10, Speed 8GT/s, Width x1, ASPM not supported

LnkSta: Speed 5GT/s (downgraded), Width x1 (ok)

 

00:1d.2 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #11 [8086:a332] (rev f0) (prog-if 00 [Normal decode])

LnkCap: Port #11, Speed 8GT/s, Width x1, ASPM not supported
LnkSta: Speed 2.5GT/s (downgraded), Width x1 (ok)

 

 

 

Anyone have ideas? I might try going back to the standard BIOS and see if that fixes it.
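
(For reference, the link fields above can be pulled in one go; a minimal sketch using the bridge addresses from the output:)

# show capability vs. negotiated link state for each suspect bridge
for dev in 02:00.0 00:1d.0 00:1d.1 00:1d.2; do
  echo "== $dev =="; lspci -s $dev -vvnn | grep -E 'LnkCap|LnkSta'
done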

 

 

 

 


Edited by nitrossub
Link to comment
1 hour ago, nitrossub said:

been getting system lockups on my ASRock E3C246D4U with BIOS L2.34

I am also running BIOS L2.34 and have been getting system lockups after between 2 and 15 days of runtime.  I have tested the RAM and swapped out the PSU.  RAM is good and a PSU swap did not solve the problem.  I do not know if it has anything to do with BIOS 2.34 but the lockups started in July around the time I think I upgraded to this BIOS.

 

I have rolled back to 2.32 just to see if that makes any difference.  I am not convinced the BIOS version has anything to do with it as others have 2.34 installed and are not seeing system lockups.  Just something else to try as the lockups are annoying and a lack of clarity regarding the cause is even more annoying.

Link to comment
1 hour ago, Hoopster said:

I am also running BIOS L2.34 and have been getting system lockups after between 2 and 15 days of runtime.  I have tested the RAM and swapped out the PSU.  RAM is good and a PSU swap did not solve the problem.  I do not know if it has anything to do with BIOS 2.34 but the lockups started in July around the time I think I upgraded to this BIOS.

 

I have rolled back to 2.32 just to see if that makes any difference.  I am not convinced the BIOS version has anything to do with it as others have 2.34 installed and are not seeing system lockups.  Just something else to try as the lockups are annoying and a lack of clarity regarding the cause is even more annoying.

Do you have any IPMI-logged event errors?

Link to comment
1 hour ago, Hoopster said:

I am also running BIOS L2.34 and have been getting system lockups after between 2 and 15 days of runtime.

Running L2.34 here. While I don't have regular lockups, there are times when I will catch some traces in the log and proactively reboot my server, but it's been pretty rare as of late.

 

For the most part, as long as I have turbo boost disabled and ipvlan set for the Docker network, things run pretty stable. I wish I knew what combination of BIOS and Unraid versions/network configs is the root cause of the instability--I do know there was a time when I was running with turbo boost for months without issue. I echo your frustration that there's little consistency with these issues. Take the mobo temp issue, for example--for whatever reason I haven't seen that erroneous reading all month, despite seeing it constantly last month.

Link to comment
49 minutes ago, nitrossub said:

Do you have any IPMI-logged event errors?

Nothing meaningful in the event log or the syslog.  Event log just reports OS Stop/Shutdown or Microcontroller/Coprocessor transition to power off and there is nothing in the syslog before any of the shutdowns that looks suspicious (or even informative).

Link to comment
52 minutes ago, kaiguy said:

For the most part, as long as I have turbo boost disabled and ipvlan set for the Docker network, things run pretty stable

I have both turbo boost disabled and docker network set to ipvlan.  Makes no difference in my case.  Lockups still occur.

 

I still get the high MB temps within hours after every reboot.  It starts out normal (26C to 32C usually) but several hours later it is reporting in the 80s again.

Link to comment
14 minutes ago, Hoopster said:

I have both turbo boost disabled and docker network set to ipvlan.  Makes no difference in my case.  Lockups still occur.

 

I still get the high MB temps within hours after every reboot.  It starts out normal (26C to 32C usually) but several hours later it is reporting in the 80s again.

 

Are you guys all running 6.10?

Link to comment
On 1/30/2021 at 5:06 PM, rmadden80 said:

I purchased the ASRock Rack E3C246D4U and there is a way to enable the iGPU without installing the beta BIOS.  I'm currently running P2.30 with iGPU enabled.

 

There is a key combination you need to press when booting your system. After powering on, the boot splash screen will display the ASRock Rack image and the message “Updating FRU system devices”. When you see "Updating FRU system devices", press Ctrl+Alt+F3 and it will load the BIOS menu. In the BIOS menu, you will see an additional page labeled IntelRC Chipset. Select System Agent (SA) Configuration, then Graphics Configuration, and then Enable IGPU Multi-Monitor.

I totally missed this comment. I just went back to P2.30 and am going to try this. Maybe this will fix some other people's problems with running the custom firmware. I didn't realize it was possible to get the iGPU working on the factory P2.30 firmware. I can confirm this works: the option to turn it on appears in the BIOS, and the iGPU works in Plex.
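
(A quick way to confirm the iGPU actually enumerated after flipping that BIOS option; a minimal sketch from the Unraid console:)

# the UHD P630 should show up alongside the ASPEED BMC VGA
lspci -nn | grep -iE 'vga|display'

# and the i915 driver should have created a render node for Plex to use
ls -l /dev/dri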

Link to comment
On 11/18/2021 at 2:57 AM, nitrossub said:

been getting system lockups on my ASRock E3C246D4U with BIOS L2.34

 

 

After submitting a ticket to ASRock, I got a reply with a link to an FTP server for BIOS L2.35. Going to try that and see how it goes.

Link to comment
