
About kaiguy


  1. Right. As I mentioned, I originally did have containers (unifi, adguard) with fixed IP addresses, but I changed those network settings in the process of troubleshooting. Through further troubleshooting I then realized that host access to custom networks caused the trace even without those fixed IP addresses (and it's easily and quickly repeatable), so that's what I've been focusing on. Here you go! Thanks.
  2. Sure. Attached. Since I started experiencing these issues, I removed unifi, homebridge, and adguard from service, but I didn't fully delete the containers in case I want to bring them back in the future (unifi and adguard were originally assigned static IPs, but I removed even that from these unused container configs). When you say special network setup, what do you mean? Aside from using a defined network for containers, I don't believe I've made any other changes. Attaching a network config screenshot as well.
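     For context, a static address on a custom Docker network is just the `--ip` flag against a user-defined network. A minimal sketch of what those container configs amounted to under the hood (the container name, image, and address are placeholders, not the actual configs from this thread):

     ```shell
     # Hypothetical example: pin a container to a fixed IP on the custom
     # br0 network. Subnet and image are placeholder assumptions.
     docker run -d --name adguard \
       --network br0 --ip 192.168.1.53 \
       adguard/adguardhome
     ```

     This is a configuration sketch only; on Unraid the same thing is normally done through the container template's network fields rather than the CLI.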
  3. Unfortunately it looks like the macvlan trace happened for me immediately after I enabled host access to custom networks. I posted diagnostics in the appropriate bug report thread.
  4. Mirrored syslog to flash, enabled host access to custom networks. Within a few minutes I got a call trace, but not a hard hang (expected). I went ahead and captured a diagnostics, turned off that setting again, and rebooted (as experience shows that it will ultimately hang within hours or days once I see this call trace). Hope this helps. Happy to try other procedures to aid the cause.
     Apr 8 12:35:55 titan kernel: ------------[ cut here ]------------
     Apr 8 12:35:55 titan kernel: WARNING: CPU: 1 PID: 20324 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_conf
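     A quick way to check a syslog copy for these traces is to grep for the conntrack warning. A minimal sketch; the `/boot/logs/syslog` default is an assumption about where the mirror-to-flash copy lands, so override `SYSLOG` to match your setup:

     ```shell
     # Report whether the macvlan/conntrack call trace appears in a
     # syslog file. SYSLOG defaults to an assumed mirror-to-flash path.
     SYSLOG="${SYSLOG:-/boot/logs/syslog}"
     if grep -q '__nf_conntrack_confirm' "$SYSLOG" 2>/dev/null; then
         echo "call trace found in $SYSLOG"
     else
         echo "no call trace found in $SYSLOG"
     fi
     ```

     Pointing the same check at a remote syslog server's file works just as well.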
  5. @limetech @bonienl Is mirroring to flash necessary if I already have a local syslog server writing back to an Unraid SSD pool?
  6. With the drop of 6.9.2, I'm going to go ahead and toggle host access to custom networks later today and see if I immediately get a hang. I know my experience is slightly different than others in this thread, but it is the single setting that causes my kernel panics so I think it could be related. Will probably have an update within the next 24 hours.
  7. Well that was quick. I already got a call trace 2 or 3 minutes after I changed that setting. Relevant logging below.
     Mar 29 11:32:19 titan kernel: ------------[ cut here ]------------
     Mar 29 11:32:19 titan kernel: WARNING: CPU: 1 PID: 14815 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
     Mar 29 11:32:19 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat wiregua
  8. I too have been without call traces/kernel panics for a while (11 days uptime), but I will re-enable host access to custom networks and keep an eye out. In my case, I get a buildup of call traces that ultimately results in a full-blown hang, but I will capture a diagnostics before that happens (and still have the logging server going). I anticipate a call trace within the next few hours, and possibly a hang in the early AM tomorrow if I don't intervene by then. FYI, I already moved all containers off br0 but it will still happen for me, so I'm not even su
  9. Disabling host access to custom networks has helped eliminate my kernel panics. Is this something you’d like me to re-enable for the cause?
  10. This board has a reputation for erroneous motherboard temp readings. I'm pretty much always rocking an 80+ mobo temp via IPMI.
  11. Update: I seem to have narrowed this issue down to networking: some combination of utilizing br0 and also enabling "host access to custom networks." Even with containers not using br0, I get the kernel panic/hang when I have the host access option enabled under Docker settings. Something very strange going on. I disabled that setting and I've been good for 4 days. This report seems to have more action.
  12. Made the change back to host access and my server locked up sometime after 2:30am this morning. Looks like (at least for me) that's the primary culprit.
  13. Possibly. I still did get one when I removed them from br0 and turned off those containers entirely, but I don't recall if I rebooted between events. I'll maybe try re-enabling host access and see if it happens.
  14. Not sure if disabling host access to custom networks fixed it, or migrating the two containers that had static IPs assigned on br0 to a Raspberry Pi, but I'm no longer getting these syslog errors/locks. I would prefer to keep everything on the server, so the next project will be setting up a Docker VLAN on my pfsense and TP-Link smart switch. Not once did I see them before 6.9.x, so I'm hopeful that whatever is going on in this kernel is corrected.
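     For anyone attempting the VLAN approach, the Docker side boils down to a macvlan network whose parent is the VLAN sub-interface. A minimal sketch; the subnet, gateway, VLAN ID, and `br0.10` parent are placeholder assumptions to be matched to whatever VLAN gets defined on the pfsense/switch side:

     ```shell
     # Sketch: create a macvlan Docker network on a VLAN sub-interface
     # so containers sit on their own L2 segment, away from the host's
     # primary network. All values here are placeholders.
     docker network create -d macvlan \
       --subnet=192.168.10.0/24 \
       --gateway=192.168.10.1 \
       -o parent=br0.10 \
       docker-vlan
     ```

     This is a configuration sketch only; Unraid normally creates these networks itself when a VLAN is enabled under Network Settings, rather than via the CLI.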
  15. I have also been getting this pretty constantly since upgrading to 6.9.x, and it will eventually result in a system lock for me. I've been trying to troubleshoot over the last few days. Older threads suggested it had to do with Docker assignments on br0, so I removed all references to any containers using br0 (in fact, I moved 3 apps from Unraid Docker to a Raspberry Pi to try to fix this). Does not seem to help. @CorneliousJD When you enabled the VLAN for br0.10, did you keep the setting for allowing host access to custom networks? Any chance you can share how you configured it? I'd like t