kaiguy

Members

  • Posts: 678
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

kaiguy's Achievements

  • Rank: Enthusiast (6/14)
  • Reputation: 21

  1. Seeing the same "link" here as well. SSL/TLS set to Auto. Firefox and Safari.
  2. I've been running my mobo with turbo disabled since March and haven't had a single CPU_CATERR or motherboard temperature warning since (I was getting them pretty frequently on the 6.9.x branch). I upgraded to 6.10.0-rc1 this morning and re-enabled turbo; within an hour I got the motherboard temperature warning from IPMI for hitting 84 degrees, though no CPU_CATERR yet. After a reboot, the temperature warning came back not long after. Are people still consistently getting this erroneous motherboard temperature reading? (A small sensor-polling sketch is included after this list.)
  3. Super helpful overview, @bonienl! I was curious about a few of the items you highlighted. The crash you refer to is the one triggered by enabling the host access to custom networks option, yes? If I have already moved off any containers that required their own IP address, is there any benefit (or drawback) to switching to ipvlan? Does it help the cause just to test it in general, day-to-day usage? (A sketch of what the ipvlan switch looks like at the Docker level follows this list.)
  4. Thanks, bonienl. I searched, but for some reason the report and your post in the release thread didn't show up. I've made the suggested change. Marking this as solved, since it probably is.
  5. Ever since upgrading to 6.10-rc1, I've seen these errors recurring in the syslog:
     Aug 10 05:03:05 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:05 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:06 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:06 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:06 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:07 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:07 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:08 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Aug 10 05:03:09 titan wsdd2[10826]: error: wsdd-mcast-v6: wsd_send_soap_msg: send: Cannot assign requested address
     Preceding many of these wsdd errors, I'm seeing log entries relating to IPv6 addresses (but only IPv4 is enabled in my network settings). It also seems to occur around the times where the docker network and avahi log entries are chatty. Diagnostics attached.
     titan-diagnostics-20210810-0628.zip
  6. Right. As I mentioned, I originally did have containers (unifi, adguard) with fixed IP addresses, but in the process of troubleshooting I changed the network settings. Through further troubleshooting I then realized that the host access to custom networks setting caused the trace even without those fixed IP addresses (and is easily/quickly repeatable), so that's what I've been focusing on. Here you go! Thanks.
  7. Sure, attached. Since I started experiencing these issues I removed unifi, homebridge, and adguard from service, but I didn't fully delete the containers in case I want to bring them back in the future (unifi and adguard were originally assigned static IPs, but I removed even that from these unused container configs). When you say special network setup, what do you mean? Aside from using a defined network for containers, I don't believe I've made any other changes. Attaching a network config screenshot as well.
  8. Unfortunately it looks like the macvlan trace happened for me immediately after I enabled host access to custom networks. I posted diagnostics in the appropriate bug report thread.
  9. Mirrored syslog to flash and enabled host access to custom networks. Within a few minutes I got a call trace, but not a hard hang (expected). I went ahead and captured diagnostics, turned off that setting again, and rebooted (experience shows it will ultimately hang within hours or days once I see this call trace). Hope this helps; happy to try other procedures to aid the cause.
     Apr 8 12:35:55 titan kernel: ------------[ cut here ]------------
     Apr 8 12:35:55 titan kernel: WARNING: CPU: 1 PID: 20324 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
     Apr 8 12:35:55 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb x86_pkg_temp_thermal intel_powerclamp i915 wmi_bmof coretemp ipmi_ssif kvm_intel kvm iosf_mbi drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm crypto_simd cryptd intel_gtt glue_helper mpt3sas agpgart i2c_i801 syscopyarea sysfillrect rapl i2c_algo_bit i2c_smbus sysimgblt raid_class i2c_core intel_cstate acpi_ipmi
     Apr 8 12:35:55 titan kernel: fb_sys_fops nvme scsi_transport_sas wmi intel_uncore nvme_core video ahci ie31200_edac ipmi_si intel_pch_thermal backlight libahci thermal acpi_power_meter fan acpi_pad button [last unloaded: igb]
     Apr 8 12:35:55 titan kernel: CPU: 1 PID: 20324 Comm: kworker/1:0 Not tainted 5.10.28-Unraid #1
     Apr 8 12:35:55 titan kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U, BIOS L2.34 12/23/2020
     Apr 8 12:35:55 titan kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Apr 8 12:35:55 titan kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
     Apr 8 12:35:55 titan kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
     Apr 8 12:35:55 titan kernel: RSP: 0018:ffffc90000178d38 EFLAGS: 00010202
     Apr 8 12:35:55 titan kernel: RAX: 0000000000000188 RBX: 0000000000004d65 RCX: 00000000124a1cea
     Apr 8 12:35:55 titan kernel: RDX: 0000000000000000 RSI: 0000000000000338 RDI: ffffffffa02b3ee0
     Apr 8 12:35:55 titan kernel: RBP: ffff88819acbca00 R08: 000000004bf757e0 R09: 0000000000000000
     Apr 8 12:35:55 titan kernel: R10: 0000000000000158 R11: ffff8882a39b5e00 R12: 000000000000bb38
     Apr 8 12:35:55 titan kernel: R13: ffffffff8210b440 R14: 0000000000004d65 R15: 0000000000000000
     Apr 8 12:35:55 titan kernel: FS: 0000000000000000(0000) GS:ffff88903f440000(0000) knlGS:0000000000000000
     Apr 8 12:35:55 titan kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Apr 8 12:35:55 titan kernel: CR2: 0000150c678fc718 CR3: 000000000400a006 CR4: 00000000003706e0
     Apr 8 12:35:55 titan kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
     Apr 8 12:35:55 titan kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
     Apr 8 12:35:55 titan kernel: Call Trace:
     Apr 8 12:35:55 titan kernel: <IRQ>
     Apr 8 12:35:55 titan kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
     Apr 8 12:35:55 titan kernel: nf_hook_slow+0x39/0x8e
     Apr 8 12:35:55 titan kernel: nf_hook.constprop.0+0xb1/0xd8
     Apr 8 12:35:55 titan kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
     Apr 8 12:35:55 titan kernel: ip_local_deliver+0x49/0x75
     Apr 8 12:35:55 titan kernel: ip_sabotage_in+0x43/0x4d [br_netfilter]
     Apr 8 12:35:55 titan kernel: nf_hook_slow+0x39/0x8e
     Apr 8 12:35:55 titan kernel: nf_hook.constprop.0+0xb1/0xd8
     Apr 8 12:35:55 titan kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
     Apr 8 12:35:55 titan kernel: ip_rcv+0x41/0x61
     Apr 8 12:35:55 titan kernel: __netif_receive_skb_one_core+0x74/0x95
     Apr 8 12:35:55 titan kernel: process_backlog+0xa3/0x13b
     Apr 8 12:35:55 titan kernel: net_rx_action+0xf4/0x29d
     Apr 8 12:35:55 titan kernel: __do_softirq+0xc4/0x1c2
     Apr 8 12:35:55 titan kernel: asm_call_irq_on_stack+0xf/0x20
     Apr 8 12:35:55 titan kernel: </IRQ>
     Apr 8 12:35:55 titan kernel: do_softirq_own_stack+0x2c/0x39
     Apr 8 12:35:55 titan kernel: do_softirq+0x3a/0x44
     Apr 8 12:35:55 titan kernel: netif_rx_ni+0x1c/0x22
     Apr 8 12:35:55 titan kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
     Apr 8 12:35:55 titan kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
     Apr 8 12:35:55 titan kernel: process_one_work+0x13c/0x1d5
     Apr 8 12:35:55 titan kernel: worker_thread+0x18b/0x22f
     Apr 8 12:35:55 titan kernel: ? process_scheduled_works+0x27/0x27
     Apr 8 12:35:55 titan kernel: kthread+0xe5/0xea
     Apr 8 12:35:55 titan kernel: ? __kthread_bind_mask+0x57/0x57
     Apr 8 12:35:55 titan kernel: ret_from_fork+0x1f/0x30
     Apr 8 12:35:55 titan kernel: ---[ end trace 57d37c5af5277fb5 ]---
     titan-diagnostics-20210408-1236.zip
  10. @limetech @bonienl Is mirroring the syslog to flash necessary if I already have a local syslog server writing back to an Unraid SSD pool?
  11. With the drop of 6.9.2, I'm going to go ahead and toggle host access to custom networks later today and see if I immediately get a hang. I know my experience is slightly different than others in this thread, but it is the single setting that causes my kernel panics so I think it could be related. Will probably have an update within the next 24 hours.
  12. Well that was quick. Already got a call trace about 2 or 3 minutes after I changed that setting. Relevant logging below.
      Mar 29 11:32:19 titan kernel: ------------[ cut here ]------------
      Mar 29 11:32:19 titan kernel: WARNING: CPU: 1 PID: 14815 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
      Mar 29 11:32:19 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables igb i915 x86_pkg_temp_thermal intel_powerclamp ipmi_ssif coretemp kvm_intel wmi_bmof iosf_mbi kvm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm mpt3sas crypto_simd intel_gtt cryptd raid_class agpgart syscopyarea glue_helper scsi_transport_sas i2c_i801 input_leds sysfillrect nvme video ahci rapl led_class i2c_algo_bit i2c_smbus sysimgblt nvme_core i2c_core wmi intel_cstate backlight intel_pch_thermal fb_sys_fops intel_uncore libahci acpi_ipmi ie31200_edac ipmi_si
      Mar 29 11:32:19 titan kernel: acpi_power_meter thermal button acpi_pad fan [last unloaded: igb]
      Mar 29 11:32:19 titan kernel: CPU: 1 PID: 14815 Comm: kworker/1:0 Not tainted 5.10.21-Unraid #1
      Mar 29 11:32:19 titan kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U, BIOS L2.34 12/23/2020
      Mar 29 11:32:19 titan kernel: Workqueue: events macvlan_process_broadcast [macvlan]
      Mar 29 11:32:19 titan kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6
      Mar 29 11:32:19 titan kernel: Code: e8 64 f9 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 d5 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 5d f3 ff ff e8 30 f6 ff ff e9 22 01
      Mar 29 11:32:19 titan kernel: RSP: 0018:ffffc90000178d38 EFLAGS: 00010202
      Mar 29 11:32:19 titan kernel: RAX: 0000000000000188 RBX: 0000000000007ca3 RCX: 000000000bdce55a
      Mar 29 11:32:19 titan kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff82009d78
      Mar 29 11:32:19 titan kernel: RBP: ffff8882c2482780 R08: 0000000016e41bac R09: ffff88810185de80
      Mar 29 11:32:19 titan kernel: R10: 0000000000000158 R11: ffff888252b3aa00 R12: 000000000000514e
      Mar 29 11:32:19 titan kernel: R13: ffffffff8210db40 R14: 0000000000007ca3 R15: 0000000000000000
      Mar 29 11:32:19 titan kernel: FS: 0000000000000000(0000) GS:ffff88903f440000(0000) knlGS:0000000000000000
      Mar 29 11:32:19 titan kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Mar 29 11:32:19 titan kernel: CR2: 000014e336c24718 CR3: 000000000400c003 CR4: 00000000003706e0
      Mar 29 11:32:19 titan kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
      Mar 29 11:32:19 titan kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
      Mar 29 11:32:19 titan kernel: Call Trace:
      Mar 29 11:32:19 titan kernel: <IRQ>
      Mar 29 11:32:19 titan kernel: nf_conntrack_confirm+0x2f/0x36
      Mar 29 11:32:19 titan kernel: nf_hook_slow+0x39/0x8e
      Mar 29 11:32:19 titan kernel: nf_hook.constprop.0+0xb1/0xd8
      Mar 29 11:32:19 titan kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
      Mar 29 11:32:19 titan kernel: ip_local_deliver+0x49/0x75
      Mar 29 11:32:19 titan kernel: ip_sabotage_in+0x43/0x4d
      Mar 29 11:32:19 titan kernel: nf_hook_slow+0x39/0x8e
      Mar 29 11:32:19 titan kernel: nf_hook.constprop.0+0xb1/0xd8
      Mar 29 11:32:19 titan kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
      Mar 29 11:32:19 titan kernel: ip_rcv+0x41/0x61
      Mar 29 11:32:19 titan kernel: __netif_receive_skb_one_core+0x74/0x95
      Mar 29 11:32:19 titan kernel: process_backlog+0xa3/0x13b
      Mar 29 11:32:19 titan kernel: net_rx_action+0xf4/0x29d
      Mar 29 11:32:19 titan kernel: __do_softirq+0xc4/0x1c2
      Mar 29 11:32:19 titan kernel: asm_call_irq_on_stack+0xf/0x20
      Mar 29 11:32:19 titan kernel: </IRQ>
      Mar 29 11:32:19 titan kernel: do_softirq_own_stack+0x2c/0x39
      Mar 29 11:32:19 titan kernel: do_softirq+0x3a/0x44
      Mar 29 11:32:19 titan kernel: netif_rx_ni+0x1c/0x22
      Mar 29 11:32:19 titan kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      Mar 29 11:32:19 titan kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
      Mar 29 11:32:19 titan kernel: process_one_work+0x13c/0x1d5
      Mar 29 11:32:19 titan kernel: worker_thread+0x18b/0x22f
      Mar 29 11:32:19 titan kernel: ? process_scheduled_works+0x27/0x27
      Mar 29 11:32:19 titan kernel: kthread+0xe5/0xea
      Mar 29 11:32:19 titan kernel: ? __kthread_bind_mask+0x57/0x57
      Mar 29 11:32:19 titan kernel: ret_from_fork+0x1f/0x30
      Mar 29 11:32:19 titan kernel: ---[ end trace b73556de35a696bd ]---
      I went ahead and generated diagnostics, then went to disable the host access setting. While I was toggling Docker off (in order to access that setting), network connectivity went out. I was able to access the console via IPMI to make that switch, and connectivity came back up. In the past, though, when I haven't actively intervened, by the time I realize I have no connectivity the IPMI console is also hung. I'm generating another diagnostics that may have captured what I just described; it is attached. Edit: I should mention that IPv6 is disabled on unRAID and on my network, yet this call trace seems to explicitly call out IPv6...
      titan-diagnostics-20210329-1140.zip
  13. I too have been without call traces/kernel panics for a while (11 days of uptime), but I will re-enable host access to custom networks and keep an eye out. In my case I get a build-up of call traces that ultimately results in a full-blown hang, but I will capture diagnostics before that happens (and I still have the logging server going; a small script for scanning the captured syslog for these traces is sketched after this list). I anticipate a call trace within the next few hours, and possibly a hang in the early AM tomorrow if I don't intervene by then. FYI, I already transitioned any containers off br0, but it still happens for me, so I'm not even sure my issue is the same as everyone else's in this thread at this point. Or maybe it is. Hopefully it will lead to some answers.
  14. Disabling host access to custom networks has helped eliminate my kernel panics. Is this something you’d like me to re-enable for the cause?
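
Regarding the motherboard temperature warnings in item 2 above, here is a minimal sketch of how the IPMI readings could be polled independently of the Unraid GUI to see whether the reading itself is erroneous. It assumes ipmitool is installed, that the motherboard sensor name contains "MB", and that 80 C is a sensible threshold; all three are assumptions, so run `ipmitool sensor` first to see the names and thresholds your BMC actually reports.

```python
# Minimal sketch: poll IPMI sensors via ipmitool and flag a high motherboard
# temperature reading. The sensor-name substring ("MB") and the 80 C threshold
# are assumptions -- adjust them for your board.
import subprocess

THRESHOLD_C = 80.0  # assumed threshold, not taken from the BMC

def read_temps():
    """Return {sensor_name: degrees_C} for temperature sensors ipmitool reports."""
    out = subprocess.run(
        ["ipmitool", "sensor"], capture_output=True, text=True, check=True
    ).stdout
    temps = {}
    for line in out.splitlines():
        # ipmitool sensor output is pipe-separated: name | value | unit | status | ...
        fields = [f.strip() for f in line.split("|")]
        if len(fields) > 2 and fields[2] == "degrees C":
            try:
                temps[fields[0]] = float(fields[1])
            except ValueError:
                pass  # reading is "na" when the sensor is absent
    return temps

if __name__ == "__main__":
    for name, value in read_temps().items():
        if "MB" in name and value >= THRESHOLD_C:
            print(f"WARNING: {name} = {value:.1f} C (>= {THRESHOLD_C} C)")
```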
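On the ipvlan question in item 3 above, here is a rough sketch, using the Docker SDK for Python, of what switching a custom network from macvlan to ipvlan amounts to. The network name, parent interface, subnet, and gateway are placeholders, and in Unraid itself the change is normally made through the Docker settings page rather than by hand; this only illustrates the underlying difference.

```python
# Sketch of what "switching to ipvlan" means at the Docker level: recreate the
# custom container network with driver "ipvlan" instead of "macvlan".
# Placeholders: network name "br0-ipvlan", parent interface "br0", subnet and
# gateway. Requires the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()

ipam = docker.types.IPAMConfig(
    pool_configs=[docker.types.IPAMPool(subnet="192.168.1.0/24", gateway="192.168.1.1")]
)

# ipvlan in L2 mode keeps containers reachable on the LAN much like macvlan,
# but containers share the parent NIC's MAC address instead of each getting
# their own, which is the usual workaround for these macvlan call traces.
client.networks.create(
    "br0-ipvlan",
    driver="ipvlan",
    options={"parent": "br0", "ipvlan_mode": "l2"},
    ipam=ipam,
)
```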
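For items 9, 12, and 13 above, here is a small sketch for scanning a mirrored or remote syslog capture for the macvlan/nf_conntrack call traces, so a build-up can be spotted before the server hangs. The syslog path is passed as an argument rather than assumed, and the trace markers are taken from the traces posted above.

```python
# Sketch: count macvlan/nf_conntrack call traces in a captured syslog.
# Usage: python3 scan_traces.py /path/to/syslog
import re
import sys

# Markers taken from the traces posted above.
TRACE_START = re.compile(r"-+\[ cut here \]-+")
TRACE_END = re.compile(r"---\[ end trace [0-9a-f]+ \]---")
MACVLAN = re.compile(r"macvlan_process_broadcast|__nf_conntrack_confirm")

def scan(path):
    count, in_trace, is_macvlan = 0, False, False
    with open(path, errors="replace") as fh:
        for line in fh:
            if TRACE_START.search(line):
                in_trace, is_macvlan = True, False
            elif in_trace and MACVLAN.search(line):
                is_macvlan = True
            elif in_trace and TRACE_END.search(line):
                if is_macvlan:
                    count += 1
                in_trace = False
    return count

if __name__ == "__main__":
    print(f"{scan(sys.argv[1])} macvlan-related call trace(s) found")
```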