• 6.9.0/6.9.1 - Kernel Panic due to netfilter (nf_nat_setup_info) - Docker Static IP (macvlan)


    CorneliousJD
    • Urgent

So I had posted another thread about how, after a kernel panic, Docker host access to custom networks doesn't work until Docker is stopped/restarted on 6.9.0.

     

     

After further investigation and setting up syslogging, it appears that host access may actually be what's CAUSING the kernel panic?

EDIT 3/16: It seems I needed to create a VLAN for my Docker containers with static IPs. So far that's working, so it's probably not host access causing the issue, but rather static IPs being set on br0. See the posts below.
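As a rough illustration of that workaround, containers with static IPs can be attached to a macvlan network on a dedicated VLAN sub-interface instead of br0. The VLAN ID, subnet, gateway, interface, and container names below are hypothetical examples, not values taken from this thread:

```shell
# Hypothetical sketch: put static-IP containers on a macvlan network
# bound to a VLAN sub-interface (e.g. br0.5 after enabling VLAN 5 in
# Unraid's network settings) instead of br0. All values are examples.
docker network create -d macvlan \
  --subnet=192.168.5.0/24 \
  --gateway=192.168.5.1 \
  -o parent=br0.5 \
  vlan5

# Attach a container to it with a fixed address
docker run -d --name my-container --network vlan5 --ip 192.168.5.10 alpine sleep 1d
```

On Unraid this is normally done from the container's template page (network type and fixed IP fields) rather than the CLI; the commands above just sketch what gets created underneath.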

     

    Here's my last kernel panic that thankfully got logged to syslog. It references macvlan and netfilter. I don't know enough to be super useful here, but this is my docker setup.

     

[screenshot attached: Docker setup]

     

    Mar 12 03:57:07 Server kernel: ------------[ cut here ]------------
    Mar 12 03:57:07 Server kernel: WARNING: CPU: 17 PID: 626 at net/netfilter/nf_nat_core.c:614 nf_nat_setup_info+0x6c/0x652 [nf_nat]
    Mar 12 03:57:07 Server kernel: Modules linked in: ccp macvlan xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_MASQUERADE iptable_nat nf_nat xfs md_mod ip6table_filter ip6_tables iptable_filter ip_tables bonding igb i2c_algo_bit cp210x usbserial sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd ipmi_ssif isci glue_helper mpt3sas i2c_i801 rapl libsas i2c_smbus input_leds i2c_core ahci intel_cstate raid_class led_class acpi_ipmi intel_uncore libahci scsi_transport_sas wmi ipmi_si button [last unloaded: ipmi_devintf]
    Mar 12 03:57:07 Server kernel: CPU: 17 PID: 626 Comm: kworker/17:2 Tainted: G        W         5.10.19-Unraid #1
    Mar 12 03:57:07 Server kernel: Hardware name: Supermicro PIO-617R-TLN4F+-ST031/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.2 03/04/2015
    Mar 12 03:57:07 Server kernel: Workqueue: events macvlan_process_broadcast [macvlan]
    Mar 12 03:57:07 Server kernel: RIP: 0010:nf_nat_setup_info+0x6c/0x652 [nf_nat]
    Mar 12 03:57:07 Server kernel: Code: 89 fb 49 89 f6 41 89 d4 76 02 0f 0b 48 8b 93 80 00 00 00 89 d0 25 00 01 00 00 45 85 e4 75 07 89 d0 25 80 00 00 00 85 c0 74 07 <0f> 0b e9 1f 05 00 00 48 8b 83 90 00 00 00 4c 8d 6c 24 20 48 8d 73
    Mar 12 03:57:07 Server kernel: RSP: 0018:ffffc90006778c38 EFLAGS: 00010202
    Mar 12 03:57:07 Server kernel: RAX: 0000000000000080 RBX: ffff88837c8303c0 RCX: ffff88811e834880
    Mar 12 03:57:07 Server kernel: RDX: 0000000000000180 RSI: ffffc90006778d14 RDI: ffff88837c8303c0
    Mar 12 03:57:07 Server kernel: RBP: ffffc90006778d00 R08: 0000000000000000 R09: ffff889083c68160
    Mar 12 03:57:07 Server kernel: R10: 0000000000000158 R11: ffff8881e79c1400 R12: 0000000000000000
    Mar 12 03:57:07 Server kernel: R13: 0000000000000000 R14: ffffc90006778d14 R15: 0000000000000001
    Mar 12 03:57:07 Server kernel: FS:  0000000000000000(0000) GS:ffff88903fc40000(0000) knlGS:0000000000000000
    Mar 12 03:57:07 Server kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar 12 03:57:07 Server kernel: CR2: 000000c000b040b8 CR3: 000000000200c005 CR4: 00000000001706e0
    Mar 12 03:57:07 Server kernel: Call Trace:
    Mar 12 03:57:07 Server kernel: <IRQ>
    Mar 12 03:57:07 Server kernel: ? activate_task+0x9/0x12
    Mar 12 03:57:07 Server kernel: ? resched_curr+0x3f/0x4c
    Mar 12 03:57:07 Server kernel: ? ipt_do_table+0x49b/0x5c0 [ip_tables]
    Mar 12 03:57:07 Server kernel: ? try_to_wake_up+0x1b0/0x1e5
    Mar 12 03:57:07 Server kernel: nf_nat_alloc_null_binding+0x71/0x88 [nf_nat]
    Mar 12 03:57:07 Server kernel: nf_nat_inet_fn+0x91/0x182 [nf_nat]
    Mar 12 03:57:07 Server kernel: nf_hook_slow+0x39/0x8e
    Mar 12 03:57:07 Server kernel: nf_hook.constprop.0+0xb1/0xd8
    Mar 12 03:57:07 Server kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
    Mar 12 03:57:07 Server kernel: ip_local_deliver+0x49/0x75
    Mar 12 03:57:07 Server kernel: ip_sabotage_in+0x43/0x4d
    Mar 12 03:57:07 Server kernel: nf_hook_slow+0x39/0x8e
    Mar 12 03:57:07 Server kernel: nf_hook.constprop.0+0xb1/0xd8
    Mar 12 03:57:07 Server kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
    Mar 12 03:57:07 Server kernel: ip_rcv+0x41/0x61
    Mar 12 03:57:07 Server kernel: __netif_receive_skb_one_core+0x74/0x95
    Mar 12 03:57:07 Server kernel: process_backlog+0xa3/0x13b
    Mar 12 03:57:07 Server kernel: net_rx_action+0xf4/0x29d
    Mar 12 03:57:07 Server kernel: __do_softirq+0xc4/0x1c2
    Mar 12 03:57:07 Server kernel: asm_call_irq_on_stack+0x12/0x20
    Mar 12 03:57:07 Server kernel: </IRQ>
    Mar 12 03:57:07 Server kernel: do_softirq_own_stack+0x2c/0x39
    Mar 12 03:57:07 Server kernel: do_softirq+0x3a/0x44
    Mar 12 03:57:07 Server kernel: netif_rx_ni+0x1c/0x22
    Mar 12 03:57:07 Server kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
    Mar 12 03:57:07 Server kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
    Mar 12 03:57:07 Server kernel: process_one_work+0x13c/0x1d5
    Mar 12 03:57:07 Server kernel: worker_thread+0x18b/0x22f
    Mar 12 03:57:07 Server kernel: ? process_scheduled_works+0x27/0x27
    Mar 12 03:57:07 Server kernel: kthread+0xe5/0xea
    Mar 12 03:57:07 Server kernel: ? __kthread_bind_mask+0x57/0x57
    Mar 12 03:57:07 Server kernel: ret_from_fork+0x22/0x30
    Mar 12 03:57:07 Server kernel: ---[ end trace b3ca21ac5f2c2720 ]---

     




    User Feedback

    Recommended Comments



    I too have been without call traces/kernel panics for a while (11 days uptime), but I will re-enable host access to custom networks and keep an eye out. In my case, I do get a build up of call traces which end up ultimately resulting in a full blown hang, but I will capture a diagnostics before that happens (and still have the logging server going).  I anticipate a call trace within the next few hours, and possibly a hang in the early AM tomorrow if I don't intervene by then.

     

FYI, I already moved all of my containers off br0, but it still happens for me, so I'm not even sure my issue is the same as everyone else's in this thread at this point. Or maybe it is. But hopefully it will lead to some answers.

    16 minutes ago, bonienl said:

     

In this case, is it the connection to the server that stops working, or does the complete server halt?

In other words, is the local console still working in this case?

     

     

    Complete server halt, local console wouldn't respond (it just displayed the kernel panic/halt on the screen).

All connections to the server were severed; I had to reboot it with the power button.


Well, that was quick. I already got a call trace about 2 or 3 minutes after enabling that setting. Relevant logging below.

    Mar 29 11:32:19 titan kernel: ------------[ cut here ]------------
    Mar 29 11:32:19 titan kernel: WARNING: CPU: 1 PID: 14815 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6
    Mar 29 11:32:19 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables igb i915 x86_pkg_temp_thermal intel_powerclamp ipmi_ssif coretemp kvm_intel wmi_bmof iosf_mbi kvm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm mpt3sas crypto_simd intel_gtt cryptd raid_class agpgart syscopyarea glue_helper scsi_transport_sas i2c_i801 input_leds sysfillrect nvme video ahci rapl led_class i2c_algo_bit i2c_smbus sysimgblt nvme_core i2c_core wmi intel_cstate backlight intel_pch_thermal fb_sys_fops intel_uncore libahci acpi_ipmi ie31200_edac ipmi_si
    Mar 29 11:32:19 titan kernel: acpi_power_meter thermal button acpi_pad fan [last unloaded: igb]
    Mar 29 11:32:19 titan kernel: CPU: 1 PID: 14815 Comm: kworker/1:0 Not tainted 5.10.21-Unraid #1
    Mar 29 11:32:19 titan kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U, BIOS L2.34 12/23/2020
    Mar 29 11:32:19 titan kernel: Workqueue: events macvlan_process_broadcast [macvlan]
    Mar 29 11:32:19 titan kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6
    Mar 29 11:32:19 titan kernel: Code: e8 64 f9 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 d5 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 5d f3 ff ff e8 30 f6 ff ff e9 22 01
    Mar 29 11:32:19 titan kernel: RSP: 0018:ffffc90000178d38 EFLAGS: 00010202
    Mar 29 11:32:19 titan kernel: RAX: 0000000000000188 RBX: 0000000000007ca3 RCX: 000000000bdce55a
    Mar 29 11:32:19 titan kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff82009d78
    Mar 29 11:32:19 titan kernel: RBP: ffff8882c2482780 R08: 0000000016e41bac R09: ffff88810185de80
    Mar 29 11:32:19 titan kernel: R10: 0000000000000158 R11: ffff888252b3aa00 R12: 000000000000514e
    Mar 29 11:32:19 titan kernel: R13: ffffffff8210db40 R14: 0000000000007ca3 R15: 0000000000000000
    Mar 29 11:32:19 titan kernel: FS:  0000000000000000(0000) GS:ffff88903f440000(0000) knlGS:0000000000000000
    Mar 29 11:32:19 titan kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar 29 11:32:19 titan kernel: CR2: 000014e336c24718 CR3: 000000000400c003 CR4: 00000000003706e0
    Mar 29 11:32:19 titan kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Mar 29 11:32:19 titan kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Mar 29 11:32:19 titan kernel: Call Trace:
    Mar 29 11:32:19 titan kernel: <IRQ>
    Mar 29 11:32:19 titan kernel: nf_conntrack_confirm+0x2f/0x36
    Mar 29 11:32:19 titan kernel: nf_hook_slow+0x39/0x8e
    Mar 29 11:32:19 titan kernel: nf_hook.constprop.0+0xb1/0xd8
    Mar 29 11:32:19 titan kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
    Mar 29 11:32:19 titan kernel: ip_local_deliver+0x49/0x75
    Mar 29 11:32:19 titan kernel: ip_sabotage_in+0x43/0x4d
    Mar 29 11:32:19 titan kernel: nf_hook_slow+0x39/0x8e
    Mar 29 11:32:19 titan kernel: nf_hook.constprop.0+0xb1/0xd8
    Mar 29 11:32:19 titan kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
    Mar 29 11:32:19 titan kernel: ip_rcv+0x41/0x61
    Mar 29 11:32:19 titan kernel: __netif_receive_skb_one_core+0x74/0x95
    Mar 29 11:32:19 titan kernel: process_backlog+0xa3/0x13b
    Mar 29 11:32:19 titan kernel: net_rx_action+0xf4/0x29d
    Mar 29 11:32:19 titan kernel: __do_softirq+0xc4/0x1c2
    Mar 29 11:32:19 titan kernel: asm_call_irq_on_stack+0xf/0x20
    Mar 29 11:32:19 titan kernel: </IRQ>
    Mar 29 11:32:19 titan kernel: do_softirq_own_stack+0x2c/0x39
    Mar 29 11:32:19 titan kernel: do_softirq+0x3a/0x44
    Mar 29 11:32:19 titan kernel: netif_rx_ni+0x1c/0x22
    Mar 29 11:32:19 titan kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
    Mar 29 11:32:19 titan kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
    Mar 29 11:32:19 titan kernel: process_one_work+0x13c/0x1d5
    Mar 29 11:32:19 titan kernel: worker_thread+0x18b/0x22f
    Mar 29 11:32:19 titan kernel: ? process_scheduled_works+0x27/0x27
    Mar 29 11:32:19 titan kernel: kthread+0xe5/0xea
    Mar 29 11:32:19 titan kernel: ? __kthread_bind_mask+0x57/0x57
    Mar 29 11:32:19 titan kernel: ret_from_fork+0x1f/0x30
    Mar 29 11:32:19 titan kernel: ---[ end trace b73556de35a696bd ]---

     

I went ahead and generated a diagnostics, then went to disable the host access setting. While I was toggling Docker off (in order to access that setting), network connectivity went out. I was able to access the console via IPMI to ultimately make that switch, and connectivity came back up. In the past, though, when I haven't actively intervened, by the time I realize I have no connectivity the IPMI console is also hung. I generated another diagnostics that may have captured what I just described; it is attached.

     

Edit: I should mention, IPv6 is disabled on Unraid and on my network. Yet this call trace seems to explicitly call out IPv6...

     

     

    titan-diagnostics-20210329-1140.zip

    Edited by kaiguy
    More info

Add me to the list of those with the issue. I'm running static IPs as well. Curiously, I didn't have this issue until I upgraded to 6.9.1 from the 6.9.0 beta... Just as curious, I'm running 6.9.1 on my backup server, also with static IPs, and it's not having this issue. I'm going to roll back to 6.9.0-rc2 and also remove SSD Trim to see if the issues stop; those are the only two changes I've made, and that is when the problems started. If running on the beta shows no issues, I'll add SSD Trim back to see if it starts again. If not, I'll go back to 6.9.1 without SSD Trim and see whether issues begin or not. Hope I can provide something useful to the cause!

    tower-syslog-20210331-0541.zip tower-diagnostics-20210331-0841.zip

    Edited by isrdude
    3 hours ago, isrdude said:

    Add me to the list of those with the issue.

    Your syslog and diagnostics are taken after rebooting the system. Unfortunately the relevant information is gone once rebooted. Next time, please take diagnostics before rebooting (and/or activate flash mirroring).

     


    That was part of the problem.  I couldn't take diagnostics before rebooting as everything locked up.  I had to do a hard reboot just to get system response back. 

     

As far as flash mirroring goes, I've never heard of it and never done it, so tell me how and I will do so.

    Thanks

    bonienl

    Posted (edited)

Go to Settings -> Syslog Server -> Mirror syslog to flash = Yes

     

    This will keep a real-time copy of the syslog file on your flash device in the folder /logs.
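Because the mirrored copy on the flash survives a hard hang, call traces like the ones in this thread can be pulled out of it after a reboot. A minimal sketch, assuming the mirrored file ends up as /boot/logs/syslog (the flash device is mounted at /boot, and /logs is the folder mentioned above):

```shell
# extract_traces FILE: print every kernel call-trace block from a syslog,
# i.e. everything between the "[ cut here ]" and "end trace" markers.
extract_traces() {
  awk '/\[ cut here \]/ {p=1} p {print} /end trace/ {p=0}' "$1"
}

# With flash mirroring enabled on Unraid:
#   extract_traces /boot/logs/syslog
```

This only filters what syslog already captured; if the box hangs before the lines are written, nothing can recover them.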

     

    Edited by bonienl

    Hmmmm, I don't have a Syslog Server option in Tools. Just Syslog. When I click on it, I get the script but no option for Mirror

    7 minutes ago, bonienl said:

    There is an interesting kernel bug fix, which looks like our case https://lkml.org/lkml/2021/3/29/499

     

I don't know if this is already available in a Linux version for Unraid, perhaps @limetech can tell?

     

    Interesting.  Unraid OS 6.9.1 is on kernel 5.10.21 and the referenced patch is not applied.  Upcoming 6.9.2 release is on kernel 5.10.27 which does have the patch.  Working on finalizing the release now.


Well, my rollback to 6.9.0-rc2 lasted about 12 hours and then another lockup/crash... Here is my syslog from my flash drive. I'm going back up to 6.9.1 since the issue is obviously present across the 6.9.x builds.

    syslog


    Somehow your syslog did not capture the call trace.

     

    Anyway, I am quite confident that the patch introduced in kernel 5.10.27 will help solve our issue.

    Once Unraid 6.9.2 is released, this can be tested.

     


    With the drop of 6.9.2, I'm going to go ahead and toggle host access to custom networks later today and see if I immediately get a hang. I know my experience is slightly different than others in this thread, but it is the single setting that causes my kernel panics so I think it could be related. Will probably have an update within the next 24 hours.

    1 hour ago, kaiguy said:

    With the drop of 6.9.2, I'm going to go ahead and toggle host access to custom networks later today and see if I immediately get a hang. I know my experience is slightly different than others in this thread, but it is the single setting that causes my kernel panics so I think it could be related. Will probably have an update within the next 24 hours.

     

    Can you enable syslog mirroring to the flash device? This would help capturing the events.

    7 minutes ago, bonienl said:

     

    Can you enable syslog mirroring to the flash device? This would help capturing the events.

     

...and be sure to disable it again at some point, since that is going to write to the flash quite a bit.


The local syslog server requires the server itself to keep working, so it may miss entries when the system hangs unexpectedly.

     

    The mirror function simply copies everything simultaneously to syslog and flash, and can catch more in case the system hangs.

     

    Of course I am expecting everything to work and no more call traces :)

     


Mirrored syslog to flash, enabled host access to custom networks. Within a few minutes I got a call trace, but not a hard hang (as expected). I went ahead and captured a diagnostics, turned that setting off again, and rebooted (as experience shows it will ultimately hang within hours or days once I see this call trace).

     

    Hope this helps. Happy to try other procedures to aid the cause.

     

    Apr  8 12:35:55 titan kernel: ------------[ cut here ]------------
    Apr  8 12:35:55 titan kernel: WARNING: CPU: 1 PID: 20324 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
    Apr  8 12:35:55 titan kernel: Modules linked in: macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_tcpudp veth xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs nfsd lockd grace sunrpc md_mod ipmi_devintf nct6775 hwmon_vid iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb x86_pkg_temp_thermal intel_powerclamp i915 wmi_bmof coretemp ipmi_ssif kvm_intel kvm iosf_mbi drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel drm crypto_simd cryptd intel_gtt glue_helper mpt3sas agpgart i2c_i801 syscopyarea sysfillrect rapl i2c_algo_bit i2c_smbus sysimgblt raid_class i2c_core intel_cstate acpi_ipmi
    Apr  8 12:35:55 titan kernel: fb_sys_fops nvme scsi_transport_sas wmi intel_uncore nvme_core video ahci ie31200_edac ipmi_si intel_pch_thermal backlight libahci thermal acpi_power_meter fan acpi_pad button [last unloaded: igb]
    Apr  8 12:35:55 titan kernel: CPU: 1 PID: 20324 Comm: kworker/1:0 Not tainted 5.10.28-Unraid #1
    Apr  8 12:35:55 titan kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C246D4U, BIOS L2.34 12/23/2020
    Apr  8 12:35:55 titan kernel: Workqueue: events macvlan_process_broadcast [macvlan]
    Apr  8 12:35:55 titan kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
    Apr  8 12:35:55 titan kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
    Apr  8 12:35:55 titan kernel: RSP: 0018:ffffc90000178d38 EFLAGS: 00010202
    Apr  8 12:35:55 titan kernel: RAX: 0000000000000188 RBX: 0000000000004d65 RCX: 00000000124a1cea
    Apr  8 12:35:55 titan kernel: RDX: 0000000000000000 RSI: 0000000000000338 RDI: ffffffffa02b3ee0
    Apr  8 12:35:55 titan kernel: RBP: ffff88819acbca00 R08: 000000004bf757e0 R09: 0000000000000000
    Apr  8 12:35:55 titan kernel: R10: 0000000000000158 R11: ffff8882a39b5e00 R12: 000000000000bb38
    Apr  8 12:35:55 titan kernel: R13: ffffffff8210b440 R14: 0000000000004d65 R15: 0000000000000000
    Apr  8 12:35:55 titan kernel: FS:  0000000000000000(0000) GS:ffff88903f440000(0000) knlGS:0000000000000000
    Apr  8 12:35:55 titan kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Apr  8 12:35:55 titan kernel: CR2: 0000150c678fc718 CR3: 000000000400a006 CR4: 00000000003706e0
    Apr  8 12:35:55 titan kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Apr  8 12:35:55 titan kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Apr  8 12:35:55 titan kernel: Call Trace:
    Apr  8 12:35:55 titan kernel: <IRQ>
    Apr  8 12:35:55 titan kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
    Apr  8 12:35:55 titan kernel: nf_hook_slow+0x39/0x8e
    Apr  8 12:35:55 titan kernel: nf_hook.constprop.0+0xb1/0xd8
    Apr  8 12:35:55 titan kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
    Apr  8 12:35:55 titan kernel: ip_local_deliver+0x49/0x75
    Apr  8 12:35:55 titan kernel: ip_sabotage_in+0x43/0x4d [br_netfilter]
    Apr  8 12:35:55 titan kernel: nf_hook_slow+0x39/0x8e
    Apr  8 12:35:55 titan kernel: nf_hook.constprop.0+0xb1/0xd8
    Apr  8 12:35:55 titan kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
    Apr  8 12:35:55 titan kernel: ip_rcv+0x41/0x61
    Apr  8 12:35:55 titan kernel: __netif_receive_skb_one_core+0x74/0x95
    Apr  8 12:35:55 titan kernel: process_backlog+0xa3/0x13b
    Apr  8 12:35:55 titan kernel: net_rx_action+0xf4/0x29d
    Apr  8 12:35:55 titan kernel: __do_softirq+0xc4/0x1c2
    Apr  8 12:35:55 titan kernel: asm_call_irq_on_stack+0xf/0x20
    Apr  8 12:35:55 titan kernel: </IRQ>
    Apr  8 12:35:55 titan kernel: do_softirq_own_stack+0x2c/0x39
    Apr  8 12:35:55 titan kernel: do_softirq+0x3a/0x44
    Apr  8 12:35:55 titan kernel: netif_rx_ni+0x1c/0x22
    Apr  8 12:35:55 titan kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
    Apr  8 12:35:55 titan kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
    Apr  8 12:35:55 titan kernel: process_one_work+0x13c/0x1d5
    Apr  8 12:35:55 titan kernel: worker_thread+0x18b/0x22f
    Apr  8 12:35:55 titan kernel: ? process_scheduled_works+0x27/0x27
    Apr  8 12:35:55 titan kernel: kthread+0xe5/0xea
    Apr  8 12:35:55 titan kernel: ? __kthread_bind_mask+0x57/0x57
    Apr  8 12:35:55 titan kernel: ret_from_fork+0x1f/0x30
    Apr  8 12:35:55 titan kernel: ---[ end trace 57d37c5af5277fb5 ]---

     

    titan-diagnostics-20210408-1236.zip

    1 hour ago, kaiguy said:

    Happy to try other procedures to aid the cause.

     

You seem to have a somewhat special network setup with Docker, and quite a number of containers (I count 17) connected to a user-defined network. Can you post a screenshot of the Docker page? It would help me understand better.

     

    2 minutes ago, bonienl said:

     

    You seem to have a bit of special network setup with docker and quite a number of containers (I count 17) connected to a user defined network. Can you place a screenshot of the Docker page, it helps me understand better.

     

Sure, attached. Since I started experiencing these issues, I removed unifi, homebridge, and adguard from service, but I didn't fully delete the containers in case I want to bring them back in the future (unifi and adguard were originally assigned static IPs, but I removed even that from these unused container configs).

     

When you say special network setup, what do you mean? Aside from using a defined network for containers, I don't believe I've made any other changes. Attaching a network config screenshot as well.

    docker.png

    network.png

    3 minutes ago, kaiguy said:

When you say special network setup, what do you mean?

     

Your case is very different: you don't have any custom (macvlan) network defined with containers configured with their own fixed IP addresses. Instead you have a user-defined bridge network (proxynet), and your containers reside in this network (172.18.0.X).

     

    Still the call trace refers to macvlan, which is puzzling to me.

     

Q: when host access is enabled, can you show the routing table again? I'd like to see which shim interfaces are defined in this case.
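For reference, the information asked for here can be gathered from the console with standard iproute2 commands. Note this is a hedged sketch: the "shim-br0" naming reflects Unraid's convention for the macvlan shim interfaces it creates when host access to custom networks is enabled, and is an assumption, not something shown in this thread's logs.

```shell
# Show the routing table (the shim networks appear here as extra routes)
ip route show

# List macvlan interfaces in detail; Unraid's shim interfaces
# (e.g. shim-br0) should show up when host access is enabled
ip -d link show type macvlan
```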

     





