• 6.9.0/6.9.1 - Kernel Panic due to netfilter (nf_nat_setup_info) - Docker Static IP (macvlan)


    CorneliousJD
    • Urgent

    So I had posted another thread about how, after a kernel panic, Docker host access to custom networks doesn't work until Docker is stopped/restarted on 6.9.0.

     

     

    After further investigation and setting up syslogging, it appears that it may actually be that host access that's CAUSING the kernel panic?

    EDIT 3/16: I guess I needed to create a VLAN for my Docker containers with static IPs. So far that's working, so it's probably not HOST access causing the issue, but rather br0 static IPs being set. See the posts below.
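    For anyone curious what that workaround amounts to, here is a rough CLI sketch of it. Unraid does all of this through the Network Settings and Docker Settings pages; the interface name, VLAN ID, subnet, and container below are made-up examples, not the poster's actual configuration:

```shell
# Hypothetical example: put static-IP containers on a VLAN sub-interface
# instead of br0. VLAN ID 5 and the 192.168.5.0/24 subnet are assumptions.
ip link add link eth0 name eth0.5 type vlan id 5
ip link set eth0.5 up

# Docker macvlan network whose parent is the VLAN interface, not br0
docker network create -d macvlan \
  --subnet=192.168.5.0/24 --gateway=192.168.5.1 \
  -o parent=eth0.5 vlan5

# Run a container with a static IP on that VLAN
docker run -d --name=pihole --network=vlan5 --ip=192.168.5.10 pihole/pihole
```

The effect is that static-IP container traffic rides a separate VLAN interface, so it no longer shares br0 with the host's own traffic.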

     

    Here's my last kernel panic that thankfully got logged to syslog. It references macvlan and netfilter. I don't know enough to be super useful here, but this is my docker setup.

     

    (screenshot of Docker setup attached)

     

    Mar 12 03:57:07 Server kernel: ------------[ cut here ]------------
    Mar 12 03:57:07 Server kernel: WARNING: CPU: 17 PID: 626 at net/netfilter/nf_nat_core.c:614 nf_nat_setup_info+0x6c/0x652 [nf_nat]
    Mar 12 03:57:07 Server kernel: Modules linked in: ccp macvlan xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_MASQUERADE iptable_nat nf_nat xfs md_mod ip6table_filter ip6_tables iptable_filter ip_tables bonding igb i2c_algo_bit cp210x usbserial sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd ipmi_ssif isci glue_helper mpt3sas i2c_i801 rapl libsas i2c_smbus input_leds i2c_core ahci intel_cstate raid_class led_class acpi_ipmi intel_uncore libahci scsi_transport_sas wmi ipmi_si button [last unloaded: ipmi_devintf]
    Mar 12 03:57:07 Server kernel: CPU: 17 PID: 626 Comm: kworker/17:2 Tainted: G        W         5.10.19-Unraid #1
    Mar 12 03:57:07 Server kernel: Hardware name: Supermicro PIO-617R-TLN4F+-ST031/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.2 03/04/2015
    Mar 12 03:57:07 Server kernel: Workqueue: events macvlan_process_broadcast [macvlan]
    Mar 12 03:57:07 Server kernel: RIP: 0010:nf_nat_setup_info+0x6c/0x652 [nf_nat]
    Mar 12 03:57:07 Server kernel: Code: 89 fb 49 89 f6 41 89 d4 76 02 0f 0b 48 8b 93 80 00 00 00 89 d0 25 00 01 00 00 45 85 e4 75 07 89 d0 25 80 00 00 00 85 c0 74 07 <0f> 0b e9 1f 05 00 00 48 8b 83 90 00 00 00 4c 8d 6c 24 20 48 8d 73
    Mar 12 03:57:07 Server kernel: RSP: 0018:ffffc90006778c38 EFLAGS: 00010202
    Mar 12 03:57:07 Server kernel: RAX: 0000000000000080 RBX: ffff88837c8303c0 RCX: ffff88811e834880
    Mar 12 03:57:07 Server kernel: RDX: 0000000000000180 RSI: ffffc90006778d14 RDI: ffff88837c8303c0
    Mar 12 03:57:07 Server kernel: RBP: ffffc90006778d00 R08: 0000000000000000 R09: ffff889083c68160
    Mar 12 03:57:07 Server kernel: R10: 0000000000000158 R11: ffff8881e79c1400 R12: 0000000000000000
    Mar 12 03:57:07 Server kernel: R13: 0000000000000000 R14: ffffc90006778d14 R15: 0000000000000001
    Mar 12 03:57:07 Server kernel: FS:  0000000000000000(0000) GS:ffff88903fc40000(0000) knlGS:0000000000000000
    Mar 12 03:57:07 Server kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar 12 03:57:07 Server kernel: CR2: 000000c000b040b8 CR3: 000000000200c005 CR4: 00000000001706e0
    Mar 12 03:57:07 Server kernel: Call Trace:
    Mar 12 03:57:07 Server kernel: <IRQ>
    Mar 12 03:57:07 Server kernel: ? activate_task+0x9/0x12
    Mar 12 03:57:07 Server kernel: ? resched_curr+0x3f/0x4c
    Mar 12 03:57:07 Server kernel: ? ipt_do_table+0x49b/0x5c0 [ip_tables]
    Mar 12 03:57:07 Server kernel: ? try_to_wake_up+0x1b0/0x1e5
    Mar 12 03:57:07 Server kernel: nf_nat_alloc_null_binding+0x71/0x88 [nf_nat]
    Mar 12 03:57:07 Server kernel: nf_nat_inet_fn+0x91/0x182 [nf_nat]
    Mar 12 03:57:07 Server kernel: nf_hook_slow+0x39/0x8e
    Mar 12 03:57:07 Server kernel: nf_hook.constprop.0+0xb1/0xd8
    Mar 12 03:57:07 Server kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
    Mar 12 03:57:07 Server kernel: ip_local_deliver+0x49/0x75
    Mar 12 03:57:07 Server kernel: ip_sabotage_in+0x43/0x4d
    Mar 12 03:57:07 Server kernel: nf_hook_slow+0x39/0x8e
    Mar 12 03:57:07 Server kernel: nf_hook.constprop.0+0xb1/0xd8
    Mar 12 03:57:07 Server kernel: ? l3mdev_l3_rcv.constprop.0+0x50/0x50
    Mar 12 03:57:07 Server kernel: ip_rcv+0x41/0x61
    Mar 12 03:57:07 Server kernel: __netif_receive_skb_one_core+0x74/0x95
    Mar 12 03:57:07 Server kernel: process_backlog+0xa3/0x13b
    Mar 12 03:57:07 Server kernel: net_rx_action+0xf4/0x29d
    Mar 12 03:57:07 Server kernel: __do_softirq+0xc4/0x1c2
    Mar 12 03:57:07 Server kernel: asm_call_irq_on_stack+0x12/0x20
    Mar 12 03:57:07 Server kernel: </IRQ>
    Mar 12 03:57:07 Server kernel: do_softirq_own_stack+0x2c/0x39
    Mar 12 03:57:07 Server kernel: do_softirq+0x3a/0x44
    Mar 12 03:57:07 Server kernel: netif_rx_ni+0x1c/0x22
    Mar 12 03:57:07 Server kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
    Mar 12 03:57:07 Server kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
    Mar 12 03:57:07 Server kernel: process_one_work+0x13c/0x1d5
    Mar 12 03:57:07 Server kernel: worker_thread+0x18b/0x22f
    Mar 12 03:57:07 Server kernel: ? process_scheduled_works+0x27/0x27
    Mar 12 03:57:07 Server kernel: kthread+0xe5/0xea
    Mar 12 03:57:07 Server kernel: ? __kthread_bind_mask+0x57/0x57
    Mar 12 03:57:07 Server kernel: ret_from_fork+0x22/0x30
    Mar 12 03:57:07 Server kernel: ---[ end trace b3ca21ac5f2c2720 ]---

     




    User Feedback

    Recommended Comments



      

    On 2/9/2022 at 8:14 PM, Mr_Jay84 said:

    I've now disabled "Host access to custom networks" - Let's see how that goes.

    How did you go with this? I've been suffering this issue for a while and independently from this thread came across this as a possible cause. I've only been running a few hours and it can take anywhere between 12 hours and 30+ days for it to occur.

    Link to comment
    40 minutes ago, Shonky said:

      

    How did you go with this? I've been suffering this issue for a while and independently from this thread came across this as a possible cause. I've only been running a few hours and it can take anywhere between 12 hours and 30+ days for it to occur.

    Still testing various methods out. ATM I have the dockers on another NIC and am using ipvlan.

     

    This has been plaguing me for two years now!

    Link to comment
    13 minutes ago, Mr_Jay84 said:

    Still testing various methods out. ATM I have the dockers on another NIC and am using ipvlan.

     

    This has been plaguing me for two years now!

    Up until a week ago I was still running 6.9.x as mentioned earlier, with host access, with flawless uptime, given the workaround I indicated. I recently accidentally downgraded to 6.8.3 (flash device problems, long story) and I'm still stable. Perhaps try 6.8.3?

     

     

     

    PS, anyone on the 6.10 rc series who can verify stability? I'm unwilling to touch anything until Limetech has this issue figured out. I tried briefly with a brand new flash device but it issued me a trial license without telling me that's what it was going to do, and since keys are no longer locally managed I can't fix it without contacting support (GREAT JOB LIMETECH), so capital F that until I know it's gonna work.

    Link to comment
    9 minutes ago, codefaux said:

    Up until a week ago I was still running 6.9.x as mentioned earlier, with host access, with flawless uptime, given the workaround I indicated. I recently accidentally downgraded to 6.8.3 (flash device problems, long story) and I'm still stable. Perhaps try 6.8.3?

     

     

     

    PS, anyone on the 6.10 rc series who can verify stability? I'm unwilling to touch anything until Limetech has this issue figured out. I tried briefly with a brand new flash device but it issued me a trial license without telling me that's what it was going to do, and since keys are no longer locally managed I can't fix it without contacting support (GREAT JOB LIMETECH), so capital F that until I know it's gonna work.

    I'm using 6.10 RC2. The issue has been present for me since 6.8.

    Link to comment

    6.9.2 was where I'd been for a long time, with it occurring semi-regularly.

    Upgraded to 6.10-rc2 just the other day and it happened within about 12ish hours. I don't think 6.10-rc2 is any worse, just luck of the draw.

     

    At this point the released 6.9.2 was just as bad, so release vs. pre-release isn't really a meaningful distinction on my setup.

    Link to comment
    23 minutes ago, codefaux said:

    Up until a week ago I was still running 6.9.x as mentioned earlier, with host access, with flawless uptime, given the workaround I indicated. I recently accidentally downgraded to 6.8.3 (flash device problems, long story) and I'm still stable. Perhaps try 6.8.3?

    Is this the right post? You say host access *disabled* but in the quote above you say "with host access":

     

    I don't have any bridging enabled, but the host access does create some sort of bridge. I found my firewall was complaining that my unRAID machine's IP was changing between two MAC addresses, which is bad (tm), so that's how I ended up turning that host access thing off today.

    Edited by Shonky
    Link to comment
    1 minute ago, Shonky said:

    Is this the right post? You say host access *disabled* but in the quote above you say "with host access":

     

     

    Try 6.8.3, and enable VLANs even if you're not using them. That seems to be the part which fixed my issues. If that isn't stable I'll screenshot my configuration and we'll try to figure it out -- I was crashing every few days with five or so containers. Now I'm stable with easily a dozen running right now.

     

    Good catch, I forgot about that actually. I had disabled host access at the time, expecting it to have been part of the fix. Currently host access is enabled, still stable, on 6.8.3 -- other things may have changed since I accidentally reverted a version, but stability is unaffected. Here's a screenshot of my currently running system.

     

    (screenshot of currently running system attached)

     

    Link to comment

    As an afterthought; because it's probably relevant, here's my network config page.

     

    Heading down for the night, will check in when I wake up.

    (screenshot of network config page attached)

    Link to comment

    If I knew I could hit it in a few days I would probably roll back to 6.8.3 and try, but mine's just as likely to run for 2 months without an issue, so I think I prefer to stay on 6.10-rc2 for now. One of those hard-to-prove-a-negative things. If it fails again I'll revert and/or try your suggestions.

     

    BTW: I don't really follow this ipvlan/macvlan thing but my 6.10-rc2 has a macvlan kernel module loaded (and no ipvlan)

    Link to comment

    I moved all my containers to their own dedicated NIC, which is on a port-based VLAN with no bridges. I was stable for days until I spun up Tdarr & PiHole with their own IPs... crashed again!

    Host Access is enabled as I need my containers to be able to get outside access.

     

    Link to comment
    1 hour ago, Mr_Jay84 said:

    Host Access is enabled as I need my containers to be able to get outside access

     

    This doesn't make sense to me. Any container with its own IP address is accessible without host access, provided your environment (router) is correctly set up.

     

    To get more insight into your crashes, we need diagnostics, or you need to have syslog mirroring enabled.

     

    Link to comment

    This just seems like a non-issue really; since the RC doesn't use ipvlan and instead uses macvlan, I haven't had a single crash with it. Presumably the next major Unraid version will also have macvlan, so there isn't really anything to fix?

    Link to comment
    3 hours ago, bonienl said:

     

    This doesn't make sense to me. Any container with its own IP address is accessible without host access, provided your environment (router) is correctly set up.

     

    To get more insight into your crashes, we need diagnostics, or you need to have syslog mirroring enabled.

     

    There's only a few containers with their own IP; all others are on a custom internal network. Within 24-48 hours this will cause the kernel panics and lock up the system. I've tried both ipvlan & macvlan.

    In the last two years I've changed practically all the hardware in this machine and am still suffering the same issue:


    New CPUs
    New RAM
    GPUs out
    GPUs in
    All PCIe cards out
    New USB drive
    New LSI card
    Disks attached directly to the system (they used to be in a QNAP enclosure)
    Brand new SATA cables
    Another set of brand new SATA cables
    New SAS cables

    Every time this happens I need to do a parity check, as these crashes result in sync errors.

    Diagnostics and SYSLOG attached.



     

    ultron-diagnostics-20220225-2022.zip ultron.log

    Link to comment
    4 hours ago, Mr_Jay84 said:

    Host Access is enabled as I need my containers to be able to get outside access.

    You fundamentally misunderstand Host Access.

     

    Host Access allows the HOST (unRAID) to reach the Docker containers; with Host Access off, the HOST (and only the HOST) cannot reach them. The containers can get to the outside Internet with no host access and no container-specific IP. With Host Access off, the containers can receive connections from anywhere EXCEPT the host; with Host Access on, from anywhere INCLUDING the host.
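    For background on why that setting exists at all: by design, a macvlan parent interface cannot exchange traffic with its own child interfaces, so the host is cut off from container IPs unless it gets a macvlan "shim" interface of its own. Roughly the idea behind the host-access setting, sketched below; the interface names, addresses, and route are illustrative assumptions, not Unraid's exact implementation:

```shell
# The host can't reach macvlan children via the parent NIC itself, so
# create a macvlan shim interface for the host on the same parent.
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up

# Send traffic for the container IP range through the shim
ip route add 192.168.1.128/25 dev macvlan-shim
```

This shim is also why a firewall can see the host's IP flapping between two MAC addresses when host access is enabled, as noted earlier in the thread.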

     

     

     

     

     

    When it crashes, bring the array up in maintenance mode and do a manual xfs_repair on every drive (/dev/md* from the terminal, or manually per drive from the GUI). I used to do this every time unRAID crashed, and still do when we have power interruptions, and I no longer have to fix parity. I still run a parity check as a test every so often on my 113TB 30-disk array, but it no longer ever requires sync fixes.
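    A sketch of that repair pass, assuming a typical Unraid layout where array devices appear as /dev/md1, /dev/md2, and so on (run only with the array started in maintenance mode):

```shell
# Dry-run check of each array device first (-n reports but changes nothing)
for dev in /dev/md1 /dev/md2 /dev/md3; do
  xfs_repair -n "$dev"
done

# Run the real repair on any device that reported problems
xfs_repair /dev/md1

# If xfs_repair refuses because of a dirty log, mounting and cleanly
# unmounting the filesystem usually replays it; -L (zero the log) is a
# last resort and can discard the most recent metadata changes.
```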

     

     

     

     

     

    I'm unsubscribing from this thread. I've posted my fix (still 100% stable on my hardware, where it was previously unstable regularly and repeatedly, and verified by reverting into instability and re-applying it to regain stability) and it's not helping and/or nobody is listening. If anyone needs direct access to my information, wishes to work one-on-one with me to look into issues privately, or wants to verify this is the same crash, etc., send me a private message and I'll gladly help; the chatter here is going in circles and I've said my part. I hope at least someone was helped by my presence; good luck.

    Link to comment
    4 hours ago, bonienl said:

     

    This doesn't make sense to me. Any container with its own IP address is accessible without host access, provided your environment (router) is correctly set up.

     

    To get more insight into your crashes, we need diagnostics, or you need to have syslog mirroring enabled.

     

    SWAG reverse proxy requires host access; otherwise it doesn't work.

    Also, I tried your solution; it didn't work either. Thanks anyway.

    Edited by Mr_Jay84
    Link to comment
    On 2/25/2022 at 6:35 PM, Fma965 said:

    This just seems like a non-issue really; since the RC doesn't use ipvlan and instead uses macvlan, I haven't had a single crash with it. Presumably the next major Unraid version will also have macvlan, so there isn't really anything to fix?

    RC means Release Client, right? So you are referring to the new Unraid 6.10 RC?
    Do you mean I need to update my machine to fix this? I have seen a few posts saying otherwise, saying 6.10 doesn't fix the issue?

     

    Link to comment
    On 2/28/2022 at 5:19 PM, DrDirtyDevil said:

    RC means Release Client, right? So you are referring to the new Unraid 6.10 RC?
    Do you mean I need to update my machine to fix this? I have seen a few posts saying otherwise, saying 6.10 doesn't fix the issue?

     

     

    Some people do not have this issue at all and never experience crashes.

    Some people have had crashes but changing to 6.10 and ipvlan solved their issue

    A minority of people still see crashes and it is unclear why these are happening.

     

    In short upgrading to 6.10 and changing to ipvlan solves the issue for most people.

     

    Link to comment
    On 3/2/2022 at 7:34 AM, bonienl said:

     

    Some people do not have this issue at all and never experience crashes.

    Some people have had crashes but changing to 6.10 and ipvlan solved their issue

    A minority of people still see crashes and it is unclear why these are happening.

     

    In short upgrading to 6.10 and changing to ipvlan solves the issue for most people.

     

     

    So, I just wanted to come back in here for anyone else that may run into this issue. I had to find a solution other than switching to ipvlan because my network relies on DHCP reservations based on dedicated/predictable MACs, so I actually need macvlan to function.
     

    I have seen on the forums that some people have had luck with a dedicated network adapter and swapping docker over to that. 
     

    I did that exact thing. Got an additional NIC (made sure it was a different chipset from onboard: Broadcom onboard, the new one is Realtek).
     

    Installed it, added the new eth1 with a new bridge br1 in the Network Settings page, and deselected br0 as an option for Docker in the Docker Settings panel. Then I updated each of my Docker containers with the same static IP they had before, just on br1.
     

    Now Docker bridged traffic with static IPs is processed over a dedicated NIC, and this is the longest uptime I've had in months.
     

    Version: 6.9.2
    Uptime 2 days, 18 hours, 46 minutes
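    For reference, a Docker-CLI sketch of that same dedicated-NIC arrangement (Unraid wires this up through the GUI; the subnet, gateway, network name, and container below are made-up examples):

```shell
# Custom macvlan network whose parent is the second NIC's bridge (br1)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br1 br1net

# Re-create a container on br1 with the same static IP it had on br0
docker run -d --name=swag --network=br1net --ip=192.168.1.50 linuxserver/swag
```

The point of the change is that macvlan broadcast processing for static-IP containers now happens on a NIC the host's own traffic never touches.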

    Edited by albybum
    Link to comment

    Just passed 7 days of uptime. Probably won't be reporting a month of uptime because I'm taking the array down to add some more capacity. But, a dedicated new NIC for Docker traffic definitely seems to have resolved my issue.  

    Link to comment

    I've had some success of late by turning off the bonding on eth0/eth1 and assigning everything to a dedicated NIC.

    I installed TubeArchivist this morning and assigned it its own IP. A few hours later... complete lock-up again.

     

    Mar 10 11:10:27 Ultron kernel: eth0: renamed from veth50eda3d
    Mar 10 11:10:27 Ultron kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth39e9426: link becomes ready
    Mar 10 11:10:27 Ultron kernel: docker0: port 8(veth39e9426) entered blocking state
    Mar 10 11:10:27 Ultron kernel: docker0: port 8(veth39e9426) entered forwarding state
    Mar 10 11:10:46 Ultron kernel: ------------[ cut here ]------------
    Mar 10 11:10:46 Ultron kernel: NETDEV WATCHDOG: eth1 (igb): transmit queue 0 timed out
    Mar 10 11:10:46 Ultron kernel: WARNING: CPU: 0 PID: 13 at net/sched/sch_generic.c:477 dev_watchdog+0x10c/0x166
    Mar 10 11:10:46 Ultron kernel: Modules linked in: ipvlan vhost_net vhost vhost_iotlb tap kvm_intel kvm xt_mark xt_comment xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle tun nvidia_modeset(PO) nvidia_uvm(PO) veth xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod uinput nvidia(PO) ipmi_devintf nct6775 hwmon_vid corefreqk(O) ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb i2c_algo_bit x86_pkg_temp_thermal intel_powerclamp coretemp mxm_wmi ipmi_ssif drm_vram_helper drm_ttm_helper ttm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel drm_kms_helper aesni_intel crypto_simd cryptd rapl drm intel_cstate mpt3sas intel_uncore backlight agpgart i2c_i801 ahci syscopyarea i2c_smbus sysfillrect sysimgblt raid_class corsair_psu fb_sys_fops i2c_core scsi_transport_sas libahci acpi_ipmi
    Mar 10 11:10:46 Ultron kernel: ipmi_si wmi button [last unloaded: kvm]
    Mar 10 11:10:46 Ultron kernel: CPU: 0 PID: 13 Comm: migration/0 Tainted: P S         O      5.14.15-Unraid #1
    Mar 10 11:10:46 Ultron kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./EP2C612 WS, BIOS P2.70 11/15/2019
    Mar 10 11:10:46 Ultron kernel: Stopper: multi_cpu_stop+0x0/0xca <- migrate_swap+0xed/0x10c
    Mar 10 11:10:46 Ultron kernel: RIP: 0010:dev_watchdog+0x10c/0x166
    Mar 10 11:10:46 Ultron kernel: Code: bd af 00 00 75 36 48 89 ef c6 05 4c bd af 00 01 e8 7f f6 fb ff 44 89 e1 48 89 ee 48 c7 c7 3e ac f2 81 48 89 c2 e8 c0 4f 11 00 <0f> 0b eb 0e 41 ff c4 48 05 40 01 00 00 e9 65 ff ff ff 48 8b 83 90
    Mar 10 11:10:46 Ultron kernel: RSP: 0000:ffffc90000003ec8 EFLAGS: 00010282
    Mar 10 11:10:46 Ultron kernel: RAX: 0000000000000000 RBX: ffff888107cbc438 RCX: 0000000000000027
    Mar 10 11:10:46 Ultron kernel: RDX: 0000000000000003 RSI: ffffc90000003d50 RDI: ffff88905f818570
    Mar 10 11:10:46 Ultron kernel: RBP: ffff888107cbc000 R08: ffff88a09ff6a2a8 R09: ffffffff826232c8
    Mar 10 11:10:46 Ultron kernel: R10: 0000000000000000 R11: ffff88a09ff994a7 R12: 0000000000000000
    Mar 10 11:10:46 Ultron kernel: R13: 000000010dbdee00 R14: ffffc90000003f10 R15: ffffffff816952ae
    Mar 10 11:10:46 Ultron kernel: FS:  0000000000000000(0000) GS:ffff88905f800000(0000) knlGS:0000000000000000
    Mar 10 11:10:46 Ultron kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar 10 11:10:46 Ultron kernel: CR2: 000000c000215000 CR3: 0000001d125b8002 CR4: 00000000001726f0
    Mar 10 11:10:46 Ultron kernel: Call Trace:
    Mar 10 15:11:31 Ultron nerdpack: Cleaning up packages...
    Mar 10 15:11:31 Ultron rsyslogd: action 'action-3-builtin:omfwd' resumed (module 'builtin:omfwd') [v8.2102.0 try https://www.rsyslog.com/e/2359 ]
    Mar 10 15:11:31 Ultron root: 

     

    ultron.log ultron-diagnostics-20220310-1554.zip

    Link to comment

    I used to have this crash 2-3 times per year, but starting a few weeks ago, I started getting the crash every 24-48 hours out of the blue.

     

    This might sound crazy, but I eventually realized that I had manually installed the changedetection.io Docker container the night before the crashes started. I realized this 9 days ago and immediately removed the container, and I have yet to have a crash since. Note that I had installed the container myself, I did not use Community Applications, so I'm guessing it was some kind of conflict caused by that specific container due to me likely setting something up incorrectly.

     

    It could be entirely coincidental, but I just wanted to throw that out there in case it pertains to anyone else.  

     

    I'm on 6.9.2.

    Link to comment
    On 2/28/2022 at 5:19 PM, DrDirtyDevil said:

    RC means Release Client, right? So you are referring to the new Unraid 6.10 RC?
    Do you mean I need to update my machine to fix this? I have seen a few posts saying otherwise, saying 6.10 doesn't fix the issue?

     

    I have updated to 6.10-rc2 as per the suggestion and my server now has an uptime of 6 days and counting, whereas it crashed every 24 hours before. Knock on wood...

    Link to comment

    And again this morning.....

     

    Mar 11 07:05:25 Ultron kernel: docker0: port 25(veth02cb09c) entered disabled state
    Mar 11 07:05:25 Ultron kernel: docker0: port 25(veth02cb09c) entered disabled state
    Mar 11 07:05:25 Ultron kernel: device veth02cb09c left promiscuous mode
    Mar 11 07:05:25 Ultron kernel: docker0: port 25(veth02cb09c) entered disabled state
    Mar 11 07:05:39 Ultron kernel: ------------[ cut here ]------------
    Mar 11 07:05:39 Ultron kernel: NETDEV WATCHDOG: eth1 (igb): transmit queue 0 timed out
    Mar 11 07:05:39 Ultron kernel: WARNING: CPU: 8 PID: 0 at net/sched/sch_generic.c:477 dev_watchdog+0x10c/0x166
    Mar 11 07:05:39 Ultron kernel: Modules linked in: xt_mark xt_comment ipvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap nvidia_modeset(PO) nvidia_uvm(PO) veth xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xt_addrtype iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 br_netfilter xfs nfsd auth_rpcgss oid_registry lockd grace sunrpc md_mod uinput nvidia(PO) ipmi_devintf nct6775 hwmon_vid corefreqk(O) ip6table_filter ip6_tables iptable_filter ip_tables x_tables igb i2c_algo_bit mxm_wmi x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif drm_vram_helper drm_ttm_helper ttm kvm drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel drm mpt3sas aesni_intel crypto_simd cryptd rapl intel_cstate backlight i2c_i801 agpgart i2c_smbus ahci syscopyarea raid_class sysfillrect sysimgblt intel_uncore fb_sys_fops corsair_psu i2c_core scsi_transport_sas libahci acpi_ipmi
    Mar 11 07:05:39 Ultron kernel: ipmi_si wmi button [last unloaded: i2c_algo_bit]
    Mar 11 07:05:39 Ultron kernel: CPU: 8 PID: 0 Comm: swapper/8 Tainted: P S         O      5.14.15-Unraid #1
    Mar 11 07:05:39 Ultron kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./EP2C612 WS, BIOS P2.70 11/15/2019
    Mar 11 07:05:39 Ultron kernel: RIP: 0010:dev_watchdog+0x10c/0x166
    Mar 11 07:05:39 Ultron kernel: Code: bd af 00 00 75 36 48 89 ef c6 05 4c bd af 00 01 e8 7f f6 fb ff 44 89 e1 48 89 ee 48 c7 c7 3e ac f2 81 48 89 c2 e8 c0 4f 11 00 <0f> 0b eb 0e 41 ff c4 48 05 40 01 00 00 e9 65 ff ff ff 48 8b 83 90
    Mar 11 07:05:39 Ultron kernel: RSP: 0018:ffffc900065e8ec8 EFLAGS: 00010282
    Mar 11 07:05:39 Ultron kernel: RAX: 0000000000000000 RBX: ffff88810c114438 RCX: 0000000000000027
    Mar 11 07:05:39 Ultron kernel: RDX: 0000000000000003 RSI: ffffc900065e8d50 RDI: ffff88905fa18570
    Mar 11 07:05:39 Ultron kernel: RBP: ffff88810c114000 R08: ffff88a09ff0b460 R09: ffffffff826232c8
    Mar 11 07:05:39 Ultron kernel: R10: 0000000000000001 R11: ffff88a09ff6b947 R12: 0000000000000000
    Mar 11 07:05:39 Ultron kernel: R13: 0000000103532600 R14: ffffc900065e8f10 R15: ffffffff816952ae
    Mar 11 07:05:39 Ultron kernel: FS:  0000000000000000(0000) GS:ffff88905fa00000(0000) knlGS:0000000000000000
    Mar 11 07:05:39 Ultron kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Mar 11 07:05:39 Ultron kernel: CR2: 00007ffea1d3fff8 CR3: 0000001d2c74a002 CR4: 00000000001726e0
    Mar 11 07:05:39 Ultron kernel: Call Trace:
    Mar 11 07:05:39 Ultron kernel: <IRQ>
    Mar 11 07:05:39 Ultron kernel: ? netif_tx_lock+0x83/0x83
    Mar 11 07:05:39 Ultron kernel: call_timer_fn+0x59/0xde
    Mar 11 07:05:39 Ultron kernel: __run_timers+0x140/0x17e
    Mar 11 07:05:39 Ultron kernel: ? enqueue_hrtimer+0x62/0x69
    Mar 11 07:05:39 Ultron kernel: ? recalibrate_cpu_khz+0x1/0x1
    Mar 11 07:05:39 Ultron kernel: run_timer_softirq+0x19/0x2d
    Mar 11 07:05:39 Ultron kernel: __do_softirq+0xef/0x218
    Mar 11 07:05:39 Ultron kernel: __irq_exit_rcu+0x52/0x8d
    Mar 11 07:05:39 Ultron kernel: sysvec_apic_timer_interrupt+0x66/0x7d
    Mar 11 07:05:39 Ultron kernel: </IRQ>
    Mar 11 07:05:39 Ultron kernel: asm_sysvec_apic_timer_interrupt+0x12/0x20
    Mar 11 07:05:39 Ultron kernel: RIP: 0010:arch_local_irq_enable+0x7/0x8
    Mar 11 07:05:39 Ultron kernel: Code: a2 60 1b 00 85 db 48 89 e8 79 03 48 63 c3 5b 5d 41 5c c3 9c 58 0f 1f 44 00 00 c3 fa 66 0f 1f 44 00 00 c3 fb 66 0f 1f 44 00 00 <c3> 0f 1f 44 00 00 55 49 89 d3 48 81 c7 b0 00 00 00 48 83 c6 70 53
    Mar 11 07:05:39 Ultron kernel: RSP: 0018:ffffc9000634fea0 EFLAGS: 00000246
    Mar 11 07:05:39 Ultron kernel: RAX: ffff88905fa2a980 RBX: 0000000000000002 RCX: 000000000000001f
    Mar 11 07:05:39 Ultron kernel: RDX: 0000000000000008 RSI: 0000000000000008 RDI: 0000000000000000
    Mar 11 07:05:39 Ultron kernel: RBP: ffffe8efff43f100 R08: 00000000ffffffff R09: 071c71c71c71c71c
    Mar 11 07:05:39 Ultron kernel: R10: 0000000000000020 R11: 000000000000023d R12: ffffffff82110ba0
    Mar 11 07:05:39 Ultron kernel: R13: 0000000000000002 R14: 000033018915a577 R15: 0000000000000000
    Mar 11 07:05:39 Ultron kernel: cpuidle_enter_state+0x117/0x1db
    Mar 11 07:05:39 Ultron kernel: cpuidle_enter+0x2a/0x36
    Mar 11 07:05:39 Ultron kernel: do_idle+0x1b7/0x225
    Mar 11 07:05:39 Ultron kernel: cpu_startup_entry+0x1d/0x1f
    Mar 11 07:05:39 Ultron kernel: secondary_startup_64_no_verify+0xb0/0xbb
    Mar 11 07:05:39 Ultron kernel: ---[ end trace 558bc4d050503d86 ]---
    Mar 11 07:05:39 Ultron kernel: igb 0000:09:00.0 eth1: Reset adapter
    Mar 11 07:05:41 Ultron kernel: igb 0000:09:00.0 eth1: igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
    Mar 11 10:41:11 Ultron kernel: microcode: microcode updated early to revision 0x46, date = 2021-01-27
    Mar 11 10:41:11 Ultron kernel: Linux version 5.14.15-Unraid (root@Develop) (gcc (GCC) 11.2.0, GNU ld version 2.37-slack15) #1 SMP Thu Oct 28 09:56:33 PDT 2021
    Mar 11 10:41:11 Ultron kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot nomodeset pcie_aspm=off pci=noaer isolcpus=22-23,46-47
    Mar 11 10:41:11 Ultron kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
    Mar 11 10:41:11 Ultron kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
    Mar 11 10:41:11 Ultron kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'

     

    Link to comment




