• [6.9.0-RC2] Call traces related to netfilter, and kernel panics


    Kaldek
    • Closed Urgent

    These keep cropping up on RC2.  I've had a couple of call traces related to netfilter, as well as some kernel panics.  Two examples are posted below, along with my diagnostics file; note, however, that at the time of this diag file I had stopped my Shinobi Pro Docker container, which I could swear is the catalyst.  I am leaving Shinobi disabled to see if unRAID stays up longer.

     

    I mark this as "Urgent" only because system stability is really poor at the moment.  

     

    Quote

    Dec 18 08:37:52 MU-TH-UR kernel: BUG: unable to handle page fault for address: fffff8ee2e3a9888
    Dec 18 08:37:52 MU-TH-UR kernel: #PF: supervisor read access in kernel mode
    Dec 18 08:37:52 MU-TH-UR kernel: #PF: error_code(0x0000) - not-present page
    Dec 18 08:37:52 MU-TH-UR kernel: PGD 0 P4D 0 
    Dec 18 08:37:52 MU-TH-UR kernel: Oops: 0000 [#1] SMP PTI
    Dec 18 08:37:52 MU-TH-UR kernel: CPU: 10 PID: 15856 Comm: node Tainted: P        W  O      5.9.13-Unraid #1
    Dec 18 08:37:52 MU-TH-UR kernel: Hardware name: EVGA INTERNATIONAL CO.,LTD Default string/131-HE-E095, BIOS 2.08 06/28/2019
    Dec 18 08:37:52 MU-TH-UR kernel: RIP: 0010:compound_head+0x0/0x11
    Dec 18 08:37:52 MU-TH-UR kernel: Code: 89 06 31 c0 c3 48 8b 17 b8 01 00 00 00 48 f7 c2 9f ff ff ff 74 13 48 b8 98 0f 00 00 00 00 f0 7f 48 85 c2 0f 95 c0 0f b6 c0 c3 <48> 8b 57 08 48 89 f8 f6 c2 01 74 04 48 8d 42 ff c3 e8 ea ff ff ff
    Dec 18 08:37:52 MU-TH-UR kernel: RSP: 0018:ffffc90001aa7ca0 EFLAGS: 00010282
    Dec 18 08:37:52 MU-TH-UR kernel: RAX: 0000000000000001 RBX: 000017a88323c000 RCX: 0000000000000034
    Dec 18 08:37:52 MU-TH-UR kernel: RDX: 0000003bb8b8ea62 RSI: ffffea001304ef00 RDI: fffff8ee2e3a9880
    Dec 18 08:37:52 MU-TH-UR kernel: RBP: ffffc90001aa7dd8 R08: fffff8ee2e3a9880 R09: 7c00003bb8b8ea62
    Dec 18 08:37:52 MU-TH-UR kernel: R10: ffffffffffffffff R11: 000000000000000c R12: ffff888e8e2b3ad8
    Dec 18 08:37:52 MU-TH-UR kernel: R13: 80000004c13bc067 R14: ffff888f6ec25e00 R15: ffff88810a500510
    Dec 18 08:37:52 MU-TH-UR kernel: FS:  00001514919e5b20(0000) GS:ffff88907fc80000(0000) knlGS:0000000000000000
    Dec 18 08:37:52 MU-TH-UR kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Dec 18 08:37:52 MU-TH-UR kernel: CR2: fffff8ee2e3a9888 CR3: 0000000fa024a002 CR4: 00000000001706e0
    Dec 18 08:37:52 MU-TH-UR kernel: Call Trace:
    Dec 18 08:37:52 MU-TH-UR kernel: migration_entry_to_page+0x19/0x2e
    Dec 18 08:37:52 MU-TH-UR kernel: unmap_page_range+0x444/0x65c
    Dec 18 08:37:52 MU-TH-UR kernel: unmap_vmas+0x6c/0x9a
    Dec 18 08:37:52 MU-TH-UR kernel: unmap_region+0xad/0x105
    Dec 18 08:37:52 MU-TH-UR kernel: __do_munmap+0x278/0x2f1
    Dec 18 08:37:52 MU-TH-UR kernel: __vm_munmap+0x6d/0xad
    Dec 18 08:37:52 MU-TH-UR kernel: __x64_sys_munmap+0x12/0x15
    Dec 18 08:37:52 MU-TH-UR kernel: do_syscall_64+0x5d/0x6a
    Dec 18 08:37:52 MU-TH-UR kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
    Dec 18 08:37:52 MU-TH-UR kernel: RIP: 0033:0x1514923d3d59
    Dec 18 08:37:52 MU-TH-UR kernel: Code: 89 c7 e8 7d 97 fe ff 5a c3 55 48 89 f5 48 83 ec 10 48 89 7c 24 08 e8 34 ea 01 00 48 8b 7c 24 08 b8 0b 00 00 00 48 89 ee 0f 05 <48> 89 c7 e8 52 97 fe ff 48 83 c4 10 5d c3 31 c0 83 fa 04 74 0c 48
    Dec 18 08:37:52 MU-TH-UR kernel: RSP: 002b:00001514919e57a0 EFLAGS: 00000246 ORIG_RAX: 000000000000000b
    Dec 18 08:37:52 MU-TH-UR kernel: RAX: ffffffffffffffda RBX: 000055dd3631f360 RCX: 00001514923d3d59
    Dec 18 08:37:52 MU-TH-UR kernel: RDX: 0000000000000000 RSI: 0000000000040000 RDI: 000017a883200000
    Dec 18 08:37:52 MU-TH-UR kernel: RBP: 0000000000040000 R08: 0000000000000468 R09: 0000000000000080
    Dec 18 08:37:52 MU-TH-UR kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 000017a883200000
    Dec 18 08:37:52 MU-TH-UR kernel: R13: 0000000000040000 R14: 0000000000000000 R15: 000017a883200000
    Dec 18 08:37:52 MU-TH-UR kernel: Modules linked in: macvlan veth xt_CHECKSUM ipt_REJECT ip6table_mangle ip6table_nat iptable_mangle ip6table_filter ip6_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_MASQUERADE iptable_filter iptable_nat nf_nat ip_tables xfs md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper drm backlight agpgart syscopyarea sysfillrect nvidia_uvm(PO) sysimgblt fb_sys_fops nvidia(PO) f71882fg ixgbe mdio x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd mxm_wmi glue_helper rapl r8169 intel_cstate i2c_i801 i2c_smbus input_leds i2c_core led_class ahci e1000e intel_uncore libahci realtek wmi button [last unloaded: mdio]
    Dec 18 08:37:52 MU-TH-UR kernel: CR2: fffff8ee2e3a9888
    Dec 18 08:37:52 MU-TH-UR kernel: ---[ end trace cda26355254ea908 ]---
    Dec 18 08:37:52 MU-TH-UR kernel: RIP: 0010:compound_head+0x0/0x11
    Dec 18 08:37:52 MU-TH-UR kernel: Code: 89 06 31 c0 c3 48 8b 17 b8 01 00 00 00 48 f7 c2 9f ff ff ff 74 13 48 b8 98 0f 00 00 00 00 f0 7f 48 85 c2 0f 95 c0 0f b6 c0 c3 <48> 8b 57 08 48 89 f8 f6 c2 01 74 04 48 8d 42 ff c3 e8 ea ff ff ff
    Dec 18 08:37:52 MU-TH-UR kernel: RSP: 0018:ffffc90001aa7ca0 EFLAGS: 00010282
    Dec 18 08:37:52 MU-TH-UR kernel: RAX: 0000000000000001 RBX: 000017a88323c000 RCX: 0000000000000034
    Dec 18 08:37:52 MU-TH-UR kernel: RDX: 0000003bb8b8ea62 RSI: ffffea001304ef00 RDI: fffff8ee2e3a9880
    Dec 18 08:37:52 MU-TH-UR kernel: RBP: ffffc90001aa7dd8 R08: fffff8ee2e3a9880 R09: 7c00003bb8b8ea62
    Dec 18 08:37:52 MU-TH-UR kernel: R10: ffffffffffffffff R11: 000000000000000c R12: ffff888e8e2b3ad8
    Dec 18 08:37:52 MU-TH-UR kernel: R13: 80000004c13bc067 R14: ffff888f6ec25e00 R15: ffff88810a500510
    Dec 18 08:37:52 MU-TH-UR kernel: FS:  00001514919e5b20(0000) GS:ffff88907fc80000(0000) knlGS:0000000000000000
    Dec 18 08:37:52 MU-TH-UR kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Dec 18 08:37:52 MU-TH-UR kernel: CR2: fffff8ee2e3a9888 CR3: 0000000fa024a002 CR4: 00000000001706e0
    Dec 18 10:16:15 MU-TH-UR emhttpd: read SMART /dev/sdc
    Dec 18 11:16:20 MU-TH-UR emhttpd: spinning down /dev/sdc
    Dec 18 16:56:42 MU-TH-UR webGUI: Successful login user root from 192.168.0.74
    Dec 18 16:58:25 MU-TH-UR webGUI: Successful login user root from 192.168.0.112
    Dec 18 16:58:51 MU-TH-UR emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog 
    Dec 18 16:59:25 MU-TH-UR nginx: 2020/12/18 16:59:25 [error] 13675#13675: *657714 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.0.74, server: , request: "GET /Dashboard HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "192.168.0.198", referrer: "http://192.168.0.198/Main"
    Dec 18 17:00:28 MU-TH-UR nginx: 2020/12/18 17:00:28 [error] 13675#13675: *657849 upstream timed out (110: Connection timed out) while reading upstream, client: 192.168.0.112, server: , request: "GET /Dashboard HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "unraid.melned.local", referrer: "http://unraid.melned.local/Main"
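
    For anyone checking whether their own logs contain the same signatures as the traces pasted in this report, here is a small sketch. It builds a two-line sample file (the path `/tmp/sample.log` is just an example) and greps for the three patterns seen in this thread; against a real server you would point the same grep at your saved syslog instead.

    ```shell
    # Build a tiny sample log from the traces above (illustrative path).
    cat > /tmp/sample.log <<'EOF'
    [138425.754951] WARNING: CPU: 7 PID: 6857 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x99/0x1e1
    Dec 18 10:16:15 MU-TH-UR emhttpd: read SMART /dev/sdc
    EOF

    # Count lines matching any of the signatures from this report.
    # For this sample, only the first line matches, so it prints 1.
    grep -cE 'nf_conntrack_core\.c:[0-9]+ __nf_conntrack_confirm|macvlan_process_broadcast|BUG: unable to handle page fault' /tmp/sample.log
    ```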


     

    Quote

    [138425.754951] WARNING: CPU: 7 PID: 6857 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x99/0x1e1
    [138425.754953] Modules linked in: macvlan xt_CHECKSUM ipt_REJECT veth ip6table_mangle ip6table_nat iptable_mangle ip6table_filter ip6_tables vhost_net tun vhost vhost_iotlb tap xt_nat xt_MASQUERADE iptable_filter iptable_nat nf_nat ip_tables xfs md_mod nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper drm backlight agpgart syscopyarea sysfillrect sysimgblt nvidia_uvm(PO) fb_sys_fops nvidia(PO) f71882fg ixgbe mdio x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd mxm_wmi e1000e i2c_i801 glue_helper i2c_smbus r8169 i2c_core ahci input_leds rapl led_class intel_cstate intel_uncore libahci realtek wmi button [last unloaded: mdio]
    [138425.755039] CPU: 7 PID: 6857 Comm: kworker/7:0 Tainted: P           O      5.10.1-Unraid #1
    [138425.755041] Hardware name: EVGA INTERNATIONAL CO.,LTD Default string/131-HE-E095, BIOS 2.08 06/28/2019
    [138425.755049] Workqueue: events macvlan_process_broadcast [macvlan]
    [138425.755054] RIP: 0010:__nf_conntrack_confirm+0x99/0x1e1
    [138425.755058] Code: e4 e3 ff ff 8b 54 24 14 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 54 e1 ff ff 84 c0 75 b8 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 89 de ff ff e8 af e0 ff ff e9 1f 01
    [138425.755060] RSP: 0018:ffffc9000026cdd8 EFLAGS: 00010202
    [138425.755064] RAX: 0000000000000188 RBX: 000000000000c4b9 RCX: 000000002c44202b
    [138425.755066] RDX: 0000000000000000 RSI: 0000000000000022 RDI: ffffffff82009b24
    [138425.755068] RBP: ffff88810a5b0f00 R08: 000000001d06d52d R09: ffff888178f35580
    [138425.755070] R10: 0000000000000000 R11: ffff8888c48d2700 R12: 0000000000006c22
    [138425.755072] R13: ffffffff8210da40 R14: 000000000000c4b9 R15: ffff88810a5b0f0c
    [138425.755075] FS:  0000000000000000(0000) GS:ffff88907fbc0000(0000) knlGS:0000000000000000
    [138425.755078] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [138425.755080] CR2: 00000000014ffdf8 CR3: 000000035b51c006 CR4: 00000000001726e0
    [138425.755082] Call Trace:
    [138425.755085]  <IRQ>
    [138425.755091]  nf_conntrack_confirm+0x2f/0x36
    [138425.755096]  nf_hook_slow+0x39/0x8e
    [138425.755104]  nf_hook.constprop.0+0xb1/0xd8
    [138425.755109]  ? ip_protocol_deliver_rcu+0xfe/0xfe
    [138425.755113]  ip_local_deliver+0x49/0x75
    [138425.755119]  __netif_receive_skb_one_core+0x74/0x95
    [138425.755124]  process_backlog+0xa3/0x13b
    [138425.755129]  net_rx_action+0xf4/0x29d
    [138425.755134]  __do_softirq+0xc4/0x1c2
    [138425.755141]  asm_call_irq_on_stack+0x12/0x20
    [138425.755143]  </IRQ>
    [138425.755148]  do_softirq_own_stack+0x2c/0x39
    [138425.755155]  do_softirq+0x3a/0x44
    [138425.755159]  netif_rx_ni+0x1c/0x22
    [138425.755163]  macvlan_broadcast+0x10e/0x13c [macvlan]
    [138425.755169]  macvlan_process_broadcast+0xf8/0x143 [macvlan]
    [138425.755175]  process_one_work+0x13c/0x1d5
    [138425.755179]  worker_thread+0x18b/0x22f
    [138425.755182]  ? process_scheduled_works+0x27/0x27
    [138425.755186]  kthread+0xe5/0xea
    [138425.755189]  ? kthread_unpark+0x52/0x52
    [138425.755194]  ret_from_fork+0x22/0x30
    [138425.755198] ---[ end trace 597d09e6b6cc2d05 ]---

     

    mu-th-ur-diagnostics-20201222-1354.zip




    User Feedback

    Recommended Comments

    I will add that Shinobi is running on a separate VLAN which is a subinterface of br0 (br0.12) and that br0 is a bridge on an Intel 10Gb/s NIC using the ixgbe module.
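
    For readers unfamiliar with that layout, the setup described above corresponds roughly to the following sketch. The interface names and VLAN ID come from the comment; the subnet, gateway, and network name are assumptions for illustration, and the exact commands Unraid runs under the hood may differ.

    ```shell
    # VLAN 12 as a subinterface of the br0 bridge (ixgbe-backed NIC underneath).
    ip link add link br0 name br0.12 type vlan id 12
    ip link set br0.12 up

    # A Docker macvlan network parented on the VLAN subinterface.
    # Subnet/gateway/name are illustrative, not from the original report.
    docker network create -d macvlan \
      --subnet=192.168.12.0/24 --gateway=192.168.12.1 \
      -o parent=br0.12 shinobi-vlan
    ```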

    Edited by Kaldek

    I am also using multiple Docker containers with custom static IPs assigned to them, and I am facing crashes of my Unraid server.

     

    None of the three proposed solutions in the linked thread above are viable for me. I cannot put my VMs and containers on a separate VLAN because my router and switch do not support this. I also cannot put my VMs and containers on a separate NIC, because I only have one network cable running to my server and I don't want a switch in front of it. And I need separate IPs for some containers, so I can prioritize traffic between them in my router.

     

    The interesting part for me is that I have been running my containers with static IPs for almost three years without any problems on all stable versions of Unraid, including the latest stable 6.8.3. These system lockups first happened to me after upgrading to 6.9.0-beta35 (the first beta I tested), so I would guess that this issue was introduced with the beta versions, and it should be resolved sooner rather than later.

    16 hours ago, JorgeB said:

    Macvlan call traces are usually the result of having dockers with a custom IP address, more info below:

     

    OK I've changed Shinobi to use the normal bridge, no VLANs.  Traffic between Shinobi and the cameras is now being routed.  Let's see how stable it is.
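
    For anyone who needs to keep dedicated container IPs rather than falling back to the shared bridge, one commonly suggested alternative is Docker's ipvlan driver, which sidesteps the macvlan broadcast path that appears in these call traces. Below is a sketch on plain Docker; the network name, subnet, IP, and image are examples, not from this report, and whether this maps onto Unraid's GUI options depends on your Unraid version.

    ```shell
    # ipvlan network on the same parent interface a macvlan network would use.
    docker network create -d ipvlan \
      --subnet=192.168.0.0/24 --gateway=192.168.0.1 \
      -o parent=br0 custom-ipvlan

    # Run a container with a static IP on that network (image name illustrative).
    docker run -d --network custom-ipvlan --ip 192.168.0.200 shinobisystems/shinobi
    ```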


    This issue is not occurring now that I have moved Shinobi Pro to the normal bridge, sharing the same IP address as the unRAID host.

     

    I can only ask the unRAID team to work on and solve this issue since it appears to have been around since 6.5.

    Edited by Kaldek

    For me it's still relevant, as I don't have a VLAN active and it all worked fine before RC1...

    Edited by DarkMan83

    Just got the full one:

     

    [ 2743.152154] kvm: already loaded the other module
    [ 6110.534616] ------------[ cut here ]------------
    [ 6110.534628] WARNING: CPU: 8 PID: 37032 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x99/0x1e1

    [ 6110.534629] Modules linked in: ccp macvlan nfsv3 nfs nfs_ssc veth xt_nat iptable_filter xfs nfsd lockd grace sunrpc md_mod tun nvidia_drm(PO) nvidia_modeset(PO) drm_kms_helper drm backlight agpgart syscopyarea sysfillrect sysimgblt nvidia_uvm(PO) fb_sys_fops nvidia(PO) iptable_nat xt_MASQUERADE nf_nat ip_tables wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha bonding igb i2c_algo_bit sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper rapl mpt3sas ipmi_ssif ahci intel_cstate input_leds acpi_power_meter raid_class i2c_core led_class wmi scsi_transport_sas megaraid_sas intel_uncore libahci button acpi_pad ipmi_si [last unloaded: i2c_algo_bit]
    [ 6110.534757] CPU: 8 PID: 37032 Comm: kworker/8:1 Tainted: P           O      5.10.1-Unraid #1
    [ 6110.534760] Hardware name: Dell Inc. PowerEdge T620/0658N7, BIOS 2.8.0 06/26/2019
    [ 6110.534780] Workqueue: events macvlan_process_broadcast [macvlan]
    [ 6110.534784] RIP: 0010:__nf_conntrack_confirm+0x99/0x1e1
    [ 6110.534787] Code: e4 e3 ff ff 8b 54 24 14 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 54 e1 ff ff 84 c0 75 b8 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 89 de ff ff e8 af e0 ff ff e9 1f 01
    [ 6110.534789] RSP: 0018:ffffc900065a8dd8 EFLAGS: 00010202
    [ 6110.534792] RAX: 0000000000000188 RBX: 000000000000107c RCX: 00000000cbc5d8ed
    [ 6110.534793] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff82009cd4
    [ 6110.534795] RBP: ffff88812e579e00 R08: 00000000bfa88d6d R09: ffff88909caa38a0
    [ 6110.534797] R10: 0000000000000098 R11: ffff888120d81c00 R12: 0000000000001925
    [ 6110.534799] R13: ffffffff8210da40 R14: 000000000000107c R15: ffff88812e579e0c
    [ 6110.534802] FS:  0000000000000000(0000) GS:ffff888fff900000(0000) knlGS:0000000000000000
    [ 6110.534804] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 6110.534806] CR2: 0000000000f03078 CR3: 000000000200c001 CR4: 00000000000606e0
    [ 6110.534808] Call Trace:
    [ 6110.534811]  <IRQ>
    [ 6110.534815]  nf_conntrack_confirm+0x2f/0x36
    [ 6110.534819]  nf_hook_slow+0x39/0x8e
    [ 6110.534824]  nf_hook.constprop.0+0xb1/0xd8
    [ 6110.534842]  ? ip_protocol_deliver_rcu+0xfe/0xfe
    [ 6110.534846]  ip_local_deliver+0x49/0x75
    [ 6110.534851]  __netif_receive_skb_one_core+0x74/0x95
    [ 6110.534855]  process_backlog+0xa3/0x13b
    [ 6110.534860]  net_rx_action+0xf4/0x29d
    [ 6110.534865]  __do_softirq+0xc4/0x1c2
    [ 6110.534872]  asm_call_irq_on_stack+0xf/0x20
    [ 6110.534874]  </IRQ>
    [ 6110.534879]  do_softirq_own_stack+0x2c/0x39
    [ 6110.534885]  do_softirq+0x3a/0x44
    [ 6110.534889]  netif_rx_ni+0x1c/0x22
    [ 6110.534894]  macvlan_broadcast+0x10e/0x13c [macvlan]
    [ 6110.534899]  macvlan_process_broadcast+0xf8/0x143 [macvlan]
    [ 6110.534904]  process_one_work+0x13c/0x1d5
    [ 6110.534908]  worker_thread+0x18b/0x22f
    [ 6110.534911]  ? process_scheduled_works+0x27/0x27
    [ 6110.534915]  kthread+0xe5/0xea
    [ 6110.534918]  ? kthread_unpark+0x52/0x52
    [ 6110.534922]  ret_from_fork+0x1f/0x30
    [ 6110.534927] ---[ end trace aa399fc3a4d4c0e8 ]---
    root@Tower:~#
     

    Edited by ephigenie
    1 hour ago, ephigenie said:

    Just got the full one : 

     

    On 12/22/2020 at 7:48 AM, JorgeB said:

    Macvlan call traces are usually the result of having dockers with a custom IP address, more info below:

     

     


    Thank you for that info, I just shut down the one Docker container that has a fixed IP. All other highly active containers are on the server IP.

    I will try with a separate VLAN soon. 


    (Screenshot attached.) Just got another kernel panic with full system lock.
    This is in nf_nat_setup, which does not have much to do with the macvlan issue - or does it?
     


    Is there anything I can do - can I build a newer kernel and install it? Is there a repo from Unraid somewhere? I would like to contribute in order to solve this, since it is quite annoying...

    And since it seems to be in the NAT setup path, I think it's not related only to macvlan. I had considered NAT to be stable since 2.0.36... surely not unstable with 5.x?

     

    I will try now with all containers off except Plex. It's currently crashing every 4-6 hours.

    Edited by ephigenie

    My server is also crashing, with the same nf_conntrack_core.c macvlan errors. It will usually handle a couple of them, then kernel panic.

     

    I will try disabling all custom IPs, but is it really expected that we cannot set a static IP or the server will crash every 4-6 hours? This bug continues for me in 6.9-RC2.

     

    I'm not sure how having the containers on a separate VLAN will help, since I set static IPs precisely because the containers need to be accessible from my normal network (UniFi controller, Pi-hole, etc.).

    Edited by anethema

    In the meantime I updated to 6.9.2 but have the same issue. I disabled all Docker containers and just left a few running, in order to not trigger this. Is there anything else recommended to check?


    Yes, I can confirm I still have the same issue. In fact it really does seem to be related to a dedicated IP I had set before. I am still monitoring, but I have not re-enabled containers with dedicated IPs, and so far it's working.


    Same here. Crashing at random intervals, sometimes multiple times a day, and always before a complete parity check can finish. This all started after 6.9.1 for me.

     

    UPDATE: A new BIOS update was available, and so far it seems stable (4 hours).

    Edited by Gabriel_B
    On 4/23/2021 at 10:26 PM, Gabriel_B said:

    Same here. Crashing at random intervals, sometimes multiple times a day, and always before a complete parity check can finish. This all started after 6.9.1 for me.

     

    UPDATE: A new BIOS update was available, and so far it seems stable (4 hours).

    Did it stay stable? Because I'm facing the same issue.


    Add another: I'm having the same thing as described above, except it's not crashing the server.


    In my case, it was caused by my 10GbE Aquantia NIC. I updated its firmware to an "unofficial" one and it's been stable for days now. It used to crash once or twice per day before, with a similar call trace to everyone else's.





  • Status Definitions

     

    Open = Under consideration.

     

    Solved = The issue has been resolved.

     

    Solved version = The issue has been resolved in the indicated release version.

     

    Closed = Feedback or opinion better posted on our forum for discussion. Also for reports we cannot reproduce or need more information. In this case just add a comment and we will review it again.

     

    Retest = Please retest in latest release.


    Priority Definitions

     

    Minor = Something not working correctly.

     

    Urgent = Server crash, data loss, or other showstopper.

     

    Annoyance = Doesn't affect functionality but should be fixed.

     

    Other = Announcement or other non-issue.