denisvic Posted August 25, 2021

Hello to all. For a few weeks I have been experiencing a random problem on my Unraid server, even though it had been running for months without any issues: I regularly find that the server is no longer reachable and does not respond to ping, and I have to restart it to regain access. Since the syslog is erased at each startup, I can't make a diagnosis. It is an HP ProLiant MicroServer Gen8 with 4 disks. Has anyone experienced this kind of problem?

Translated with www.DeepL.com/Translator (free version)

server-diagnostics-20210825-1017.zip
JorgeB Posted August 25, 2021

Enable syslog mirror to flash, then post that log after a crash.
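For reference, a minimal sketch of pulling the mirrored log back off the flash drive after the next crash. The /boot/logs/syslog path is an assumption — check Settings > Syslog Server for the actual mirror target on your version:

```shell
# show_last_trace: print any kernel call trace found in a syslog file.
# The default path /boot/logs/syslog is an ASSUMED mirror target -- adjust it
# to match whatever Settings > Syslog Server shows on your server.
show_last_trace() {
  # Kernel traces start at a "cut here" marker; grab it plus the lines after it.
  grep -A 40 'cut here' "${1:-/boot/logs/syslog}"
}
```

After rebooting from a crash, run `show_last_trace` on the console and paste the output here.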
denisvic Posted August 25, 2021 (Author)

I just had a new crash. Here's what I find on the flash:

Aug 21 13:05:09 Server kernel: ------------[ cut here ]------------
Aug 21 13:05:09 Server kernel: WARNING: CPU: 2 PID: 125 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Aug 21 13:05:09 Server kernel: Modules linked in: xt_mark macvlan xt_comment xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs md_mod iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding tg3 ipmi_ssif i2c_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper nvme acpi_ipmi ahci rapl nvme_core ipmi_si libahci acpi_power_meter intel_cstate intel_uncore thermal button ie31200_edac [last unloaded: tg3]
Aug 21 13:05:09 Server kernel: CPU: 2 PID: 125 Comm: kworker/2:1 Tainted: G I 5.10.28-Unraid #1
Aug 21 13:05:09 Server kernel: Hardware name: HP ProLiant MicroServer Gen8, BIOS J06 05/21/2018
Aug 21 13:05:09 Server kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Aug 21 13:05:09 Server kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
Aug 21 13:05:09 Server kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
Aug 21 13:05:09 Server kernel: RSP: 0018:ffffc900002c4dd8 EFLAGS: 00010202
Aug 21 13:05:09 Server kernel: RAX: 0000000000000188 RBX: 000000000000a9e1 RCX: 0000000028ac8310
Aug 21 13:05:09 Server kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa02d0c08
Aug 21 13:05:09 Server kernel: RBP: ffff8883ffd09040 R08: 000000004dc1cb9c R09: 0000000000000000
Aug 21 13:05:09 Server kernel: R10: 0000000000000098 R11: ffff88813ec75000 R12: 0000000000000e82
Aug 21 13:05:09 Server kernel: R13: ffffffff8210b440 R14: 000000000000a9e1 R15: 0000000000000000
Aug 21 13:05:09 Server kernel: FS: 0000000000000000(0000) GS:ffff888436e80000(0000) knlGS:0000000000000000
Aug 21 13:05:09 Server kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug 21 13:05:09 Server kernel: CR2: 0000145fc6720718 CR3: 000000000200a001 CR4: 00000000000606e0
Aug 21 13:05:09 Server kernel: Call Trace:
Aug 21 13:05:09 Server kernel: <IRQ>
Aug 21 13:05:09 Server kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
Aug 21 13:05:09 Server kernel: nf_hook_slow+0x39/0x8e
Aug 21 13:05:09 Server kernel: nf_hook.constprop.0+0xb1/0xd8
Aug 21 13:05:09 Server kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
Aug 21 13:05:09 Server kernel: ip_local_deliver+0x49/0x75
Aug 21 13:05:09 Server kernel: __netif_receive_skb_one_core+0x74/0x95
Aug 21 13:05:09 Server kernel: process_backlog+0xa3/0x13b
Aug 21 13:05:09 Server kernel: net_rx_action+0xf4/0x29d
Aug 21 13:05:09 Server kernel: __do_softirq+0xc4/0x1c2
Aug 21 13:05:09 Server kernel: asm_call_irq_on_stack+0x12/0x20
Aug 21 13:05:09 Server kernel: </IRQ>
Aug 21 13:05:09 Server kernel: do_softirq_own_stack+0x2c/0x39
Aug 21 13:05:09 Server kernel: do_softirq+0x3a/0x44
Aug 21 13:05:09 Server kernel: netif_rx_ni+0x1c/0x22
Aug 21 13:05:09 Server kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
Aug 21 13:05:09 Server kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
Aug 21 13:05:09 Server kernel: process_one_work+0x13c/0x1d5
Aug 21 13:05:09 Server kernel: worker_thread+0x18b/0x22f
Aug 21 13:05:09 Server kernel: ? process_scheduled_works+0x27/0x27
Aug 21 13:05:09 Server kernel: kthread+0xe5/0xea
Aug 21 13:05:09 Server kernel: ? __kthread_bind_mask+0x57/0x57
Aug 21 13:05:09 Server kernel: ret_from_fork+0x22/0x30
Aug 21 13:05:09 Server kernel: ---[ end trace cebed80e37e250d0 ]---
JorgeB Posted August 25, 2021

Macvlan call traces are usually the result of having Docker containers with a custom IP address. Upgrading to v6.10 might fix it; there is more info below.

https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/

See also here:

https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
denisvic Posted August 26, 2021 (Author)

It worked for months; I don't understand why the problem suddenly appeared. I have disabled the containers with a custom IP, and indeed it is better.
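As a quick sanity check, something like this (a sketch, assuming the standard Docker CLI available on the Unraid console) lists each running container with its network mode, so anything still on a custom/macvlan network such as br0 stands out:

```shell
# list_container_nets: print each running container's name and network mode.
# On Unraid, a mode named after an interface (e.g. "br0") usually indicates a
# custom-IP macvlan network; "bridge" and "host" are the stock modes.
list_container_nets() {
  for id in $(docker ps -q); do
    docker inspect -f '{{.Name}} -> {{.HostConfig.NetworkMode}}' "$id"
  done
}
```

Run `list_container_nets` on the server console while the array is up to confirm nothing was missed.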
denisvic Posted August 27, 2021 (Author)

I've disabled all custom IP containers, but I still see some weird logs:

Aug 27 04:11:42 Server kernel: eth0: renamed from vethafe0fd2
Aug 27 04:11:42 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0059477: link becomes ready
Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 5(veth0059477) entered blocking state
Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 5(veth0059477) entered forwarding state
Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered blocking state
Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered disabled state
Aug 27 04:11:42 Server kernel: device vethbcda03c entered promiscuous mode
Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered blocking state
Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered forwarding state
Aug 27 04:11:42 Server kernel: eth0: renamed from veth8efdec0
Aug 27 04:11:42 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethbcda03c: link becomes ready
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered blocking state
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered disabled state
Aug 27 04:11:43 Server kernel: device veth7dc791f entered promiscuous mode
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered blocking state
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered forwarding state
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered disabled state
Aug 27 04:11:43 Server kernel: eth0: renamed from veth77fdfbd
Aug 27 04:11:43 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7dc791f: link becomes ready
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered blocking state
Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered forwarding state
Aug 27 04:11:43 Server CA Backup/Restore: #######################
Aug 27 04:11:43 Server CA Backup/Restore: appData Backup complete
Aug 27 04:11:43 Server CA Backup/Restore: #######################
JorgeB Posted August 27, 2021

Quoting denisvic: "still see some weird logs"

Those are all normal.