rmilyard

Supermicro server getting Trace errors


Your call traces look like mine - ip/macvlan related.  I assume you have separate IP addresses assigned to one or more dockers?  That's what causes mine.  If I remove the docker IP address assignments and let them go back to the unRAID host IP address, the call traces disappear. 

 

So far, nothing has been identified that causes this issue and most users can assign IP addresses to dockers without generating call traces.  The author of the docker networking functions has been unable to reproduce the issue.  I thought it might be related to my hardware specifically, but, you have a completely different hardware setup than I do.
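For context, giving a docker its own IP works by attaching the container to a macvlan network layered on the host interface; broadcast traffic on that network is what passes through `macvlan_process_broadcast` in the traces. A minimal sketch of the equivalent plain-Docker setup (the parent interface `br0`, the subnet, the addresses, and the container name are all assumptions for illustration; unRAID normally creates this network for you when you enable custom networks):

```shell
# Create a macvlan network attached to the host interface (names and
# subnet are examples only; adjust them for your own network).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=br0 \
  custom_br0

# Run a container with its own dedicated IP on that network.
# Broadcast traffic to this address is handled by the macvlan driver.
docker run -d --name=pihole \
  --network=custom_br0 \
  --ip=192.168.1.53 \
  pihole/pihole
```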

 

Just today I have 12 call traces in my syslog that look like this (very, very similar to yours):

Apr 10 11:40:42 MediaNAS kernel: CPU: 0 PID: 12557 Comm: kworker/0:1 Tainted: G    B   W       4.14.33-unRAID #1
Apr 10 11:40:42 MediaNAS kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./C236 WSI, BIOS P2.50 12/12/2017
Apr 10 11:40:42 MediaNAS kernel: Workqueue: events macvlan_process_broadcast [macvlan]
Apr 10 11:40:42 MediaNAS kernel: task: ffff8807d1803a00 task.stack: ffffc900083f8000
Apr 10 11:40:42 MediaNAS kernel: RIP: 0010:__nf_conntrack_confirm+0x97/0x4d6
Apr 10 11:40:42 MediaNAS kernel: RSP: 0018:ffff88086dc03d30 EFLAGS: 00010202
Apr 10 11:40:42 MediaNAS kernel: RAX: 0000000000000188 RBX: 00000000000057a6 RCX: 0000000000000001
Apr 10 11:40:42 MediaNAS kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81c09498
Apr 10 11:40:42 MediaNAS kernel: RBP: ffff8801319d0800 R08: 0000000000000101 R09: ffff88000881a400
Apr 10 11:40:42 MediaNAS kernel: R10: 0000000000000000 R11: 0000000000000000 R12: ffffffff81c8b080
Apr 10 11:40:42 MediaNAS kernel: R13: 00000000000056e9 R14: ffff880109f59cc0 R15: ffff880109f59d18
Apr 10 11:40:42 MediaNAS kernel: FS:  0000000000000000(0000) GS:ffff88086dc00000(0000) knlGS:0000000000000000
Apr 10 11:40:42 MediaNAS kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr 10 11:40:42 MediaNAS kernel: CR2: 000000000070cd34 CR3: 0000000001c0a002 CR4: 00000000003606f0
Apr 10 11:40:42 MediaNAS kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Apr 10 11:40:42 MediaNAS kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Apr 10 11:40:42 MediaNAS kernel: Call Trace:
Apr 10 11:40:42 MediaNAS kernel: <IRQ>
Apr 10 11:40:42 MediaNAS kernel: ipv4_confirm+0xac/0xb4 [nf_conntrack_ipv4]
Apr 10 11:40:42 MediaNAS kernel: nf_hook_slow+0x37/0x96
Apr 10 11:40:42 MediaNAS kernel: ip_local_deliver+0xab/0xd3
Apr 10 11:40:42 MediaNAS kernel: ? inet_del_offload+0x3e/0x3e
Apr 10 11:40:42 MediaNAS kernel: ip_rcv+0x311/0x346
Apr 10 11:40:42 MediaNAS kernel: ? ip_local_deliver_finish+0x1b8/0x1b8
Apr 10 11:40:42 MediaNAS kernel: __netif_receive_skb_core+0x6ba/0x733
Apr 10 11:40:42 MediaNAS kernel: ? enqueue_task_fair+0x94/0x42c
Apr 10 11:40:42 MediaNAS kernel: process_backlog+0x8c/0x12d
Apr 10 11:40:42 MediaNAS kernel: net_rx_action+0xfb/0x24f
Apr 10 11:40:42 MediaNAS kernel: __do_softirq+0xcd/0x1c2
Apr 10 11:40:42 MediaNAS kernel: do_softirq_own_stack+0x2a/0x40
Apr 10 11:40:42 MediaNAS kernel: </IRQ>
Apr 10 11:40:42 MediaNAS kernel: do_softirq+0x46/0x52
Apr 10 11:40:42 MediaNAS kernel: netif_rx_ni+0x21/0x35
Apr 10 11:40:42 MediaNAS kernel: macvlan_broadcast+0x117/0x14f [macvlan]
Apr 10 11:40:42 MediaNAS kernel: ? __switch_to_asm+0x24/0x60
Apr 10 11:40:42 MediaNAS kernel: macvlan_process_broadcast+0xe4/0x114 [macvlan]
Apr 10 11:40:42 MediaNAS kernel: process_one_work+0x14c/0x23f
Apr 10 11:40:42 MediaNAS kernel: ? rescuer_thread+0x258/0x258
Apr 10 11:40:42 MediaNAS kernel: worker_thread+0x1c3/0x292
Apr 10 11:40:42 MediaNAS kernel: kthread+0x111/0x119
Apr 10 11:40:42 MediaNAS kernel: ? kthread_create_on_node+0x3a/0x3a
Apr 10 11:40:42 MediaNAS kernel: ? SyS_exit_group+0xb/0xb
Apr 10 11:40:42 MediaNAS kernel: ret_from_fork+0x35/0x40
Apr 10 11:40:42 MediaNAS kernel: Code: 48 c1 eb 20 89 1c 24 e8 24 f9 ff ff 8b 54 24 04 89 df 89 c6 41 89 c5 e8 a9 fa ff ff 84 c0 75 b9 49 8b 86 80 00 00 00 a8 08 74 02 <0f> 0b 4c 89 f7 e8 03 ff ff ff 49 8b 86 80 00 00 00 0f ba e0 09 
Apr 10 11:40:42 MediaNAS kernel: ---[ end trace 6a47ffb8d14588da ]---

 


I have the built-in 1Gb NICs but am not really using them; they just have IP addresses assigned. I am using my 10Gb NIC for everything. I have a few dockers installed using the same IP as the server. I do have Pi-hole using its own IP address.

On 4/10/2018 at 8:07 PM, rmilyard said:

I do have pihole using its own IP address.

And that would be the cause of your call traces. Pi-hole is the only docker I currently have using its own IP address as well. However, this is not a Pi-hole issue; I had the same call traces when I had the UniFi and OpenVPN dockers with assigned IP addresses. As soon as I removed those assignments, the call traces went away.

 

You can test whether this is the source of the call traces by temporarily disabling Pi-hole (and resetting the DNS servers in your router) to see if the traces go away. They did in my case, but I want to use Pi-hole, so I am living with the call traces for now, as they do not appear to negatively impact the server in any way I can observe.
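If you would rather count the traces than eyeball the syslog, a small helper makes the before/after comparison concrete. A sketch, assuming the syslog lives at `/var/log/syslog` and the container is named `pihole` (both assumptions; adjust for your setup):

```shell
#!/bin/sh
# Count syslog lines that implicate macvlan in a kernel call trace.
# The default log path is an assumption; pass another file to override.
count_macvlan_traces() {
  grep -c 'macvlan_process_broadcast' "${1:-/var/log/syslog}"
}

# Typical use: note the count, stop the container, then compare later:
#   count_macvlan_traces
#   docker stop pihole       # container name is an assumption
#   sleep 3600 && count_macvlan_traces
```

If the count stops growing after the container is stopped, the dedicated-IP assignment is very likely the trigger.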

 

I am not sure you can even use Pi-hole properly without a dedicated IP address; I haven't looked into that. However, regardless of whether it is possible, I prefer it to have a dedicated address, and that is what is causing the call traces.

Edited by Hoopster


I had to disable Pi-hole this morning. Overnight it had completely locked up my unRAID server (perhaps due to the ever-increasing number of call traces being generated). Since Pi-hole was my DNS server, the whole network was inaccessible. I had to hard-reset the unRAID server since even the GUI was locked up.

 

There was a Pi-hole update last night, which I applied after rebooting everything, as the previous update was causing many issues for many users; perhaps it was the cause of my problems as well. Still, ip/macvlan call traces were being generated regularly in the syslog. I have now disabled the Pi-hole docker and reset my router's DNS back to what it was before installing Pi-hole. I am sure the call traces will go away as well. Not the solution I want, but it is the only one available to me right now.


Well, after running for 24 hours with the Pi-hole docker disabled, there are no call traces.

 

So it looks like it's some issue with Pi-hole.

Edited by rmilyard

