gurulee Posted September 10:

I am experiencing a new issue: Docker containers on my custom VLANs (br0, br0.4, br0.5, br0.6) become unresponsive. I can still ping IPs on any of the networks, but while the issue occurs I cannot reach the web services running on any of them, including the Unraid webUI on br0. No other network changes have been made, and the server has been up for 3.5 months.

The issue occurs under heavier network load, for example when the Plex mobile app is downloading content for offline viewing while another container is performing downloads, or when multiple people are watching Plex. The Uptime-Kuma webUI also becomes inaccessible during the issue; when it resolves, Uptime-Kuma shows monitor events stating: "Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?" External PRTG monitors likewise show the HTTPS sensors for Plex and other containers timing out, and the Unraid management webUI on br0 is unreachable as well.

While the issue is occurring, Unraid CPU, RAM, and network utilization remain low. It resolves itself after approximately 3-5 minutes.

Unraid version: 6.12.8

Network configuration (unchanged in over a year):
- Two physical Ethernet interfaces (eth0, eth1) with bonding and bridging enabled.
- bond0 (eth0, eth1) is connected to a Cisco switch using a LAG port configuration.
- All VLANs use parent interface bond0.
- Docker VLANs br0, br0.5, br0.6 use the upstream OPNsense firewall for their DHCP pools.
- Docker custom network type: macvlan.

I do not see any kernel call traces in my enhanced syslog plugin output. Can someone help me narrow this down, and/or recommend whether I should try switching the Docker network type to ipvlan?
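Since the classic macvlan failure mode leaves "call trace" lines in the kernel log, one way to check for it mechanically rather than by eye is a small grep helper. This is a minimal sketch; the sample log contents below are hypothetical and only illustrate what a match would look like:

```shell
#!/bin/bash
# count_macvlan_traces: count syslog lines mentioning a kernel call trace or macvlan.
# Pass the path to your syslog file, e.g. count_macvlan_traces /var/log/syslog
count_macvlan_traces() {
  grep -Eic 'call trace|macvlan' "$1"
}

# Demo on a small sample log (hypothetical contents, for illustration only):
sample=$(mktemp)
cat > "$sample" <<'EOF'
Sep 10 11:05:13 Tower kernel: Call Trace:
Sep 10 11:05:13 Tower kernel: macvlan_broadcast+0x116/0x144 [macvlan]
Sep 10 12:17:41 Tower emhttpd: spinning down /dev/sdf
EOF
count_macvlan_traces "$sample"   # prints 2
rm -f "$sample"
```

A count of zero across the affected window would support the poster's observation that no macvlan traces are being logged.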
JorgeB Posted September 10 (marked as Solution):

First thing, I would recommend updating to the latest stable release.
gurulee Posted September 10:

4 minutes ago, JorgeB said: "First thing, I would recommend updating to the latest stable release."

I have been holding off due to all the issues I have been reading about in the release notes and from other users. But if it has a specific fix for this, then I will plan for it.
gurulee Posted September 10:

1 hour ago, JorgeB said: "First thing, I would recommend updating to the latest stable release."

Okay, I have completed the upgrade from 6.12.8 to 6.12.13 successfully with no known issues. I will monitor to see whether the issue returns.
gurulee Posted September 10:

Returning to my original question: should I switch from macvlan to ipvlan even though I am not aware of any macvlan errors and my VLANs use bond0?
JorgeB Posted September 10:

The macvlan issue with bridging is no longer a problem in the latest release, so test first to see how it behaves now.
gurulee Posted September 10:

2 hours ago, JorgeB said: "The macvlan issue with bridging is no longer a problem in the latest release, so test first to see how it behaves now."

Thank you! I will report back in 48 hours.
gurulee Posted September 10:

46 minutes ago, gurulee said: "Thank you! I will report back in 48 hours."

The issue just reoccurred at around 3:30pm / 15:30. All webUI connectivity was lost to the Unraid management interface (br0) and to all my containers, but I was still able to ping the interfaces and the containers' static IPs on the custom bridge VLANs. The issue resolved itself after approximately 3 minutes, and the webUIs of Unraid and the containers became accessible again.

I looked at my enhanced syslog plugin, and these are the only entries around the time of the issue:

Sep 10 11:05:13 Tower root: Fix Common Problems: Error: Macvlan and Bridging found ** Ignored
Sep 10 11:10:40 Tower kernel: eth0: renamed from vethe9b73b7
Sep 10 11:15:06 Tower webGUI: Successful login user root from 192.168.100.90
Sep 10 11:18:48 Tower kernel: vethb78d931: renamed from eth0
Sep 10 11:19:03 Tower kernel: eth0: renamed from vethd9921c1
Sep 10 12:17:41 Tower emhttpd: spinning down /dev/sdf
Sep 10 12:45:53 Tower emhttpd: read SMART /dev/sdf
Sep 10 13:06:06 Tower emhttpd: spinning down /dev/sdd
Sep 10 13:16:28 Tower emhttpd: spinning down /dev/sdf
Sep 10 13:19:08 Tower emhttpd: read SMART /dev/sdd
Sep 10 13:54:51 Tower emhttpd: read SMART /dev/sdf
Sep 10 13:55:06 Tower emhttpd: spinning down /dev/sdd
Sep 10 15:30:46 Tower emhttpd: read SMART /dev/sdd
JorgeB Posted September 11:

Unfortunately, there is nothing relevant logged.
gurulee Posted September 11:

1 hour ago, JorgeB said: "Unfortunately, there is nothing relevant logged."

That is apparent, and agreed. Is there a way to log at a more detailed debug level? Can someone advise me on next steps to narrow down the cause? Essentially, all HTTP/HTTPS webUIs (the Unraid management interface on br0 and all containers on br0.4 and br0.5) become inaccessible intermittently for approximately 3-5 minutes, while I can still ping all of the interfaces the whole time. The issue is intermittent, with no apparent pattern.
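Since nothing reaches the syslog, one way to get more signal is a probe loop that timestamps each outage and records the key symptom here: ICMP answering while HTTP does not. The following is a sketch, not an Unraid feature; the addresses and the /boot/probe.log path in the commented loop are hypothetical placeholders for the poster's real br0/br0.x endpoints:

```shell
#!/bin/bash
# probe: record whether a host answers ICMP and whether its web service answers HTTP,
# so outage windows get timestamped even when nothing is written to the syslog.
probe() {
  local name="$1" host="$2" url="$3" icmp=FAIL http=FAIL
  # One ping with a 2-second timeout; any failure (or missing ping binary) leaves FAIL.
  if ping -c1 -W2 "$host" >/dev/null 2>&1; then icmp=OK; fi
  # -f makes curl fail on HTTP errors; --max-time bounds how long a hang can block the loop.
  if curl -fsS --max-time 5 -o /dev/null "$url" 2>/dev/null; then http=OK; fi
  printf '%s %s icmp=%s http=%s\n' "$(date '+%F %T')" "$name" "$icmp" "$http"
}

# Hypothetical monitoring loop; substitute real addresses for your br0/br0.x services:
# while sleep 30; do probe unraid 192.168.100.10 http://192.168.100.10; done >> /boot/probe.log
probe localhost 127.0.0.1 file:///etc/hostname
```

Lines showing icmp=OK together with http=FAIL would delimit each outage precisely and confirm it is layer-4-and-up only, which is useful context whether the culprit turns out to be macvlan, the bond, or a single misbehaving container.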
JorgeB Posted September 11:

The only thing I can think of is to run the server with half of the containers; if the issue persists, try the other half. If that helps, keep drilling down to see if you can find the culprit.
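The halving approach above can be scripted so each bisection round stops a deterministic half of the running containers. This is a sketch: `first_half` is a hypothetical helper, and the docker invocation is left commented out since it requires the docker CLI and the container names vary per system:

```shell
#!/bin/bash
# first_half: print the first half of its arguments, one per line.
first_half() {
  local n=$(( $# / 2 )) i
  for (( i = 1; i <= n; i++ )); do
    echo "${!i}"   # ${!i} is bash indirect expansion: positional parameter number i
  done
}

first_half plex sonarr radarr uptime-kuma   # prints plex and sonarr

# Hypothetical usage for one bisection round (requires the docker CLI):
# docker stop $(first_half $(docker ps --format '{{.Names}}' | sort))
# If the outage still occurs, restart these and stop the other half instead.
```

Sorting the names first keeps the split stable across rounds, so each test run excludes the same set of containers.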
gurulee Posted Friday at 09:47 AM:

On 9/11/2024 at 4:00 AM, JorgeB said: "The only thing I can think of is to run the server with half of the containers..."

So far so good: 48 hours and still stable. I am monitoring at the container level with Uptime-Kuma, and the HTTP services of the Unraid management interface and the containers with PRTG. 🤞🙏🙌
gurulee Posted Monday at 09:17 PM:

Continued stability and no recurrence of the issue since upgrading to 6.12.13 five days ago.