sactoking Posted June 8, 2019

I'm having issues with my Unraid network connection that started recently. I replaced most of the non-drive hardware due to some failures and the issue seems to have started after the replacements were made. While transferring files through the Windows view of my shares I notice that the transfer speed often drops to 0. While running the Mover I was reviewing the logs and noticed the following:

Jun 8 10:41:44 Tower kernel: veth3b0a86f: renamed from eth0
Jun 8 10:41:44 Tower kernel: docker0: port 1(veth7bf85ae) entered disabled state
Jun 8 10:41:46 Tower kernel: docker0: port 1(veth7bf85ae) entered disabled state
Jun 8 10:41:46 Tower avahi-daemon[2464]: Interface veth7bf85ae.IPv6 no longer relevant for mDNS.
Jun 8 10:41:46 Tower avahi-daemon[2464]: Leaving mDNS multicast group on interface veth7bf85ae.IPv6 with address fe80::80a5:37ff:fec9:96f1.
Jun 8 10:41:46 Tower kernel: device veth7bf85ae left promiscuous mode
Jun 8 10:41:46 Tower kernel: docker0: port 1(veth7bf85ae) entered disabled state
Jun 8 10:41:46 Tower avahi-daemon[2464]: Withdrawing address record for fe80::80a5:37ff:fec9:96f1 on veth7bf85ae.
Jun 8 10:41:55 Tower kernel: docker0: port 1(vethc23cdc5) entered blocking state
Jun 8 10:41:55 Tower kernel: docker0: port 1(vethc23cdc5) entered disabled state
Jun 8 10:41:55 Tower kernel: device vethc23cdc5 entered promiscuous mode
Jun 8 10:41:55 Tower kernel: IPv6: ADDRCONF(NETDEV_UP): vethc23cdc5: link is not ready
Jun 8 10:42:13 Tower kernel: eth0: renamed from veth4e1344b
Jun 8 10:42:13 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc23cdc5: link becomes ready
Jun 8 10:42:13 Tower kernel: docker0: port 1(vethc23cdc5) entered blocking state
Jun 8 10:42:13 Tower kernel: docker0: port 1(vethc23cdc5) entered forwarding state
Jun 8 10:42:14 Tower avahi-daemon[2464]: Joining mDNS multicast group on interface vethc23cdc5.IPv6 with address fe80::78a5:6ff:fe07:4839.
Jun 8 10:42:14 Tower avahi-daemon[2464]: New relevant interface vethc23cdc5.IPv6 for mDNS.
Jun 8 10:42:14 Tower avahi-daemon[2464]: Registering new address record for fe80::78a5:6ff:fe07:4839 on vethc23cdc5.*.

It looks like something's happening to the network link, but I'm not fluent in Linux. I do have Unraid set up with a static IP address reserved through the router's DHCP, though oddly (to me) the server does not show up in the router's list of active DHCP devices (I DO have a working internet and network connection). Any ideas on what might be causing these intermittent drops? Thanks in advance!
Squid Posted June 8, 2019

22 minutes ago, sactoking said:
While running the Mover I was reviewing the logs and noticed the following:

That looks like the Docker containers starting up / restarting; those are normal messages.
John_M Posted June 8, 2019

I agree with Squid - those are virtual network ports. If you do have a problem, that syslog snippet doesn't show it. Post your full diagnostics zip if you think there's some issue. The Mover doesn't use the network, so it would be unaffected by a network problem anyway.
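John_M's point can be checked mechanically: every line in the snippet that mentions docker0 or a veth device is bridge/container churn, not the physical NIC. A minimal sketch using two lines copied from the log above (on the server you would grep /var/log/syslog directly instead of a here-string):

```shell
# Two lines taken from the syslog snippet above, standing in for the real log.
log='Jun 8 10:41:44 Tower kernel: docker0: port 1(veth7bf85ae) entered disabled state
Jun 8 10:42:13 Tower kernel: eth0: renamed from veth4e1344b'

# Count container-related events (docker0 bridge state changes, veth pairs)
printf '%s\n' "$log" | grep -cE 'docker0|veth'    # → 2

# Whatever is left over would be a real physical-NIC event
printf '%s\n' "$log" | grep -vE 'docker0|veth|avahi' || echo 'no physical NIC events'
```

Here both sample lines match the container filter (the "renamed from veth..." line is Docker moving an interface into a container's namespace), which is consistent with these messages being harmless.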
sactoking Posted June 8, 2019 (Author)

OK, good to know that what I was seeing is just Docker info. I've attached my diagnostics ZIP as requested, hoping there might be something useful in there.

tower-diagnostics-20190608-1917.zip
Squid Posted June 8, 2019

I'd go to Settings -> Schedules -> Mover -> Disable Mover Logging. No reason for it to be enabled. Then reboot (to clear out the logs), wait for your problem to happen, and then post a fresh set of diagnostics.
sactoking Posted June 9, 2019 (Author)

Here is an updated diagnostics ZIP. I stopped the array, rebooted, started the array, moved a series of files totaling ~16 GB, then ran the diagnostics. During the file transfer the speed dropped to 0 several times. Checking the dashboard, I never saw CPU usage above 8% or so, or RAM usage above 3% or so, and the cache drive had 40+ GB of space available, so I'm pretty sure it wasn't a resource issue from those perspectives.

tower-diagnostics-20190609-0312.zip
sactoking Posted June 16, 2019 (Author)

It may be worth pointing out that when this occurs I lose terminal responsiveness completely and one of the CPU threads stays pegged at 100%, so maybe it's not a network issue. Are there any known threading bugs or anything else that could be causing this?
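A pegged thread can usually be identified while the stall is happening, if a console is still responsive. A minimal sketch (the assumption is simply that the offender is the top CPU consumer; the ps options are standard procps):

```shell
# List every thread on the system, busiest first. The pegged thread should
# appear at the top near 100 %CPU; COMMAND shows whether it's shfs, smbd,
# a kworker, or something else.
ps -eLo pid,tid,pcpu,comm --sort=-pcpu | head -n 10
```

Knowing whether the hot thread is a userspace process or a kernel worker (kworker/*) is often the quickest way to tell a filesystem/FUSE problem apart from a driver problem.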
andreidelait Posted March 22, 2020

On 6/16/2019 at 5:52 PM, sactoking said:
It may be worth pointing out that when this occurs I lose terminal responsiveness completely and one of the CPU threads stays pegged at 100%, so maybe it's not a network issue. Are there any known threading bugs or anything that could be causing this?

Did you find a solution for this? I have two servers that are doing the same thing.
sactoking Posted March 23, 2020 (Author)

Nope, never got this resolved. It still occurs when I move files.
VladL Posted December 13, 2021 (edited)

// edited: needed NIC offloading off.

Hello, this is my first post, as far as I remember, but I thought this might help others, so here it goes. I had issues with VLANs enabled on a 10 Gbps adapter: after the first reboot, copying files over Samba shares on any IP of any VLAN interface gave me the same "entered disabled state" kernel errors in the logs. After a ton of searches and failed diagnostics, I remembered flow control and NIC offloading, so I installed the Tips and Tweaks plugin, set Disable NIC Flow Control to Yes, Disable NIC Offload to Yes, and the Rx and Tx buffers to 2048, and it now works perfectly. Hope this helps you too.

Edited December 13, 2021 by VladL
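For anyone who prefers the command line, those three plugin settings map onto plain ethtool calls. A sketch, with assumptions spelled out: eth0 and the DRY_RUN guard are illustrative, by default the script only prints the commands, and you should check your NIC's supported ring maximums with `ethtool -g eth0` before applying anything:

```shell
#!/bin/sh
# Illustrative sketch of the three Tips and Tweaks settings as ethtool calls.
# IFACE and DRY_RUN are assumptions for this example: with DRY_RUN=echo (the
# default) the commands are only printed; set DRY_RUN= and run as root to apply.
IFACE="${IFACE:-eth0}"
DRY_RUN="${DRY_RUN:-echo}"

$DRY_RUN ethtool -A "$IFACE" rx off tx off                    # flow control off
$DRY_RUN ethtool -K "$IFACE" tso off gso off gro off lro off  # hardware offloads off
$DRY_RUN ethtool -G "$IFACE" rx 2048 tx 2048                  # Rx/Tx ring buffers
```

Note these settings don't survive a reboot on their own; the plugin reapplies them at boot, which is one reason to prefer it over a one-off script.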
burkasaurusrex Posted November 7, 2022

Just wanted to thank you, @VladL, for the response; I'd been trying to find an answer to this for quite some time. After some research I found that I was having a lot of Rx packet drops, which led me to increase the NIC Rx buffer. Just wanted to note that bigger isn't always better: the ideal setting seems to be the smallest buffer at which you don't have packets dropping.
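The Rx drop counters mentioned above are visible in `ethtool -S <iface>` and `ip -s link`. A sketch that sums them; the here-string values are made up for illustration (real counter names vary by driver), and on a live system you would pipe `ethtool -S eth0` into the awk instead:

```shell
# Sum Rx drop/miss counters from `ethtool -S`-style output. The sample values
# below are hypothetical; substitute the real command on your server.
sample='rx_packets: 1500000
rx_dropped: 42
rx_missed_errors: 7'

printf '%s\n' "$sample" |
  awk -F': *' '/dropped|missed/ { total += $2 } END { print "rx drops:", total+0 }'
# → rx drops: 49
```

If this number keeps climbing between two runs taken during a transfer, the ring buffer is too small (or the CPU can't drain it fast enough); if it stays flat at the smaller buffer size, that size is enough.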