stewartwb Posted January 31, 2014

I noticed a discussion on dropped packets under the Announcements \ V5.0x discussion thread, which prompted me to check my server. When I telnet to my unRAID server and run ifconfig -a, here is what I get:

bond0     Link encap:Ethernet  HWaddr c8:60:00:e4:2f:a6
          inet addr:192.168.67.68  Bcast:192.168.67.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:2258965825 errors:0 dropped:58957 overruns:0 frame:0
          TX packets:2277441028 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:884116776324 (823.3 GiB)  TX bytes:1365780922237 (1.2 TiB)

eth0      Link encap:Ethernet  HWaddr c8:60:00:e4:2f:a6
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1134372891 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1139282434 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2010059286 (1.8 GiB)  TX bytes:2577379786 (2.4 GiB)
          Interrupt:47 Base address:0xe000

eth1      Link encap:Ethernet  HWaddr c8:60:00:e4:2f:a6
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:1124592934 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1138158594 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:882106717038 (821.5 GiB)  TX bytes:1363203542451 (1.2 TiB)
          Interrupt:16 Memory:fe9c0000-fe9e0000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:3138 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3138 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:257874 (251.8 KiB)  TX bytes:257874 (251.8 KiB)

It's been 99.5 days since the server was last rebooted, running unRAID 5.0 (initial release).
I'm using the motherboard NIC (Realtek, r8168 driver) plus an Intel PCIe x1 NIC (e1000e driver), bonded together and set to balance-rr (mode 0). I see about 59,000 dropped receive packets on the bonded interface, but no dropped packets on the two physical NICs. The drop rate is quite low, less than 0.003% of received packets. I'm not well versed in bonding, so my configuration is a best guess. Would someone with more knowledge please review this information and let me know whether things look decent, or recommend the best way to utilize my two server NICs? I've attached my syslog in case it's helpful (though I may need to reboot and send a fresh one, since the initialization messages have long since been truncated). Thanks - I really appreciate Tom, the unRAID community, and this excellent server software! -- stewartwb

syslog.zip
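The drop rate quoted above can be verified directly from the bond0 RX counters in the ifconfig output (a quick sketch; the numbers are copied from that output, and awk is assumed to be available on the server):

```shell
# Percentage of received packets dropped on bond0:
# dropped / RX packets * 100, using the counters shown above
awk 'BEGIN { printf "dropped: %.4f%%\n", 58957 / 2258965825 * 100 }'
# prints: dropped: 0.0026%
```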
dirtysanchez Posted January 31, 2014

You can give this thread a read if you haven't already seen it: http://lime-technology.com/forum/index.php?topic=16887.msg153977

TL;DR - Bonding is only beneficial in certain very specific instances. That said, virtually all Ethernet interfaces experience some dropped packets. A drop rate under 0.003% is negligible and I wouldn't even worry about it. Your server is running just fine.
dgaschk Posted January 31, 2014

See here: http://lime-technology.com/forum/index.php?topic=31472.msg287564#msg287564
stewartwb Posted February 1, 2014 (Author)

Thanks for pointing me to those threads; I hadn't found them. It looks like I'm not getting much performance benefit from bonding in balance-rr mode. From what I read, round-robin balancing also causes a lot of out-of-order packet delivery, which increases overhead and can hurt performance. I'm going to upgrade to the latest 5.0 release and switch to failover (active-backup) mode when I reboot. I'll report back with more statistics after that's been active for a while. -- stewartwb
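For anyone following along, the current bond mode can be inspected and changed through the standard Linux bonding interfaces (a hedged sketch using generic kernel paths; unRAID normally sets this in its boot-time network config, so these commands are an illustration of the mechanism, not the unRAID-specific procedure):

```shell
# Show the current bonding mode (e.g. "load balancing (round-robin)")
grep "Bonding Mode" /proc/net/bonding/bond0

# Switching modes requires the bond to be taken down first (generic sysfs
# sketch; on unRAID the persistent setting lives in the boot config instead)
ip link set bond0 down
echo active-backup > /sys/class/net/bond0/bonding/mode
ip link set bond0 up
```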
Archived
This topic is now archived and is closed to further replies.