kkhan Posted April 3, 2023

I am a newbie to UNRAID, in the process of moving my 60TB data pool from flexRAID to UNRAID, and would appreciate any help with the problem below. The UNRAID server is built, configured and ready to receive the data from my backup pool on a Windows machine. To speed things up I purchased a pair of TP-Link TX201 NICs (based on the Realtek RTL8125 chipset) and created a peer-to-peer connection. Everything appeared to work out of the box: UNRAID recognised the card instantly, and so did Windows. In UNRAID I made sure the 2.5Gb/s NIC was not bonded to the existing 2x 1Gb/s NICs.

Then the trouble started. During testing I noticed that transfer speeds across the new 2.5Gb/s connection were 20% or so worse than over the existing 1Gb/s connection. I did some troubleshooting:

- Running ethtool on UNRAID confirmed that the link was at 2.5Gb/s.
- Using the Get-NetAdapter cmdlet in Windows I confirmed the link was at 2.5Gb/s there too, but to be sure I turned auto-negotiation off and forced it to 2.5Gb/s.
- Running iperf3 (10 parallel streams) I was only able to get 670Mbit/s of effective bandwidth on the 2.5Gb/s link. Running iperf3 over the existing 1Gb/s connection gave 954Mbit/s!

Any help would be appreciated. Are these cards not compatible? Are there newer drivers? A bug in UNRAID? Have I missed something?
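For reference, this is roughly how the checks above were run — a minimal sketch, with eth1 and 10.0.0.2 as placeholders for the 2.5Gb/s interface on the UNRAID box and its address (the Windows side was checked separately with Get-NetAdapter):

```
# On UNRAID: confirm the negotiated link speed of the 2.5Gb/s port
ethtool eth1 | grep -i speed

# On UNRAID: start an iperf3 server
iperf3 -s

# From the Windows client: 10 parallel streams against the UNRAID box
iperf3 -c 10.0.0.2 -P 10
```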
JorgeB Posted April 3, 2023

Please post the diagnostics.
MAM59 Posted April 4, 2023

Usually these problems are seen when "flow control" is turned off on either side. You need to know that 2.5 (and 5) speed does not really exist; it is just "10 with pauses". This is done so as not to overload old cables that cannot handle the full speed. Some marketing guy came up with the "2.5 speed" and now it is hitting the mass market, making things worse and worse.

Look into the configuration of your cards' drivers and see if you can turn flow control on. As I remember from looking at this some time ago, there are no flow control settings for this cheap chipset. If you are lucky, they have added them in the meantime. If not, you are out of luck: the cards will never work correctly back to back. There is a chance that the cards will honour a remote request, but for that you need a switch between them that forces flow control on.
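If you want to see what the UNRAID side reports before digging into the Windows driver settings, ethtool exposes the pause (flow control) parameters — a minimal sketch, with eth1 as a placeholder for the 2.5Gb/s interface; the second command may simply be refused if the driver does not support it:

```
# Show the current pause (flow control) settings reported by the driver
ethtool -a eth1

# Attempt to enable RX/TX pause frames (fails if the driver does not expose it)
ethtool -A eth1 rx on tx on
```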
ConnerVT Posted April 4, 2023

I spent a few weeks trying to get the 2.5GbE Realtek NIC to play nice in my network. I recently upgraded my firewall to one of those Chinese 4x Intel NIC N5105-based PCs, added a 2.5GbE switch, and put new NICs (Realtek) in my server and daily-driver desktop PCs. It fought me like hell to get everyone to play nice with one another. I've lost count of how many iperf3 runs I've done between the 3 systems with 2.5GbE installed. In some cases I had speeds as low as ~400Mb/sec, and nothing I tried seemed to really help. I eventually ended up getting an Intel NIC for the server, which got me to acceptable transfer speeds.

Ultimately, the things that finally got my speeds into the 2.3Gb/s range in and out of my server were (besides switching to Intel) disabling NIC offload and raising my TX/RX buffers to 4096 in Tips and Tweaks. On the firewall and desktop I did the same, and also disabled as many of the "green" power-saving settings as I could. I still have the Realtek in my Windows PC (it is a USB dongle, as the ITX board has no PCIe slot available) and in the switch. It is the weakest link at this point, with TX being about 200Mb/sec slower (at 1.9-2.1Gb/sec). Given the fight it took to get there, I can accept that.
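For anyone wanting to try the same tweaks from the command line instead of the Tips and Tweaks plugin, something along these lines should work — eth1 is a placeholder, and the exact offload names and the 4096 maximum depend on the driver and hardware:

```
# Check current ring buffer sizes and the hardware maximums
ethtool -g eth1

# Raise RX/TX ring buffers to 4096 (only if the reported maximum allows it)
ethtool -G eth1 rx 4096 tx 4096

# List which offloads are currently enabled
ethtool -k eth1

# Disable the common segmentation/receive offloads
ethtool -K eth1 tso off gso off gro off
```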
kkhan Posted April 4, 2023

Diagnostics as requested. Thanks for the feedback so far; it does not sound good, but at least I know I did not screw up (other than buying the wrong adapters). Feedback in the forum on 10Gb cards based on the Aquantia chipset is positive, and the general sentiment appears to be to bypass the Realtek-based 2.5Gb cards and just go to 10Gb. It is overkill for my needs, but if it works then it will be worth it.

lily-diagnostics-20230404-1711.zip
JorgeB Posted April 4, 2023

Nothing obvious in the diags. Is it slow in both directions or just one? Some users see low speed in one direction only, and for those disabling bridging sometimes helps. Curiously, I have the exact same NIC in one of my servers and have no issues getting line speed in both directions.
MAM59 Posted April 4, 2023

1 hour ago, JorgeB said: Curiously I have the exact same NIC in one of my servers and have no issues getting line speed in both directions.

The NIC is not the problem as long as it is connected to a "good" partner, either a switch or a card like a Mellanox X3 or an Intel 10GbE. It needs a "master" that tells it when to send data and when to pause. Without proper flow control the cards simply miss their sending windows, wait for timeouts and resend until it works, which slows down the transmissions. If you just connect them back to back, every direction crawls. If there is no flow control but one partner is capable of 10GbE, only one direction is affected (because the 10GbE port can receive in any possible window).

The problem can be simplified and visualised like this: 2.5GbE is 10GbE with one data slot and three pauses, like DPPP. But without flow control, nobody says WHERE the data is sent — maybe DPPP, PDPP, PPDP or PPPD. If the receiver is 2.5GbE too, it is likewise only able to receive in a specific window (without flow control), and if those two windows do not match, the packet is simply lost and never received. Flow control is just a way to say STOP or CONTINUE to the other side, thereby regulating when to send and when to pause.

The Realtek problem so far has been that those cards are not able to act as the master; they only listen for others to manage the flow control. As I already said, the "invention" of 2.5 and 5 was a bad mistake...
kkhan Posted April 4, 2023

I have managed to improve matters a bit by upgrading to the latest Windows drivers from the Realtek website. I can now get a bandwidth of 1.62Gbit/s after optimising some NIC parameters (the new driver provides far more control than the Windows default driver, which is dated 2015). This is still not ideal: transfers to the new server from my backup Windows server still saturate the resulting 1.62Gbit/s connection. There is a newer Linux driver on the Realtek website, but my understanding is that updating drivers outside the UNRAID release cycle is not recommended or supported.
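To see which driver the UNRAID box is actually loading for the card (as opposed to the one on the Realtek website), the kernel can be queried directly — a sketch, with eth1 as a placeholder; on recent kernels the RTL8125 is usually handled by the in-tree r8169 module, though some setups use a separate r8125 plugin instead:

```
# Which kernel driver and version is bound to the interface
ethtool -i eth1

# Details of the in-tree Realtek module that covers the RTL8125 on recent kernels
modinfo r8169
```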
kkhan Posted April 4, 2023

I have slow speed in both directions.
pras1011 Posted September 29, 2023

Did you resolve this?