
10gbit network


Recommended Posts

Posted (edited)
58 minutes ago, Vr2Io said:

You may try testing with parallel streams using the "-P" option. With two streams it reaches 9.37 Gbps on my X520. All my Unraid boxes use the X520, plus a ConnectX-3 with Windows, and so far so good.

I like the OpenSpeedTest docker more than iperf3, but both are available depending on the need.

 

Unraid network shares not reaching 10Gbps is a different issue, and you may not be able to fix it by changing to a different type of NIC.

 

 

Hmm, interesting. I tried running iperf3 -P 10 and it shows an aggregate of ~9 Gbit/s. That means it is working somehow.
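
For reference, the test was roughly the following (192.168.1.10 is just a placeholder for the Unraid box's IP):

# On Unraid (server side):
iperf3 -s

# On the Windows client: 10 parallel streams for 30 seconds
iperf3 -c 192.168.1.10 -P 10 -t 30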

 

But how do I reach good file-copying speeds? I have one NVMe Samsung 990 Pro SSD with an NTFS filesystem in Unassigned Devices, which I shared to test copy speed, and I also have some btrfs cache drives behind shares such as the isos folder. Copying files to the NTFS drive runs at ~150 megabytes/second, and copying to a cached share at ~175 megabytes/second. Why is it that slow?

 

2024-06-19 12_52_34-cmd (Admin).jpg

2024-06-19 12_55_01-Ariloum^ - Total Commander (x64) 10.52 - NOT REGISTERED.jpg

Edited by Ariloum
Link to comment

To be honest, this issue also happens on different Unraid builds. It seems some people never face it, but I keep finding other users suffering the same thing (a ceiling at ~300 MB/s). My workaround was to equip the build with much, much more memory to act as cache, which smooths out the slow network transfers, but that is not a real solution, and you need to understand that.

 

From my experience, striped storage can give much better performance than a single device (for network file transfers), even when that single device is high end. I never use a cache pool / the mover because that is not the best fit for my use case; I generally use a striped pool.

 

With plenty of memory for caching plus striped storage, I can transfer files at an acceptable speed, with no issues even when transferring 100TB of data.
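
In case it helps, the RAM write-back behaviour I rely on is governed by the Linux vm.dirty_* sysctls (I believe the Tips and Tweaks plugin exposes the same settings). A minimal sketch for inspecting or adjusting them; the numbers below are only an illustration, not a recommendation:

# Show the current write-back thresholds (percent of RAM):
sysctl vm.dirty_background_ratio vm.dirty_ratio

# Illustrative example: start background flushing at 10% of RAM and only
# throttle writers once dirty pages reach 40%, so more RAM can absorb a burst:
sysctl -w vm.dirty_background_ratio=10
sysctl -w vm.dirty_ratio=40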

Edited by Vr2Io
Link to comment
46 minutes ago, Ariloum said:

Hmm, interesting. I tried running iperf3 -P 10 and it shows an aggregate of ~9 Gbit/s. That means it is working somehow.

This means that you should get good speeds for simultaneous transfers, but a single file transfer is usually limited to about the same speed as a single-stream iperf test. It's not always clear why some users see better speeds with similar NICs; it may be cable or transceiver related, if applicable.
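
If you want to rule out the cable/transceiver side, something like this on the Unraid console is a reasonable first look (eth0 is a placeholder for your 10GbE interface, and -m only works if the NIC and module expose their EEPROM):

# Negotiated speed, duplex and link state:
ethtool eth0

# SFP+ module / transceiver details (vendor, type, and on some modules
# the temperature and signal readings):
ethtool -m eth0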

Link to comment
Posted (edited)
2 hours ago, JorgeB said:

This means that you should get good speeds for simultaneous transfers, but a single file transfer is usually limited to about the same speed as a single-stream iperf test. It's not always clear why some users see better speeds with similar NICs; it may be cable or transceiver related, if applicable.

 

If so, then why is my copy speed 2-3 times lower when a single iperf stream shows ~4 Gbit/s? 4 Gbit/s is roughly 500 MB/s, but I'm getting 170 MB/s. To me it seems that Unraid has some serious network issues. Does it also have a single-threaded packet queue?

 

Edited by Ariloum
Link to comment
3 hours ago, Vr2Io said:

To be honest, this issue also happens on different Unraid builds. It seems some people never face it, but I keep finding other users suffering the same thing (a ceiling at ~300 MB/s). My workaround was to equip the build with much, much more memory to act as cache, which smooths out the slow network transfers, but that is not a real solution, and you need to understand that.

 

Is there any way to find the bottleneck? I'm not a developer, so my ability to analyze this is very limited.
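
The only isolation I can think of trying is to measure the raw pieces separately, something like the commands below (the IP and mount path are placeholders for my setup). If both numbers come out well above 170 MB/s, the bottleneck should be in the file-sharing layer rather than the network or the disk:

# Raw network, single stream, client -> Unraid:
iperf3 -c 192.168.1.10

# Local write speed on the server, bypassing the network entirely;
# conv=fdatasync makes dd wait for the data to reach the disk before
# reporting a speed:
dd if=/dev/zero of=/mnt/disks/nvme_test/testfile bs=1M count=8192 conv=fdatasync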

Link to comment
12 minutes ago, Ariloum said:

If so, then why is my copy speed 2-3 times lower when a single iperf stream shows ~4 Gbit/s?

It's never exact. It's a good indication of the currently possible speed, but sometimes I see users who get transfer speeds faster than the single-stream iperf results, other times slower.

 

But ideally you want a single-stream iperf test to be close to line speed. I usually get around 9 Gb/s for my 10GbE NICs, and around 1 GB/s for an actual transfer; on the other hand, I don't get close to 25 Gb/s for my 25GbE NICs, but still get close to line speed for an actual transfer.
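
For anyone repeating that comparison, the single-stream baseline in both directions is simply (placeholder IP for the server; -R reverses the direction so the server sends):

# Client -> server, single stream:
iperf3 -c 192.168.1.10 -t 30

# Server -> client, single stream:
iperf3 -c 192.168.1.10 -t 30 -R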

Link to comment
Posted (edited)
42 minutes ago, JorgeB said:

It's never exact. It's a good indication of the currently possible speed, but sometimes I see users who get transfer speeds faster than the single-stream iperf results, other times slower.

 

But ideally you want a single-stream iperf test to be close to line speed. I usually get around 9 Gb/s for my 10GbE NICs, and around 1 GB/s for an actual transfer; on the other hand, I don't get close to 25 Gb/s for my 25GbE NICs, but still get close to line speed for an actual transfer.

 

I'm curious which 10Gbit adapters you are using with Unraid, and which transceivers?

It looks like I have close to zero chance of getting full driver support for my network adapters from the Unraid team, so I will have to replace my hardware. I know Intel has three or more revisions of their X520 adapters (82599, 82599ES, 82599EB), and Unraid probably doesn't support some of them.
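
Before replacing anything, I suppose the first check is what Unraid actually detects; a quick sketch (eth0 here is just whichever interface name, if any, got created for the card):

# Exact 82599 variant and PCI ID of the card:
lspci -nn | grep -i ethernet

# Which kernel driver, if any, is bound to it (the X520 family is
# normally handled by ixgbe):
lspci -k | grep -A3 -i ethernet
ethtool -i eth0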

 

Edited by Ariloum
Link to comment
31 minutes ago, Ariloum said:

I'm curious which 10Gbit adapters you are using with Unraid, and which transceivers?

I'm using (used) Mellanox X3 cards with multimode fiber modules (RJ45 modules get too hot; the fanless switch does not like them and often shuts down a port because of overheating). A few ports use DAC. No problems, full speed most of the time (of course, if somebody else is doing something, the max speed drops to 50% for that period, so you only get constant speeds if you are alone).

 

I had those Intel cards before, a lot of them, and they only caused trouble for me (lost links, dropped connections, stalled transfers); maybe it was because they were RJ45 only. I never tested the SFP+ versions with fiber or DAC. So they all ended up on eBay. I lost a lot of money, but it's a lifesaver when you wake up and see that all the backup jobs ran properly overnight 🙂

 

Because the X3 is rather old already, I guess the next computer will get an X5 or even an X7.

The problem with these cards is that they are always a few PCIe generations behind. So they need 8 lanes (very old PCIe 2.0) or 4 lanes (PCIe 3.0) to run at full speed. More modern chipsets would already allow the same speed with a single lane, but nobody builds cards with a modern bus interface 😞
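
The rough per-direction numbers behind that, before protocol overhead (a dual-port 10GbE card of course needs twice the 10GbE figure):

# 10GbE         ~ 1.25 GB/s
# PCIe 2.0 x8   ~ 8 x 0.5 GB/s   = 4.0 GB/s
# PCIe 3.0 x4   ~ 4 x 0.985 GB/s ~ 3.9 GB/s
# PCIe 4.0 x1   ~ 1.97 GB/s  (already enough for a single 10GbE port)
# PCIe 5.0 x1   ~ 3.94 GB/s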

 

Edited by MAM59
Link to comment
Posted (edited)

Well, it looks like these cards have many revisions too, like Intel?

How do I know which revisions are supported by Unraid?

For example: Mellanox Oracle MHQH29B-XSR ConnectX-2

 

There are lots of them:

Mellanox CX-3 EN PRO 10/20 Gb MCX312B-XCCT
Mellanox CX-4 Lx 10/20Gb 25/50 Gb Eth MCX4121A-ACAT
Mellanox ConnectX-3 VPI MCX354A-FCBT
Mellanox ConnectX-3 Pro VPI MCX354A-FCCT
Mellanox ConnectX-3 Pro EN MCX314A-BCCT
Mellanox CX-4 Lx 10/20/40/50Gb Eth MCX4131A-GCAT
Mellanox CX-4 EN 10/20/40/50Gb 56Gb Eth MCX413A-GCAT / MCX413A-BCAT
Mellanox ConnectX-4 VPI MCX455A-ECAT
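
I suppose once a card arrives the check will be the same as for the Intel one; my understanding is that ConnectX-3 generation cards are handled by the mlx4_core driver and ConnectX-4 and newer by mlx5_core (worth double-checking for the exact model), so something like:

# Is a Mellanox driver bound to the card?
lspci -k | grep -A3 -i mellanox

# Are the Mellanox modules present in the Unraid kernel at all?
modinfo mlx4_core | head -n 3
modinfo mlx5_core | head -n 3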

 

Edited by Ariloum
Link to comment
41 minutes ago, MAM59 said:

I had those Intel cards before, a lot of them, and they only caused trouble for me (lost links, dropped connections, stalled transfers); maybe it was because they were RJ45 only. I never tested the SFP+ versions with fiber or DAC. So they all ended up on eBay. I lost a lot of money, but it's a lifesaver when you wake up and see that all the backup jobs ran properly overnight 🙂

 

 

That is definitely just your experience, and related to the overheating of RJ45 transceivers. I have a data-center friend who uses tons of these Intel X520s in many setups (mostly in server racks), and these cards just work everywhere (except Unraid) and are well supported even by Synology.

Link to comment
On 6/19/2024 at 5:32 PM, Ariloum said:

That is definitely just your experience, and related to the overheating of RJ45 transceivers

Yes, that's why I didn't mention it before. But now you have asked for it...

I don't say these cards are bad in general.

But they did not work for me, neither on Unraid nor on Windows.

And after half a year of testing, rearranging, changing cables and so on, I gave up and replaced them (and all the cables in the walls are now fiber too). Since then, no problems.

Link to comment
Posted (edited)
11 minutes ago, MAM59 said:

Yes, that's why I didn't mention it before. But now you have asked for it...

I don't say these cards are bad in general.

But they did not work for me, neither on Unraid nor on Windows.

And after half a year of testing, rearranging, changing cables and so on, I gave up and replaced them (and all the cables in the walls are now fiber too). Since then, no problems.

 

Actually, I've heard that for the Intel X520 and many other SFP adapters, Windows is the most capricious system.

Usually all these adapters work well on *nix systems.

But for now, in my eyes, the title of most capricious system is being taken over by Unraid.

 

I've ordered a cheap Mellanox adapter.

I guess I'm going to brute-force my way through hardware adapters until Unraid finally deigns to work.

If it doesn't work within three adapters, I'm going to switch to Debian with SnapRAID (the same parity-disk concept).

 

Edited by Ariloum
Link to comment
3 minutes ago, Ariloum said:

I've heard that for the Intel X520 and many other SFP adapters

Dunno, back then I had no SFP(+) stuff; I wanted to keep the RJ45-based cards and switches (cables in the walls, 3 houses...).

The Intel cards were fixed RJ45, so there was no way to test anything else.

But the "historical grown cabling system" turned out to be very unstable with 10G (I dont blame it on the cards alone). So I replaced cards, switches AND wall cables. Now everthing is SFP and the cables in the wall are fiber.

As you might expect, this was an expensive change.

So I'm glad everything runs stable now, and I won't look back.

 

Link to comment
1 minute ago, MAM59 said:

Dunno, back then I had no SFP(+) stuff; I wanted to keep the RJ45-based cards and switches (cables in the walls, 3 houses...).

 

Actually, I have two RJ45 transceivers and used them for a few days at lower speeds in two different setups:

1. A 2.5Gbit Ethernet device plugged into an RJ45 SFP+ port on the 10Gbit switch

2. The X520 connected to the SFP+ switch

 

I've had no issues, but I haven't tried testing it under heavy load for a long time.

 

Link to comment
2 minutes ago, Ariloum said:

I've had no issues, but I haven't tried testing it under heavy load for a long time.

Good for you. But here it worked OK for some weeks, even months. Then rare link losses occurred (with instant reconnects); then rare became more frequent, and finally frequent became almost constant.

Changing something "healed" it for a short period, then the effects came back.

DROVE ME CRAZY!

(of course NO error anywhere!)

Also, the transmission speed was more or less random toward the end.

Maybe you are lucky, or maybe your runs are short enough that RJ45 can still work properly (I have many links with more than 20m of cable in between).
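
For what it's worth, two places that might still show evidence of drops like that on the Unraid side (eth0 again being a placeholder for the affected interface):

# The kernel log records every link down/up event:
dmesg | grep -i -E 'link (is )?(up|down)'

# Per-NIC counters; CRC or symbol errors usually point at cabling or a
# marginal transceiver rather than the card itself:
ethtool -S eth0 | grep -i -E 'err|crc|drop'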

 

So, since I never found out what the real culprit was, I cannot blame it on the cards. But I won't give Intel another chance here anymore.

Link to comment
