Dual Port NIC Card Help Please


Badboy


Hi Everybody.

I wanted to save one of my PCIe slots for a SATA card, so I bought a 2-port 10G network card for transferring files: https://www.amazon.ca/10Gtek-X540-T2-Converged-Network-Adapter/dp/B06XH4HV96

 

I think it works, but I don't think I'm configuring it right in Unraid. When I tried it, my Dockers with custom IPs on br0 were unavailable.

 

I had the following:

Enable bonding: No

Enable bridging: Yes

 

Bridging members of br0: I selected eth0, eth1, eth2

 

Under Interface Rules, my main eth0 was first, followed by the other 2; I didn't realize each port on the card shows up as a separate interface.

 

I had 2 single-port cards linking my desktop and unraid before. Maybe that caused some issues. Any help would be appreciated. This card mentions running in an x8 slot; can it run in an x4 slot? It really doesn't matter, as I'm going to use the x4 for my SATA card. When I link the 2 systems together with the cards, does it hurt to leave the 1 gig internet plugged in on the desktop? I've always unplugged it. Thank you!

 

ASUSTeK COMPUTER INC. - ROG STRIX X470-F GAMING

AMD Ryzen 9 3900X 12-Core @ 4 GHz

 

 

Link to comment
7 hours ago, Badboy said:

Hi Everybody.

I wanted to save one of my PCIe slots for a SATA card, so I bought a 2-port 10G network card for transferring files: https://www.amazon.ca/10Gtek-X540-T2-Converged-Network-Adapter/dp/B06XH4HV96

 

I think it works, but I don't think I'm configuring it right in Unraid. When I tried it, my Dockers with custom IPs on br0 were unavailable.

 

I had the following:

Enable bonding: No

Enable bridging: Yes

 

Bridging members of br0: I selected eth0, eth1, eth2

 

Under Interface Rules, my main eth0 was first, followed by the other 2; I didn't realize each port on the card shows up as a separate interface.

The dual-port card is just a denser physical package, but still two separate ports, like two separate single-port cards.

It will "only" save you from utilizing a second physical PCIe slot.

 

A Bridge "only" creates a Layer-2 Switch out of the individual ETH ports.

Allocating an IP to the bridge will not aggregate the bandwidth of all ports, which I think is what you are trying to achieve.

 

7 hours ago, Badboy said:

 

I had 2 single-port cards linking my desktop and unraid before. Maybe that caused some issues.

...see my comment above. You are mixing Layer-2 and Layer-3 concepts.

In order to aggregate the bandwidth of all ports, enable bonding as well and allocate the unRaid IP to the Bond0 Interface (still use br0 for Dockers and their IPs...)

However, the other side of the connection (i.e. your physical switch) also needs to support "bonding", often referred to as LACP...make sure you are using the same bonding method on either side of the link.

Also be aware that this will not necessarily increase bandwidth for a single service connection (like SMB, which is single-threaded).

This concept is for sharing the bandwidth across the ports for concurrent connections (like many users accessing the NAS at the same time).
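
To make that concrete, here is a minimal sketch of what a bonding + bridging setup could look like in /boot/config/network.cfg (the key names are from memory and the addresses are made up, so treat it purely as an illustration - setting it up via Settings -> Network Settings in the GUI is the safer route):

# hypothetical /boot/config/network.cfg sketch - verify key names against your own file
BONDING="yes"
BONDNAME="bond0"
BONDNICS="eth0,eth1,eth2"   # the ports that form bond0
BONDING_MODE="4"            # 4 = 802.3ad/LACP; the switch side must be configured to match
BRIDGING="yes"
BRNAME="br0"
BRNICS="bond0"              # br0 sits on top of bond0, so Docker custom IPs keep working
USE_DHCP="no"
IPADDR="192.168.1.10"       # example: the unRaid IP, allocated to the bond/bridge
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"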

 

7 hours ago, Badboy said:

Any help would be appreciated. This card mentions running in an x8 slot; can it run in an x4 slot?

Technically it will. But physically only if the x4 slot is open at the back end, so the x8 connector can slot in and extend beyond the x4 length.

An x4 PCIe v3 slot can provide up to 3.9GByte/sec (see: https://en.wikipedia.org/wiki/PCI_Express#History_and_revisions), so still enough for your dual-port card.

Edit2: OK, this card is an Intel clone and is only advertised as PCIe v2...so 2GByte/sec it is...tight for your card, but maybe good enough.
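
For the bandwidth math behind those numbers (standard PCIe per-lane rates; the worst case for the dual card is both ports saturated in one direction):

PCIe v2 x4:  5 GT/s x 4 lanes x 80% (8b/10b encoding)       = 2.0 GByte/s
PCIe v3 x4:  8 GT/s x 4 lanes x ~98.5% (128b/130b encoding) = ~3.94 GByte/s
Dual 10G:    2 x 10 Gbit/s / 8                              = 2.5 GByte/s

So a v2 x4 link comfortably feeds one saturated 10G port, but falls short of both at full tilt; a v3 x4 link covers both.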

 

7 hours ago, Badboy said:

When I link the 2 systems together with the cards,

So you want a direct link between a PC and NAS...no switch involved?

See my comments regarding bonding above.

You'll have to configure it in your PC as well.

TBH, for a direct link, using faster single-port cards would be an easier approach, like SFP28 or QSFP+.

Edit: but you'll need very capable hardware to sustain a 10+Gbps transfer from/to unraid....I believe even an x4-PCIe-v3 NVMe is not good enough....the margin is a bit tight for 20G, and even with a PCIe-v4 NVMe, everything will slow down once its DRAM/SLC cache is full. Even achieving 10Gbps is not easy....a standard SATA SSD will only deliver ~500MByte/s (~4Gbps).
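
A good sanity check is to measure the raw link speed separately from disk speed with iperf3 (assuming you can get it onto both machines - on unRaid e.g. via a Docker container; the address below is just an example):

iperf3 -s                   # on the unRaid box: run as server
iperf3 -c 10.10.10.1 -P 4   # on the desktop: client with 4 parallel streams to the server IP

If iperf3 reports ~9.4 Gbit/s but your file copies are slow, the bottleneck is the disks or SMB, not the NICs or cabling.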

 

 

7 hours ago, Badboy said:

does it hurt to leave the 1 gig internet plugged in on the desktop? 

Depends on your setup and IP numbering.

 

I believe your problem is not hardware-related with the NIC card(s), but rather related to a faulty/incomplete IP-network concept and setup.

 

One typical scenario is to use the 1G NICs attached to the central Switch, along with your internet Router and WiFi Access Points.

Then use a dedicated direct link for the 10G link(s) between a single PC and unraid.

In this case, I'd recommend using a different IP network for the direct link.
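
As an illustration (the addresses are made up - use anything that doesn't collide with your existing LAN):

main LAN via 1G/switch:      unRaid 192.168.1.10/24, desktop 192.168.1.20/24, gateway 192.168.1.1
direct 10G link, no gateway: unRaid 10.10.10.1/24,   desktop 10.10.10.2/24

Then address the shares over the fast path explicitly, e.g. \\10.10.10.1\share from the desktop; everything else (internet etc.) keeps using the 1G path.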

 

A second scenario is that you use a more capable central switch and attach all wired components to it with a single wire (use the one with the highest capability).

This would enable all clients to use the higher bandwidth connection of the 10G link(s).

Here a single IP network would suffice.

To enable all wired ports on each device/component, use bonding (remember my remarks regarding single-threaded applications above).

Edited by Ford Prefect
Link to comment
12 hours ago, Ford Prefect said:

The dual-port card is just a denser physical package, but still two separate ports, like two separate single-port cards. [...]

Thanks very much for your input. I will look over the suggestions and see what I can make work.  Networking with Unraid is not something I'm good at. LOL

Link to comment
On 9/2/2021 at 3:20 PM, Badboy said:

Thanks very much for your input. I will look over the suggestions and see what I can make work.  Networking with Unraid is not something I'm good at. LOL

Hi, 

Just going over your notes. I think the simplest for me might be getting a 10 gig switch: plug it into the modem, then plug unraid and my desktop into the switch. What do you think? Any recommendations for a switch?

Edited by Badboy
Link to comment
22 hours ago, Badboy said:

Any recommendations for a switch?

 

Are all 10G ports based on Copper and do you still want them to be bonded?

 

10G Copper consumes quite some energy, so these switches most likely don't come with passive cooling.

Also, using bonding requires you to go for a managed switch.

 

I personally love the network gear from MikroTik.

However, they only have one model with fixed Copper ports, the CRS312 (https://mikrotik.com/product/crs312_4c_8xg_rm), but many more with SFP+.

 

Do you need to use Copper/LAN?

Also, some Users complain about Performance with some Intel 10G Cards.

A Mellanox ConnectX-3 with SFP+ is much cheaper and will work flawlessly in unRAID.

With a direct connection, in a Rack or on your Desk/Room, you could trade Copper for SFP+ (DAC or pure Fiber "cable").

Copper only makes sense if you need to run the connection across rooms or floors and you already have the CAT7 wires.

 

Link to comment

Sorry, my main goal is to be able to transfer files faster between my desktop and unraid. The plan was to utilize the 10 gig port on my ISP modem, which would go directly to the server. Then use the other NIC port to run it to the desktop. As I'm typing this, I'm thinking I might just forget about the desktop.

 

My separate backups are going to be created on the unraid server, so the large files won't have to go far. I have about 25TB of media files that I need to create a backup for, in case unraid crashes a few drives. I doubt I'm going to move files that size to the desktop. Would be nice to have the connection there just in case; I think I can live without it. I do like that switch you sent me, and it's not that pricey. I might consider that. Need one with at least 3 10 gig ports. I use CAT7 cabling. I really appreciate your help and input.

 

Thank you. 

Edited by Badboy
Link to comment
35 minutes ago, Badboy said:

Sorry, my main goal is to be able to transfer files faster between my desktop and unraid. 

OK, in your first post, the way I understood it, you said that you had two connections/wires between desktop and unraid, hence my assumption that you wanted to bond/combine the bandwidth.

 

42 minutes ago, Badboy said:

The plan was to utilize the 10 gig port on my ISP modem, which would go directly to the server.

If this is just a modem, that link will need to go into a Router (with firewall)...unraid does not have that (besides the option to run a Router-VM, but I would not recommend that)

 

What's the 10G type on your GPON/mediabox from your ISP...normally, with 10G, I'd think it is fiber/SFP+ (or is this 1G only?)

And what Router make&model do you have?

 

The "normal" setup is something like this;

 

ISP -> ISP-GPON/mediabox (1/10G SFP+ or 1/10G-T) -> 1/10G Router(WAN)/LAN -> 10G Switch with 1x10G (unRaid) + 1x10G (desktop) + Nx1/2.5G/10G (other LAN/WiFi APs) devices/clients connected to the switch.

 

35 minutes ago, Badboy said:

My separate backups are going to be created on the unraid server, so the large files won't have to go far. I have about 25TB of media files that I need to create a backup for, in case unraid crashes a few drives. I doubt I'm going to move files that size to the desktop. Would be nice to have the connection there just in case; I think I can live without it. I use CAT7 cabling. I really appreciate your help and input.

So what is the physical setup...CAT7 cabling across the house (structured cabling, inside walls and sockets), or are you planning to just use a length of CAT7 patch cable, lying loosely on the floor across the room, for that "project" of yours?

Do you already have a 1G switch, and do you plan to exchange it for a 10G model?

For the new switch: how many 1G ports are still required and how many 10G Ports?

As an example, there are (relatively) low-priced Switches from Aruba, fanless with 24/48x1G and 4xSFP+ ... you could add 2 10G-T transceivers and 2 SFP+ DAC/Fiber transceivers.

See: https://skinflint.co.uk/hp-aruba-instant-on-1930-rackmount-gigabit-smart-switch-jl682a-a2314000.html

 

 

As said, some users have reported performance problems with Intel-based 10G NICs.

If you can, I'd stay away from RJ45 (10G-T) where possible.

Link to comment

Sorry about that. I was referring to the dual network card, which I'm going to send back, and eventually get the switch box that you showed me. I've heard of that product; pretty good. Might be better because I use a cheap little switch for the Xbox, TV, etc. I can plug everything into that. Are they quiet? Amazon Canada is sold out right now.

 

Other question: if I plug my single 10 gig network card directly into the ISP modem (Bell Hub 4000), does the network card have to be the first selection in the Unraid network settings, or should it be the 1 gig connection? Any other settings that need to be changed?

 

Thanks for showing me the switch; I should have done that in the first place.. Lol

Link to comment

OK, sorry but I am not used to equipment from Bell CA...I gather the Bell Hub 4000 is this: https://www.dslreports.com/forum/r33043422- ??

That is not just a modem, but a WiFi station, including a firewall, I hope ;-).

 

It looks like it has 5 LAN ports, of which the first (silver) is 10G....now I have a better understanding, I think.

 

59 minutes ago, Badboy said:

Might be better because I use a cheap little switch for the Xbox, TV, etc. I can plug everything into that. Are they quiet? Amazon Canada is sold out right now.

The HP Aruba model I just linked above?

Yes, this model is fanless...no noise at all.

But you will need a 10G-T transceiver in order to connect to the Bell, like this: https://www.fs.com/products/87588.html

Mind you, Aruba equipment is picky when it comes to the brand which is "flashed/burned" into the transceiver...so this will be more expensive than a standard cisco/mikrotik/generic one.

 

 

Here are some other, more flexible options:

QNAP:  https://skinflint.co.uk/qnap-qsw-m400-desktop-gigabit-managed-switch-qsw-m408-2c-a2305308.html

 

Zyxel: https://skinflint.co.uk/zyxel-xgs1250-desktop-gigabit-smart-switch-xgs1250-12-zz0101f-a2492032.html

...but I cannot comment on their noise level, as they do have a fan.

 

If you go (for unraid and desktop) for a mellanox connectx-3 card (https://www.ebay.ca/itm/223088957450?hash=item33f123580a:g:XUgAAOSwjXZcXHsa), you could use DAC "cables" (good for up to 7m length).

For a fanless switch, these are other good options (which I use) - 4 x SFP+ ports: https://skinflint.co.uk/mikrotik-cloud-router-switch-crs305-dual-boat-desktop-10g-smart-switch-crs305-1g-4s-in-a1923200.html or 8 x SFP+ ports: https://skinflint.co.uk/mikrotik-cloud-router-switch-crs309-dual-boat-desktop-10g-smart-switch-crs309-1g-8s-in-a2023154.html?hloc=at&hloc=de&hloc=pl&hloc=uk&hloc=eu

As a 10G-T module: https://www.fs.com/products/87588.html ... this one is good for up to 80m (cisco, mikrotik, generic...Mikrotik switches are not picky at all), or https://skinflint.co.uk/mikrotik-routerboard-10g-lan-transceiver-s-rj10-a1827894.html (I have a pair running at 10G over 18m of CAT5e cable).

 

Mind you, these 10G-T transceivers will get hot, so only populate every second port with one of these in a fanless switch!!!!

 

59 minutes ago, Badboy said:

Other question: if I plug my single 10 gig network card directly into the ISP modem (Bell Hub 4000), does the network card have to be the first selection in the Unraid network settings, or should it be the 1 gig connection?

For a start, use 10G as eth0...you don't need the 1G connection.....use no bond, but a bridge with only the 10G/eth0 NIC in it.

Alternatively, or later, you could add the 1G in a bond in failover mode, in case the 10G link goes down.

Or use it as a dedicated NIC in a VM.
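
If the 10G port does not come up as eth0 by itself, that assignment is what the Interface Rules page writes to the flash drive - to my knowledge as udev rules in /boot/config/network-rules.cfg, though the file name and exact fields here are from memory and the MAC address is a placeholder, so double-check on your system (the GUI under Settings -> Network Settings -> Interface Rules achieves the same):

# hypothetical /boot/config/network-rules.cfg entry pinning the 10G port to eth0
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", KERNEL=="eth*", NAME="eth0"

A reboot is needed before changed interface rules take effect.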

 

59 minutes ago, Badboy said:

Thanks for showing me the switch; I should have done that in the first place.. Lol

no worries...see my additional comments / alternatives above.

Link to comment
