Duplicate ethernet adaptors after 10Gb upgrade? Need help troubleshooting.



Hi all,

I've been getting very strange behaviour on both my Unraid systems since upgrading my NICs to 10Gb Mellanox ConnectX-3 cards. I've chronicled my journey to using 10Gb in a separate thread over here. I've started this thread to try and figure out whether there is a way to resolve the issues I'm encountering.

 

My intention is to have the bonded network (active-backup, mode 1) working with the first port of the Mellanox 10Gb card as primary and the onboard 1Gb as the secondary failover. My understanding of how this should work is that the "primary" interface should be prioritised, with the bond failing over to the secondary (and then to further interfaces, if configured) if the primary is interrupted. I also understand that the bond should fail back to the primary interface should it reconnect. That doesn't seem to be happening, or I've misunderstood how it works.
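For reference, the kernel's bonding driver exposes the mode, the configured primary and the currently active slave, so a quick way to see whether the bond is actually preferring the 10Gb port is something like the below (assuming the bond is named bond0, which I believe is the Unraid default):

# Show the bond mode, the configured primary and which port is carrying traffic right now
grep -E 'Bonding Mode|Primary Slave|Currently Active Slave' /proc/net/bonding/bond0

# List the interfaces currently enslaved to the bond
cat /sys/class/net/bond0/bonding/slaves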

 

So far I've not been able to make that work as expected, and I'm struggling to determine why. When either system boots up, the onboard 1Gb is set as the primary bonding interface until I manually unplug the 1Gb and the bond fails over to the 10Gb. It does not seem to fail back to the 1Gb unless I unplug the 10Gb.

 

I'm quite sure it's not the cards themselves that are the issue, as they seem to operate as expected, but rather some software configuration or setting in Unraid. At this point I'm running everything pretty much stock and am still experiencing issues.

Is there some way to manually set the bonding primary interface, so that the 10Gb port is always preferred?
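(For anyone reading along: as far as I understand it, the bonding driver lets you change the primary and the reselect policy at runtime through sysfs, roughly as below, assuming the bond is bond0 and the first Mellanox port is eth0. I haven't confirmed whether Unraid's own network scripts overwrite this on the next reboot.)

# Prefer the 10Gb port as the bond's primary interface
echo eth0 > /sys/class/net/bond0/bonding/primary

# Fail back to the primary automatically whenever it comes back up
echo always > /sys/class/net/bond0/bonding/primary_reselect

# Confirm what the bond is actually doing
grep -E 'Primary Slave|Currently Active Slave' /proc/net/bonding/bond0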

 

I'm also seeing some other strange behaviours that I can't determine the reason for, and I continue to troubleshoot:

On both servers, I have (and am using) the 10Gb Mellanox card and the 1Gb onboard NIC of the respective motherboard.

On both servers, I have tried wiping the network.cfg and network-rules.cfg but the same issues return.

On both servers, bonding (active-backup mode 1) and bridging is enabled by default.

 

If anyone has any advice or can help troubleshoot I'd be thankful.

 

A few issues are present:

- network interface assignments changed.

On Unraid1 I have eth0, eth1, eth2 and also eth3(?), yet I should only have 3 physical interfaces.

On Unraid2 I'm getting eth0, eth2 and eth3. I'm not sure why eth1 has been skipped.

 

- duplicate entries in the network-rules.cfg file.

On Unraid1, I'm seeing 4 entries for the rules in the config (eth0 to eth3) but still only have 3 physical interfaces. This causes an apparently known issue where I'm unable to edit the "interface rules" in the network settings GUI. I have to go back to the network-rules.cfg file, delete one of the entries and then return to the GUI. This reoccurs after each reboot (a quick way to spot the duplicates is sketched below the screenshot).

On Unraid2, this occurred once but doesn't seem to have been an issue since; the system has apparently settled on eth0, eth2 and eth3.

[Screenshot: duplicate interface entries]
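(The quick check I've been using to spot the duplicates before opening the GUI, assuming the file lives at /boot/config/network-rules.cfg as it does on my systems:)

# Count how many rules map to each ethN name; anything above 1 is a duplicate
grep -o 'NAME="eth[0-9]*"' /boot/config/network-rules.cfg | sort | uniq -c

# Show which MAC address each rule pins, to identify the stray entry
grep -o 'ATTR{address}=="[^"]*"' /boot/config/network-rules.cfg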

 

- br0 has disappeared.

On Unraid1, as of today, the br0 interface seems to have disappeared.

I'm not sure why this is; I haven't modified anything in networking yet, apart from changing the "interface rules" order and setting an interface description.

I can access the webgui but I now get an error when trying to start any VMs:

[Screenshot: error shown when trying to start a VM]

 

- boot up log errors (I think) related to interface assignments.

On Unraid1, I have started seeing errors in the bootup log stating:

'Not enough information: "dev" argument is required.
Error: Argument "down" is wrong: Device does not exist'

[Photo: boot log showing the errors above]

 

The main bootup screen doesn't seem to show an IP either; I'm not sure if this is related:

[Photo: main bootup screen with no IP address shown]

Edited by KptnKMan
Link to comment

So I got the br0 network interface back to a "working" state by essentially doing the following (the checks I used to verify it are sketched after the list):

- In Settings, disable Docker
- In Settings, disable VM Manager
- In Network Settings, disable bonding and bridging
- Reboot
- Fix the "interface rules" duplicates manually again
- Confirm that everything is in the correct order
- Reboot
- Fix the "interface rules" again, and confirm
- In Network Settings, enable bonding and bridging
- Reboot
- Confirm the settings stuck, and fix the "interface rules" again
- Confirm br0 is present and working
- Enable Docker
- Enable VM Manager
- Verify VMs work again
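(For anyone following along, the commands I used for the "confirm br0 is present and working" step were roughly these, assuming the default bond0/br0 names:)

# The bridge device exists and is up
ip link show br0

# bond0 should be listed as a port of the bridge
bridge link show

# The bond is up and using the expected port
grep -E 'Bonding Mode|Currently Active Slave' /proc/net/bonding/bond0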

 

Really not sure why that got broken.

 

Also on Unraid1, the physical interfaces now seem to show up as eth0, eth1 and eth3.

I still need to manually modify the network-rules.cfg file after each bootup, otherwise 4 interfaces show up in the "interface rules" GUI.

I just encountered the same missing-br0 issue on Unraid2 and resolved it following the same process above, though the interfaces are labelled the same afterward.

 

I'd really appreciate it if anyone has any idea what might be happening.

Edited by KptnKMan
Link to comment
On 10/25/2021 at 5:58 PM, KptnKMan said:

I'm also seeing some other strange behaviours that I can't determine the reason for, and I continue to troubleshoot:

On both servers, I have (and am using) the 10Gb Mellanox card and the 1Gb onboard NIC of the respective motherboard.

On both servers, I have tried wiping the network.cfg and network-rules.cfg but the same issues return.

On both servers, bonding (active-backup mode 1) and bridging is enabled by default.

 

[...]

 

- boot up log errors (I think) related to interface assignments.

On Unraid1, I have started seeing errors in the bootup log stating:

'Not enough information: "dev" argument is required.
Error: Argument "down" is wrong: Device does not exist'

 

I notice with my single-port Mellanox cards, even on Windows, that booting/bringing them up to a full working state takes a substantial amount of time, much longer than with the i350-T4 cards I used before going 10G.

 

If I recall correctly from your other thread, you are using DUAL Port Mellanox X3 cards.

This behaviour sounds like a kind of "ghost" port that is visible for some time, then disappears.

As to why, I can only speculate.

But this kind of behaviour could match what is observed.

AFAIR you are only using one SFP+ port on the DUALs, aren't you?

 

As I am running mine without a monitor, I never saw them POST.

Maybe there is a BIOS on the card that is adding to the boot time, and maybe it is responsible for this kind of behaviour too.

I wonder...

  • whether you could configure the same parameters when entering the card's secondary BIOS during BIOS setup/cold boot, or
  • as is the case with an HBA card, whether you could flash the card with firmware but without the card BIOS?
Link to comment
4 hours ago, Ford Prefect said:

I notice with my single-port Mellanox cards, even on Windows, that booting/bringing them up to a full working state takes a substantial amount of time, much longer than with the i350-T4 cards I used before going 10G.

You mean besides the netboot BIOS?

I haven't seen any delays other than that with the cards coming up; they seem to be recognised and active when Unraid boots on both servers.

 

4 hours ago, Ford Prefect said:

If I recall correctly from your other thread, you are using DUAL Port Mellanox X3 cards.

This behaviour sounds like a kind of "ghost" port that is visible for some time, then disappears.

As to why, I can only speculate.

But this kind of behaviour could match what is observed.

AFAIR you are only using one SFP+ port on the DUALs, aren't you?

Yep, I'm using the dual-port SFP+ cards, and only using a single SFP+ on each card, just as you say.

I'm not sure why more "ghost" ports would be registered, as I can see all 3 (Including the onboard 1Gb) listed when the system comes online.

Is a ghost port something that has been observed in other uses of the Mellanox X-3 cards? Is this known behaviour with dual-port cards?

The BIOS on these cards is the latest installed from the Mellanox site, installed using the Mellanox Tools for Unraid, exactly as instructed for the tools (As detailed in the GUI).

 

Also, I have this issue on both servers where there is seemingly a skipped interface.

I noticed that in the network.cfg there is a line for 'BONDNICS[0]'.

Originally, this read as 'BONDNICS[0]="eth0 eth1 eth2 eth3"'

But now it reads as 'BONDNICS[0]="eth0 eth1 eth3"' on Unraid1, and 'BONDNICS[0]="eth0 eth2 eth3"' on Unraid2.

Unraid1 is where I'm having the issue where the system seems unable to decide which interface IDs are in use, and it lists 4 in the network-rules.cfg file. Unraid2 has not shown this particular issue.
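(What I've been doing to compare what the config says against what the kernel actually enslaved, assuming the Unraid config lives at /boot/config/network.cfg and the bond is bond0:)

# What Unraid was told to bond
grep BONDNICS /boot/config/network.cfg

# What actually got enslaved on this boot
cat /sys/class/net/bond0/bonding/slaves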

 

4 hours ago, Ford Prefect said:

- boot up log errors (I think) related to interface assignments.

On Unraid1, I have started seeing errors in the bootup log stating:

'Not enough information: "dev" argument is required.
Error: Argument "down" is wrong: Device does not exist'

Also, this bootup error seems to have disappeared after I jumped through the hoops to remove and then re-add the br0 interface, as I mentioned above.

I suspected that was related to the bridge getting messed up somehow, and that seems to have reset it.

I understand that the bridge itself is also listed as a device, as basically everything is in Linux. So with it being broken, the startup scripts were probably either looking for it or assuming it was there in order to set up the bridge and/or bond?
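(Purely speculating, but a simple existence check of the kind below in whatever script brings the old bridge down would avoid that particular error. I don't know what Unraid's network scripts actually do internally, so this is just an illustration.)

# Only touch br0 if it actually exists on this boot
if ip link show br0 >/dev/null 2>&1; then
    ip link set br0 down
fi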

 

Interestingly, this s***show began after I simply set the interface description. I changed nothing else.

Before that I had wiped the network.cfg and network-rules.cfg files, all settings were running at defaults, and I noted that the network.cfg file was not recreated until I set the interface description.

 

As soon as I did that, everything seemed to get weird. Happened on both Unraid systems. ¯\_(ツ)_/¯

Link to comment
11 hours ago, KptnKMan said:

Yep, I'm using the dual-port SFP+ cards, and only using a single SFP+ on each card, just as you say.

I'm not sure why more "ghost" ports would be registered, as I can see all 3 (Including the onboard 1Gb) listed when the system comes online.

Is a ghost port something that has been observed in other uses of the Mellanox X-3 cards? Is this known behaviour with dual-port cards?

The BIOS on these cards is the latest installed from the Mellanox site, installed using the Mellanox Tools for Unraid, exactly as instructed for the tools (As detailed in the GUI).

Well, I just invented that "term", as obviously during boot/after reboot the number allocation changes.

Also that error message from an initialisation script "device does not exist" during boot is suspicious.

Either the device was there when the script was started, but got lost by the time the script reached that part of the code (a ghost device), OR the script itself is no longer functioning.

Speaking of which... in your "tinkering" with the network.cfg and network-rules.cfg, what editor did you use, and from what system? Maybe some non-printable characters have found their way into the script/file. This could cause that behaviour as well, hmmm.

Can you check with "cat -vet <filename>"? Maybe it's as simple as there being some CR/LF problems, but I don't think so.
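(With DOS line endings, every line would end in ^M$ rather than just $. Something along these lines, using your network-rules.cfg as an example and assuming it sits on the flash under /boot/config:)

# Non-printing characters become visible; CRLF endings show up as ^M$ at the end of each line
cat -vet /boot/config/network-rules.cfg

# If they are there, this strips them (keep a copy of the file first)
sed -i 's/\r$//' /boot/config/network-rules.cfg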

 

Besides that, you find me clueless. 🥴

Link to comment
On 10/27/2021 at 12:55 PM, Ford Prefect said:

Well, I just invented that "term", as obviously during boot/after reboot the number allocation changes.

Also that error message from an initialisation script "device does not exist" during boot is suspicious.

Either the device was there when the script was started, but got lost by the time the script reached that part of the code (a ghost device), OR the script itself is no longer functioning.

Yeah, that's fair enough; I didn't know if there was something else behind your comment.

I suppose it is fair enough, I often think out loud as well.

 

The script error does seem to have gone after the recreation, so I think there was a script with an "assumption" somewhere that didn't check if the bridge device existed first.

It is a bit strange though. I'm not sure if I'll get to the bottom of that one, but hopefully someone can find this thread and it might help them in the future.

 

On 10/27/2021 at 12:55 PM, Ford Prefect said:

Speaking of which... in your "tinkering" with the network.cfg and network-rules.cfg, what editor did you use, and from what system? Maybe some non-printable characters have found their way into the script/file. This could cause that behaviour as well, hmmm.

Can you check with "cat -vet <filename>"? Maybe it's as simple as there being some CR/LF problems, but I don't think so.

I usually use Notepad++ from another machine, which allows for whitespace and non-printable character detection, as I've had that annoyance before.

It's a good thing to stay vigilant for, and I will take a deeper look to see if something like that is occurring.

For now, it seems to have confirmed my suspicions that the files were created when the network configuration was modified in the GUI, but that caused a bunch of things to go wrong at the same time.

 

It's quite sad that this doesn't want to behave with a relatively simple configuration.

Link to comment
  • 4 months later...

A couple of us are having the same issue with any dual-port Mellanox NIC + onboard NIC - basically the same stuff you are seeing. Did you figure out what was wrong? I filed a bug report and they said it was card related, so did you have to flash anything? I have updated to the latest firmware and disabled netboot for faster startups. I can recreate it with all my dual-port cards; single-port seems fine, but I only have 1 to test.

 

 

Link to comment
  • 1 month later...
On 3/25/2022 at 4:53 PM, Jclendineng said:

A couple of us are having the same issue with any dual-port Mellanox NIC + onboard NIC - basically the same stuff you are seeing. Did you figure out what was wrong? I filed a bug report and they said it was card related, so did you have to flash anything? I have updated to the latest firmware and disabled netboot for faster startups. I can recreate it with all my dual-port cards; single-port seems fine, but I only have 1 to test.

 

 

Hey man, I haven't solved this issue; I've just been ignoring it for a little while because everything "works" well enough in its current state.

My plan to have failover working is not possible with this issue present though, so that's a blocker.

 

Every time I come back to see if anyone has gotten further, the best advice I see is to upgrade the firmware and check whether it fixes it.

So far from what I've seen, reports show that the latest firmwares don't fix the issue. Bummer.

Link to comment
  • 3 weeks later...

So this is essentially the same post as the one I left in the other thread, but I'm posting it here for completeness and, if possible, I'd like to get @Ford Prefect's opinion on the basic testing I did:


Yesterday I took the dive and updated my secondary unRAID to 6.10.1, then later my primary once I saw everything was working well.

With particular regard to the 10Gbit dual-port ConnectX-3 cards, I did some rudimentary testing after the upgrade on both systems (I had 164 days of solid uptime on my secondary 😁) and found that it works, but (at least in my case) not perfectly. I'm going to keep an eye on it, but I'll try to explain my findings.

 

So after upgrading my secondary (my less fussy system, because I hardly ever mess with it), I saw that the installed cards were all listed, but as eth0 (mlx4_core), eth2 (mlx4_core) and eth3 (igc). So I went to the network.cfg and network-rules.cfg, changed them to eth0, eth1, eth2, and rebooted. Upon reboot I found the same issue where the cards were duplicated and had become eth0 (mlx4_core), eth1 (igc), eth2 (mlx4_core, duplicate?), eth3 (mlx4_core).

 

So I thought "uh oh" and edited the network.cfg and network-rules.cfg to reflect the original setup eth0, eth2, eth3. That seemed to work again, and I fully rebooted 3 times to check confirm that the configuration would persist through restarts. Seems good, so then I thought upgrade the primary unRAID to see what's really up.

 

Upon upgrading my primary unRAID, I immediately saw that 4 cards were listed: eth0 (mlx4_core), eth1 (r8169), eth2 (mlx4_core, duplicate?), eth3 (mlx4_core). So I immediately tried to replicate the eth0+2+3 configuration of the secondary. After setting network.cfg and network-rules.cfg and rebooting, that seemed to work and no duplicates were present. They showed up as eth0 (mlx4_core), eth2 (mlx4_core), eth3 (r8169).

 

I rebooted my primary server a further 3 times, to check that the card assignments were persistent.
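(To compare assignments between boots I've just been dumping the name, MAC and driver of each interface to a log on the flash drive, something along these lines:)

# Append the current ethX assignments to a log that survives reboots
for nic in /sys/class/net/eth*; do
    echo "$(date +%F_%T) $(basename $nic) $(cat $nic/address) $(basename $(readlink $nic/device/driver))"
done >> /boot/nic-assignments.log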

It looks at this moment like the configuration stuck, but I'm still hesitant to change anything. I'm planning in the coming days to reboot my primary a few more times and see if anything switches around, or goes strange.

 

So far, so good though. 👍

Link to comment
42 minutes ago, KptnKMan said:

So this is essentially the same post as the one I left in the other thread, but I'm posting it here for completeness and, if possible, I'd like to get @Ford Prefect's opinion on the basic testing I did:

Well, it's been a while but as the info from the other thread suggests, this is still not fixed and probably a firmware issue with the card.

 

I can't see a flaw in what you tried out / tested...so basically I tend to agree with the others.

Maybe a final test, as I am not sure if this has been tried in the past, is to find out if the problem is somewhat related to the presence of another NIC in the system.

What one could do for testing is to disable the other NICs in the BIOS or moving them away from the kernel during boot by means of enabling them for vfio.

 

Besides that, I am clueless, but I don't have a DUAL CX3 and also am still running on 6.9.2

Edited by Ford Prefect
Link to comment
2 hours ago, Ford Prefect said:

Well, it's been a while but as the info from the other thread suggests, this is still not fixed and probably a firmware issue with the card.

 

I can't see a flaw in what you tried out / tested...so basically I tend to agree with the others.

Maybe a final test, as I am not sure if this has been tried in the past, is to find out if the problem is somewhat related to the presence of another NIC in the system.

What one could do for testing is to disable the other NICs in the BIOS or moving them away from the kernel during boot by means of enabling them for vfio.

 

Besides that, I am clueless, but I don't have a DUAL CX3 and also am still running on 6.9.2

I'm not actively using the onboard for anything, although I still want working "active-backup mode 1" interface fault tolerance, which I noticed is now apparently set by default since the upgrade from 6.9.2 to 6.10.1.

I'll also try testing with the onboard NIC disabled in the BIOS and see how that goes, but I'm not sure when I'm going to have time in the next days/week.

As soon as I do, I'll post my results and hope someone finds them useful.

 

I'll just add that the upgrade to 6.10.1, in my setup at least, was flawless and currently without any noticeable issues.

Bravo to limetech for this achievement, I know that 6.10 was long in testing. 👍

Edited by KptnKMan
Link to comment
30 minutes ago, Ford Prefect said:

...did you see the response in the other thread, here ? ->

Oh wow, just seen it now thanks.

I'm glad this is being investigated.

 

I guess I don't need to test disabling the other NICs if it's being chased down.

It doesn't "seem" (in my uneducated view) to be due to other NICs being present, but I have no idea how to prove that.

 

Still, I will test whether the settings can at least persist in their current state through half a dozen reboots or so.

It also seemed in my testing that as soon as I try to line the interfaces up, it freaks out. It seems like interfaces get created and then mess up what I'm blindly doing.

Still, if the config assignments can at least persist, I can leave it alone and wait for smarter people to fix it. Hopefully.

Link to comment

So I upgraded both my main unRAID systems to 6.10.2, and some issues were encountered.

It seems that the duplicate interfaces issue is still more or less the same as what I saw in 6.10.1, but I can't really verify that.

 

On the primary, unRAID1, it looks like the upgrade went without incident, and everything came up as eth0 (mlx4_core), eth2 (mlx4_core), eth3 (r8169). I'm not going to complain about that because there seem to be no duplicates, but the skipped eth1 is still present. As long as it works, I'm not fussed.

 

On the secondary, unRAID2, the upgrade seems to have reset and swapped around my interfaces to eth0 (igc), eth1 (mlx4_core) and eth2 (mlx4_core). The main Unraid screen booted up with both IPv4 and IPv6 shown as "not set", and I couldn't seem to change the interface order, so I had to reboot into safe mode to swap the interfaces back to eth0 (mlx4_core), eth1 (mlx4_core) and eth2 (igc). After this, on a normal bootup, the interface rules selection is completely missing from my network settings, as the network-rules.cfg file has disappeared. I read on another page that creating a blank network-rules.cfg and rebooting would fix the problem, but that new network-interfaces.cfg file is gone as well and I'm still stuck without any interface rules selection. I also tried disabling bridging, then bonding, then both, to see if it would trigger the appearance of the interface rules, but nothing.

 

So in the end, my primary unRAID has the odd interface numbering but seems to work, and the secondary unRAID has normal interface numbering but a suddenly missing network interfaces dialogue, which I'm not sure how to fix.

 

@bonienl, if I may ask: I saw that you posted about the fixes in the other related thread. Do you have any idea what's happening? Any idea how I can force access to the network interfaces dialogue? I definitely have multiple interfaces installed and listed.

I've added diagnostic files if that helps.

primary-diagnostics-20220529-1351.zip
secondary-diagnostics-20220529-1353.zip

Edited by KptnKMan
Link to comment
  • 3 weeks later...

Well, this just became a major issue for me today.

Rebooted my secondary, and the cards swapped around again, so that the onboard is eth0, with the Mellanox ports as eth1 and eth2.

This also means that the bond MAC address is now my onboard's, which is not good. My assignments use the Mellanox card's MAC.
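(The quick way to see which MAC the bond has picked up versus the real MACs of the ports underneath it, assuming bond0:)

# MAC the bond (and therefore br0) is presenting to the network
cat /sys/class/net/bond0/address

# Permanent hardware MACs of each enslaved port
grep -E 'Slave Interface|Permanent HW addr' /proc/net/bonding/bond0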

 

While this is happening, I still have no access to the interface rules dialogue, so now everything is messed up and I cannot switch it back.

Also checking the system, I can see that the network-interfaces.cfg file is not created.

 

@bonienl I realise that this is not like a support ticket or something but this is supposed to be fixed?

I don't know what to do. I'm kinda stuck here. Any advice?

Link to comment

Thanks for the advice. This secondary system is my "stable" unRAID that I basically never mess with, so I'm not keen on non-stable releases here.

 

I already downgraded to 6.10.1, and both the interface rules and interface-rules.cfg have reappeared.

I saw the downgrade worked for people in this thread:

 

However, now I have this (the dropdown shows duplicates):

[Screenshot: interface rules dropdown showing duplicate entries]

 

I'm going to delete the interface-rules and reboot to see if that helps.

Edited by KptnKMan
Link to comment

FYI it is not fixed in 6.10.3; they just removed eth2, so I have eth0 (Mellanox port 1), eth1 (onboard management NIC), and eth2 (Mellanox port 2). eth0/2 have the same MAC address now, and interface rules only shows eth0/1, with my duplicate NIC in the list alongside the built-in NIC. So it's still duplicated...
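Easiest way I've found to see the duplicate MACs, for anyone wanting to check their own box (ethtool -P should show the burned-in address of a port, if I remember the flag right):

# One line per interface, so duplicate MACs stand out
ip -br link show

# Burned-in (permanent) MAC of a given port, for comparison
ethtool -P eth0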

Link to comment
3 minutes ago, KptnKMan said:

so I'm not keen on non-stable releases here.

IMHO v6.10.3-rc1 is the most stable of all the v6.10.x releases, but if you don't want to update, wait for v6.10.3 final, which should be released very soon and will likely be basically the same as 6.10.3-rc1; there's no point in using the older releases that have known issues that are already fixed.

Link to comment
2 minutes ago, Jclendineng said:

FYI it is not fixed in 6.10.3

6.10.3 fixed the problem where some Mellanox NICs weren't detected or could not be set as eth0. Mellanox NICs that have duplicate MAC addresses are a different issue, and likely a NIC problem; I never had that issue with my dual-port Mellanox NICs.

Link to comment

Well, I rebooted and the same issue reappeared, as if nothing had happened.

I hacked the network-rules.cfg manually with the correct interface IDs, kernel modules and hardware addresses... and it reboots fine now.

I've rebooted the system 3 times in a row now just to see if something drifts, but it's OK... for now.

How mine is supposed to look in my config:

[Screenshot: the intended interface assignments]
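(For anyone searching later: the entries in network-rules.cfg are udev rules that pin a MAC address to an interface name. With made-up MAC addresses, pinning the first Mellanox port to eth0 looks roughly like this; the exact attributes may differ slightly on your system, so treat it as a sketch rather than something to copy verbatim.)

# Mellanox ConnectX-3 port 1 (mlx4_core) - MACs below are made up
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:01", ATTR{type}=="1", NAME="eth0"
# Onboard 1Gb NIC
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:03", ATTR{type}=="1", NAME="eth1"
# Mellanox ConnectX-3 port 2
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:02", ATTR{type}=="1", NAME="eth2"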

 

24 minutes ago, JorgeB said:

IMHO v6.10.3-rc1 is the most stable of all the v6.10.x releases, but if you don't want to update, wait for v6.10.3 final, which should be released very soon and will likely be basically the same as 6.10.3-rc1; there's no point in using the older releases that have known issues that are already fixed.

I understand what you're saying, but I don't have time to verify 6.10.3 right now.

I only rebooted to add something to the system, and then all hell broke loose.

 

I know the issues with 6.10.1 at this time, and I thought I knew 6.10.2, but I was wrong there as usual.

I'm sticking on 6.10.1 until a stable 6.10.3 comes out.

 

22 minutes ago, JorgeB said:

6.10.3 fixed the problem where some Mellanox NICs weren't detected or could not be set as eth0. Mellanox NICs that have duplicate MAC addresses are a different issue, and likely a NIC problem; I never had that issue with my dual-port Mellanox NICs.

In my experience, this doesn't match the behaviour at all. I'm using dual-port Mellanox CX-3 cards in both my servers and can confirm the behaviour.

What seems to happen quite consistently, and as I've documented extensively on this forum in threads, is that the first Mellanox interface seems to be fine but the second appears to be created twice. Then some kind of cleanup happens and a gap is left. That process of creating/removing the second interface seems to mess up the other assignments. If the Mellanox dual-port card is assigned last, it doesn't seem to have the issue as far as I can tell, but in unRAID, if you want the Mellanox MAC as the bond MAC, then it needs to be the first MAC, on eth0. 😐

So in my experience, on both my servers, eth0 has never been the issue if the first Mellanox is set to eth0.
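(What I've been using to watch that create/rename churn during boot, in case it helps anyone reproduce this:)

# Kernel's view of how the Mellanox ports came up and got renamed on this boot
dmesg | grep -iE 'mlx4|renamed|eth[0-9]'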

 

I could be wrong here, but I'm just saying what happened to me.

Edited by KptnKMan
Link to comment
7 minutes ago, KptnKMan said:

What seems to happen quite consistently, and as I've documented extensively on this forum in threads, is that the first Mellanox interface seems to be fine but the second appears to be created twice.

It's not a common issue, and it's possibly NIC related; I myself have several dual-port Mellanox NICs and never saw that problem. I did see the problem with v6.10.2 where I could not set a Mellanox NIC as eth0, but of course I cannot exclude your issue being some kind of bug.

 

11 minutes ago, KptnKMan said:

If the Mellanox dual-port card is assigned last, it doesn't seem to have the issue

Well, this could explain why I've never seen that, I use all my Mellanox NICs as the last NICs in the list, I'll see if I can test that when I have some time.

Link to comment
46 minutes ago, JorgeB said:

It's not a common issue, and it's possibly NIC related; I myself have several dual-port Mellanox NICs and never saw that problem. I did see the problem with v6.10.2 where I could not set a Mellanox NIC as eth0, but of course I cannot exclude your issue being some kind of bug.

 

Well, this could explain why I've never seen that, I use all my Mellanox NICs as the last NICs in the list, I'll see if I can test that when I have some time.

I've found that splitting them up as Mellanox-eth0, onboard-eth1, Mellanox-eth2 produces the most consistent results. See the screenshots I posted earlier; this seems to be working now, as it did quite consistently for me in previous releases.

 

37 minutes ago, JorgeB said:

But first test with v6.10.3 when it's released since there were changes to NIC detection.

Yeah, we'll see when that happens, I'm not trying to rush anyone.

I'm just trying to work an angle that I know "reliably", rather than test a new workaround. Nothing is perfect.

 

As an aside though, the networking issues since I upgraded to 10Gbit have put things on hold for about as long as I've had 10Gbit now.

All I wanted was to set up a working 10Gbit (Mellanox) / 1Gbit (onboard) failover bond, but that seems to be too much to ask. These days I just want a stable server and a single 10Gbit connection that persists across reboots. Details of that journey are in my other long thread.

Edited by KptnKMan
Link to comment

Had some time to test this today. This is with v6.10.3, and it's working correctly for me with one Mellanox NIC as eth0:

 

[Screenshot: interface rules with the Mellanox NIC assigned as eth0]

 

This is the Mellanox NIC I'm using, a ConnectX-3:

 

[Screenshot: details of the ConnectX-3 NIC in use]

 

When you have a chance, check if v6.10.3 solves the issue for you; if it doesn't, it might be NIC related, like bad firmware or some other issue.

Link to comment
