Move NIC and boot drive to new build - network changed?


As the title states, I moved my array drives, the boot drive, and the Intel 4-port NIC I was using to a new box I've built.  I went from an HP motherboard with an Intel CPU to an MSI X570 board with an AMD CPU, and while the flash drive boots, I'm getting no IP.  I moved the boot disk and NIC back to the older box and I'm now seeing the same issue there too.

 

Any idea why this would happen?   The network.cfg looked fine to me, but there is at least one difference: the HP mobo had an onboard Realtek NIC, which I had disabled in Unraid, while this box has an onboard 2.5Gb Intel NIC.  When I read the steps on moving to a new server I didn't see anything about network prep.
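In case it helps anyone who finds this later: my understanding (an assumption, I haven't verified it across Unraid versions) is that the network settings live in two files on the flash drive, and the rules file pins ethX names to MAC addresses, so a new motherboard's onboard NIC (new MAC) won't match the old assignments.  A sketch of what to back up before a hardware move:

```shell
# Assumed Unraid flash-drive layout; back these up before moving hardware.
# network.cfg       - IP / bridge / VLAN settings
# network-rules.cfg - udev-style rules pinning ethX names to MAC addresses
#
# Example network-rules.cfg entry (the MAC here is made up):
# SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
#   ATTR{address}=="aa:bb:cc:dd:ee:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
cp /boot/config/network.cfg /boot/config/network.cfg.bak
cp /boot/config/network-rules.cfg /boot/config/network-rules.cfg.bak
```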

 

UPDATE - I moved the internal interface cable to another port on my switch that has no VLANs configured, and it got an IP.   I accessed the GUI and it only shows info for that one interface; VLANs are now disabled, among other differences.  Is this standard Unraid behavior?  Are there steps I can take to prevent this from happening in the future, or do I have to write down the whole network config and recreate it any time I move servers going forward?

 

Thanks for any insights on this.

Edited by BurntOC
UPDATED title to reflect issue
So thankfully I had most of the network config written down from before, so now that I have eth0 working I'm reconfiguring it as it was.  Some things I've noted:

  1. Before, I had eth0-eth3 as the 4 ports on the Intel card, and the integrated NIC was eth4.  Now it seems to have selected eth2 for the onboard Intel NIC and I can't seem to change it.  Maybe it grabbed eth2 on the initial boot before I had disabled it, and it's missing from the GUI b/c it's disabled now?  Not sure if it's worth fixing or if I should just push through my OCD here, LOL.
  2. Related to #1 and having to re-set up my VLANs, I suppose: my VLAN bridges have changed a bit, e.g. br2.60 is now br3.60.  br2.60 still shows in the interface for my Docker settings, with a DHCP pool, but I suppose I should just de-select it and use the br3.60 that shows the proper network config.

If that's generally correct, how can I delete br2.60, and maybe also get rid of eth2 so that my Intel NIC ports stay contiguous?  On a separate, but possibly related, note, I've been seeing additional messages about missing devices and bridges (pic attached), though based on my config, with VLAN 60 on eth3 and VLAN 70 on eth4, at least those SHOULD be there, and they otherwise appear to work fine, unlike br2.60.
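On #1, if the ethX names really are pinned by MAC in network-rules.cfg on the flash drive (my assumption above), then reshuffling them should just be a matter of editing the NAME= fields and rebooting.  A minimal sketch, assuming the udev-style rule format; the helper name and file path are mine, not Unraid's:

```shell
# swap_name FILE OLD NEW: rewrite NAME="OLD" to NAME="NEW" in a rules file.
# Hypothetical helper; the rule format is an assumption.
swap_name() {
  sed -i "s/NAME=\"$2\"/NAME=\"$3\"/" "$1"
}

# e.g. to move the onboard NIC from eth2 to eth4, go via a temp name so
# the two rules don't collide mid-edit (paths assumed):
# swap_name /boot/config/network-rules.cfg eth4 eth_tmp
# swap_name /boot/config/network-rules.cfg eth2 eth4
# swap_name /boot/config/network-rules.cfg eth_tmp eth2
```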

 

With some of these answers in hand I'll be much better equipped going forward, and I expect I'll be able to get my containers and VMs back up and then I can mark this as solved....

IMG_20210101_110053.jpg

So I just decided to delete network.cfg and start over.  That got rid of the br2.60 issue, but I still get the warnings on the other items from the pic above.  It's been that way for a while and everything has worked fine, so I guess it's just a timing issue in Unraid as it brings interfaces up.  I had to change the networks for my containers to reflect the new ones, but they're all working now, too.  Looks like the last challenge is my VM, which doesn't like something about the change.
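For the record, rather than deleting outright, moving the file aside seems safer: Unraid should regenerate a default one on the next boot, and you keep a copy to refer back to.  A sketch, with paths assumed as before:

```shell
# reset_cfg FILE: archive FILE to FILE.old (if it exists) so a default
# version gets regenerated on reboot. Hypothetical helper name.
reset_cfg() {
  [ -f "$1" ] && mv "$1" "$1.old"
}

# e.g. (assumed paths, then reboot):
# reset_cfg /boot/config/network.cfg
# reset_cfg /boot/config/network-rules.cfg
```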

 

I'd still like to understand why I had to go through this process and if there's a more graceful solution for whenever "next time" comes.
