CentOS VM - Unable to detect network



To begin with, apologies: I'm a software dev/apps guy, and a lot of this sort of thing is dark magic to me!

 

I've been trying to create a CentOS 6 VM on my UnRAID box to match my production CentOS environment, but it hasn't been playing ball with networking. In the first environment I created (minimal ISO), I could see eth0 in ifconfig -a, but there was no config file for it. I manually created a config file, but no luck: after a reboot and a network service restart, ifconfig still returned only 'lo'.
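
In case it helps anyone spot a mistake, the config file I created was a minimal DHCP one, roughly along these lines (illustrative rather than my exact file):

    # /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    TYPE=Ethernet
    ONBOOT=yes
    BOOTPROTO=dhcp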

 

I have a public bridge (no idea if that's working), screen print of settings attached.

[Screenshot: VM network settings]

 

I have now tried a netinstall ISO and am presented with this screen asking me to select a device driver. I had a look for the QEMU one mentioned in other posts about drivers, but it's not present.

[Screenshot: device driver selection screen]

 

Is this a common problem with CentOS? Is it a hardware problem?

 

Any help greatly appreciated :)

  • 5 months later...

Sorry to bump an old post, but I'm having the same issue.

 

I don't know if this was ever resolved by the OP, but I can't seem to figure out why my CentOS install cannot find a network controller.

 

Very frustrating.  Any help would be appreciated.

 

Thanks.

  • 4 months later...

Problem solved.

 

Had to manually pass a NIC to the VM by editing the XML of the CentOS VM.
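
Roughly, the bit that gets added to the XML is a hostdev entry along these lines (illustrative only - the bus/slot/function values must match your NIC's PCI address on your own host, which lspci will show):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- example address only; substitute your NIC's address from lspci -->
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>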

 

It all sounds very confusing; fortunately, someone created a video.

 

 

Once I passed through the NIC, the install was simple.


I am sure that there is a way to emulate a different NIC type (other than 'vfio') that may not need passing through, but for the life of me I cannot find out any documentation on how to do this.  I seem to remember that 'e1000' and 'pcnet' are ones that I have seen used in the past.

 

EDITED:

Just found that you can run:

qemu -net nic,model=? >/dev/null
qemu: Supported NIC models: e1000,e1000-82544gc,e1000-82545em,e1000e,i82550,i82551,i82557a,i82557b,i82557c,i82558a,i82558b,i82559a,i82559b,i82559c,i82559er,i82562,i82801,ne2k_pci,pcnet,rocker,rtl8139,virtio-net-pci,vmxnet3

which gives a wide variety of options for the network card to be emulated. There is a good chance that the VM already has drivers for at least one of these, so you can then avoid passing through the network card. You would have to make the change at the XML level, changing the network card type from 'vfio' to one of these, as the GUI does not support entering them.
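
As an illustration (not something I have tested myself), the interface definition in the VM's XML would end up looking something like:

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='e1000'/>
    </interface>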

Link to comment
  • 7 months later...
On 6/2/2019 at 3:47 AM, itimpi said:

I am sure that there is a way to emulate a different NIC type (other than 'vfio') that may not need passing through... You would have to make the change at the XML level, changing the network card type from 'vfio' to one of these, as the GUI does not support entering them.

I'm running into this problem again - hope you can help. 

 

Passing through the NIC no longer appears to be working after the update to 6.8.1. How can I emulate one of the existing generic NICs, even if it means editing the XML?

  • 2 weeks later...

Hi @eds

 

I have just had a look at this issue. Here is the XML for my test CentOS 8 VM I just fired up (on Unraid 6.8.1):

 

    <interface type='bridge'>
      <mac address='52:54:00:e0:a3:3c'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

 

That is the default, generated by Unraid on my behalf (I selected br0 as the NIC to pass to the VM, as per usual).

 

As itimpi said, there are a variety of NIC models you can use in KVM/QEMU. I have tested the following as working:

- e1000, virtio, vmxnet3
 

You just edit the model type='X' parameter above. In my experience, e1000 has the best compatibility and vmxnet3 has the best performance (only matters at 10G and above, again, in my experience).
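
So, to switch to e1000 for example, that one line simply becomes:

    <model type='e1000'/>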

 

Are you able to try making that change and see how it goes?

 

Edit - note: it's worth mentioning my server is a Dell R730 using 1G Broadcom-based NICs. I am using the tg3 driver (as seen under Settings > Network Settings > Interface rules). It might be worth checking which driver you have. I know a chap with bnx2 drivers and he has trouble with CentOS VMs too... Upgrading the driver variant your NIC uses might be a solution in itself.

 

2 hours ago, jmbrnt said:

You just edit the model type='X' parameter above. In my experience, e1000 has the best compatibility and vmxnet3 has the best performance... It might be worth checking which driver you have.

 

I think folks may not be understanding the true nature of this problem. 

 

The VM is not looking for a driver. The VM is unable to see the NIC.

 

Again, this was solved in the previous version of Unraid. All I had to do was pass through the NIC so that the VM could see it (as seen in the post above).

 

So the previous posts about needing a driver are moot. The CentOS VM is unable to see the NIC. Without seeing the NIC, what good is a driver?


The driver I'm referring to is on the Unraid host system, nothing to do with the VM. My theory, based on comparing a working and a non-working system, is that the difference comes down to the Broadcom drivers in Unraid OS itself.

 

I also suggested some XML edits for you to try; did they have any effect?

On 2/22/2020 at 9:40 AM, jmbrnt said:

You just edit the model type='X' parameter above. In my experience, e1000 has the best compatibility and vmxnet3 has the best performance...

 

I tried editing the model type='virtio' line by changing the value to 'e1000' and to 'vmxnet3', and neither worked.

On 2/22/2020 at 12:23 PM, jmbrnt said:

The driver I'm referring to is on the Unraid host system, nothing to do with the VM.

I am a little lost here. I assume, of course, that the drivers are installed on the Unraid host, since this worked once before after passing through the NIC. Besides, if the drivers were not installed in Unraid, wouldn't the OS have an issue with network connections? How do I check which network drivers are installed in Unraid?

 

8 hours ago, eds said:

I tried editing the model type='virtio' line by changing the value to 'e1000' and to 'vmxnet3', and neither worked.

Thanks for checking that. 

 

You can see which driver your NICs have under Settings > Network Settings > Interface rules; the driver name is in parentheses. I am wondering if there are variants (or older/incompatible versions) of the driver in use on Unraid that don't play nice with a CentOS VM. Narrowing down the problem, in the absence of any advice from the Unraid developers, might take a few shots.
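
If you'd rather check from the Unraid console, ethtool reports the same information (eth0 here is just an example interface name; the driver: line of the output is the one you want):

ethtool -i eth0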

 

For the record, I don't think it's a problem with Unraid 6.8.2 or CentOS, as it works flawlessly for me without needing to pass through the PCI device - simply setting the VM to use br0 works. It would be great to solve this problem, as it must be affecting a few people.

 

 

 

 

4 hours ago, jmbrnt said:

You can see which driver your NICs have under Settings > Network Settings > Interface rules; the driver name is in parentheses.

 

If that's the case, I'm showing b2net and igb.

 

4 hours ago, jmbrnt said:

It would be great to solve this problem, as it must be affecting a few people.

 

Agreed.

  • 1 year later...

Thanks for updating the thread with new info.

I actually solved my problem (fairly recently).

 

I did what it looks like I said I did a year ago, and it worked. It may have been an update to CentOS that did it; I don't know.

But after editing the model type in the XML to e1000-82545em, my NIC showed up and I was able to get an IP address and log in via SSH.
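
For anyone landing on this later, that is just the one model line inside the interface block of the VM's XML (the MAC address and bridge will be whatever your own VM uses):

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='e1000-82545em'/>
    </interface>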

