Network ONLY between VMs and Unraid is not working



I've got two VMs that can't reach Unraid itself, and Unraid can't reach them either. Apart from that, both their networking and Unraid's works completely fine.

 

To give you a practical overview:

  • VM cannot ping Unraid
  • VM CAN ping the internet
  • VM CAN ping other devices on the network
  • VM CAN ping the router
  • Unraid cannot ping VM
  • Unraid CAN ping the internet
  • Unraid CAN ping other devices on the network
  • Unraid CAN ping the router
  • Other devices on the network CAN ping VM
  • Other devices on the network CAN ping Unraid

 

And when I say "ping", I also mean establishing any sort of connection. All of these tests were done by IP address, just to rule out name resolution issues.
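
To be concrete, the tests were along these lines (the 192.168.1.x addresses below are just stand-ins for my actual ones):

# from a VM's shell
ping -c 3 192.168.1.10    # Unraid          -> 100% packet loss
ping -c 3 192.168.1.1     # router          -> replies fine
ping -c 3 1.1.1.1         # internet        -> replies fine
ping -c 3 192.168.1.25    # another client  -> replies fine

# from the Unraid console
ping -c 3 192.168.1.50    # the VM          -> 100% packet loss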

 

So it seems there's something going on ONLY between Unraid and the VMs. It applies to all VMs, both Windows and Linux based ones. I can't for the life of me find anything in the way of VM network settings. Usually hypervisors have some kind of VM network configuration, don't they?

 

The "Network source" is set to "vhost0" and "Network model" is set to either "virtio" or "virtio-net". I'm not sure what these mean exactly, but they've always worked fine, and so they ought to work fine now as well. I shouldn't have to emphasise that network does work fine, so I'm guessing this bit of config is correct. It's just very specifically the communication between VMs and Unraid.

 

So how do I fix this? Has Unraid recently added some kind of restriction or security measure to prevent VMs from reaching their hypervisor? I'm sorry that I can't say exactly when this started. It definitely hasn't been like this for months, I reckon. The only thing I know for sure is when I discovered the problem, which is just now.

 

As for the network topology: it literally couldn't be simpler. No VLANs, no advanced networking stuff. Just a router and some clients.

Link to comment

I suspect the fact that vhost0 doesn't get an IP address (for a reason I haven't figured out yet) might have something to do with it. I think that's the interface VMs get connected to when they try to reach their Unraid host:

 

root@unraid:~# ifconfig vhost0
vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 02:60:71:10:c2:74  txqueuelen 500  (Ethernet)
        RX packets 3361579  bytes 208546658 (198.8 MiB)
        RX errors 0  dropped 2700291  overruns 0  frame 0
        TX packets 37913  bytes 1592346 (1.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
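
In case someone wants to compare, this is how I checked whether vhost0 ever picks up an address and which interface Unraid actually uses to reach a VM (the VM IP below is a placeholder for mine):

root@unraid:~# ip -4 addr show vhost0      # no inet line = no IPv4 address assigned
root@unraid:~# ip route get 192.168.1.50   # shows which interface/route Unraid picks for the VM's IP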

 

Link to comment

Did rebooting Unraid fix it?

I've experienced this before, and it's a temporary glitch in how Unraid's bridge networking works with macvtap.

I can't reliably reproduce it, but I sometimes run into it if I start and shut down a Windows 11 VM multiple times. It can mess with Unraid's internal networking... Eventually the VM will refuse network traffic to Unraid. In my case, a reboot restores it and resets whatever state vfio/Unraid had in the network stack. This may be a remnant issue from when the devs moved off macvlan to ipvlan...

I did find that, when it was in this state, those network commands showed a different status...

netstat -a (or was it -i?)
ip link
ip a
ifconfig

etc...

Compare what they show before and after the reboot.
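
If it helps, something like this captures the state to the flash drive so it can be diffed after the reboot (file names are just an example):

root@unraid:~# ip link > /boot/net-before.txt
root@unraid:~# ip addr >> /boot/net-before.txt
root@unraid:~# netstat -i >> /boot/net-before.txt
(reboot, repeat into /boot/net-after.txt, then:)
root@unraid:~# diff /boot/net-before.txt /boot/net-after.txt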

Link to comment
