What is the difference between virtio and virtio-net?



I currently get about 20 Gbps between the VM and Unraid using virtio (tested with iperf3), but at the same time I noticed a lot of entries like these in my syslog:

Jan 12 22:20:12 Unraid kernel: tun: unexpected GSO type: 0x0, gso_size 1192, hdr_len 1258
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: unexpected GSO type: 0x0, gso_size 1192, hdr_len 1258
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Jan 12 22:20:12 Unraid kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
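
A quick way to see how often this warning fires is to count it. A minimal sketch; `SYSLOG` below is a stand-in excerpt, and on the real host you would grep the actual log file (the path `/var/log/syslog` is an assumption, not confirmed by this thread):

```shell
# Stand-in for the host syslog; on the host, grep the real log file instead.
SYSLOG='Jan 12 22:20:12 Unraid kernel: tun: unexpected GSO type: 0x0, gso_size 1192, hdr_len 1258
Jan 12 22:20:12 Unraid kernel: tun: unexpected GSO type: 0x0, gso_size 1192, hdr_len 1258'

# grep -c prints the number of matching lines
printf '%s\n' "$SYSLOG" | grep -c 'unexpected GSO type'   # prints 2
```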

 

After looking through the forums I found that I need to change virtio to virtio-net to fix this problem. However, after testing with virtio-net, the transfer speed between the VM and Unraid dropped to 1.2 Gbps, accompanied by high CPU usage.

Is this a bug in virtio-net?


Both settings use the same "virtio-net-pci" device, as "virtio-net" is only an alias:

qemu -device help
...
name "virtio-net-pci", bus PCI, alias "virtio-net"

 

The only difference is that the slower "virtio-net" setting drops the "vhost=on" flag (open the VM logs to see this setting):

 

virtio-net

-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \

 

virtio

-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \
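
A quick way to check which mode a running VM actually got is to grep its libvirt log for the vhost flag. A minimal sketch; `LOG` here is a stand-in holding the `-netdev` line quoted above, and on a real system you would grep the log file itself (typically `/var/log/libvirt/qemu/<vm-name>.log`; that path is the libvirt default and an assumption here):

```shell
# Stand-in for the VM's libvirt QEMU log; the line is copied from this thread.
LOG='-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 \'

if printf '%s\n' "$LOG" | grep -q 'vhost=on'; then
  echo "vhost enabled -> the fast 'virtio' setting"
else
  echo "vhost disabled -> the slow 'virtio-net' setting"
fi
```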

 

And it's entirely logical that this causes poor performance for the "virtio-net" setting, as QEMU then has to emulate an additional "virtio-net device" itself:

https://www.usenix.org/sites/default/files/conference/protected-files/srecon20americas_slides_krosnov.pdf

[image: slide from the deck above, showing packets passing through QEMU's emulated virtio-net device]

 

Instead of sharing the memory with the host:

[image: slide showing vhost sharing memory directly with the host]

 

A good write-up can be found here:

https://insujang.github.io/2021-03-15/virtio-and-vhost-architecture-part-2/

 

And now we understand the help text as well:

Quote

Default and recommended is 'virtio-net', which gives improved stability. To improve performance 'virtio' can be selected, but this may lead to stability issues.

 

 

Not sure about the stability thing, but if the guest supports it, I would use "virtio", which enables vhost. As I think the names of the adapters are confusing, I opened a bug report:

 

On 8/2/2021 at 8:06 PM, mgutt said:

This is really confusing. If you start a VM with the "virtio" network adapter, it starts QEMU with the "virtio-net-pci" NIC:


-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \

 

By using "virtio-net" it starts QEMU with the "virtio-net" NIC:


-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:99:b8:93,bus=pci.0,addr=0x3 \

 

But "virtio-net" is not part of the supported NICs?!


qemu -nic model=help
Supported NIC models:
e1000
e1000-82544gc
e1000-82545em
e1000e
i82550
i82551
i82557a
i82557b
i82557c
i82558a
i82558b
i82559a
i82559b
i82559c
i82559er
i82562
i82801
ne2k_pci
pcnet
rtl8139
tulip
virtio-net-pci
virtio-net-pci-non-transitional
virtio-net-pci-transitional
vmxnet3

 

And now the super confusing part:


qemu -device help
...
Network devices:
name "e1000", bus PCI, alias "e1000-82540em", desc "Intel Gigabit Ethernet"
name "e1000-82544gc", bus PCI, desc "Intel Gigabit Ethernet"
name "e1000-82545em", bus PCI, desc "Intel Gigabit Ethernet"
name "e1000e", bus PCI, desc "Intel 82574L GbE Controller"
name "i82550", bus PCI, desc "Intel i82550 Ethernet"
name "i82551", bus PCI, desc "Intel i82551 Ethernet"
name "i82557a", bus PCI, desc "Intel i82557A Ethernet"
name "i82557b", bus PCI, desc "Intel i82557B Ethernet"
name "i82557c", bus PCI, desc "Intel i82557C Ethernet"
name "i82558a", bus PCI, desc "Intel i82558A Ethernet"
name "i82558b", bus PCI, desc "Intel i82558B Ethernet"
name "i82559a", bus PCI, desc "Intel i82559A Ethernet"
name "i82559b", bus PCI, desc "Intel i82559B Ethernet"
name "i82559c", bus PCI, desc "Intel i82559C Ethernet"
name "i82559er", bus PCI, desc "Intel i82559ER Ethernet"
name "i82562", bus PCI, desc "Intel i82562 Ethernet"
name "i82801", bus PCI, desc "Intel i82801 Ethernet"
name "ne2k_isa", bus ISA
name "ne2k_pci", bus PCI
name "pcnet", bus PCI
name "rocker", bus PCI, desc "Rocker Switch"
name "rtl8139", bus PCI
name "tulip", bus PCI
name "usb-net", bus usb-bus
name "virtio-net-device", bus virtio-bus
name "virtio-net-pci", bus PCI, alias "virtio-net"
name "virtio-net-pci-non-transitional", bus PCI
name "virtio-net-pci-transitional", bus PCI
name "vmxnet3", bus PCI, desc "VMWare Paravirtualized Ethernet v3"

 

Let me highlight it:


name "virtio-net-pci", bus PCI, alias "virtio-net"

 

Only an alias?! Doesn't make sense at all.

 

EDIT: OK, I finally understood the magic behind "virtio-net". It really is only an alias of "virtio-net-pci"! No joke.

 


Great research, but I'm also super confused about why virtio-net acts like a 1G network while virtio acts like a 10G network. Same feeling as in the first post.

Why is there such a big difference in speed if it is just an alias? Can we make it work like a 10G network?

8 hours ago, jinlife said:

Why is there such a big difference in speed if it is just an alias?

You should read my complete post. There is a difference: the vhost flag. If it's disabled, QEMU needs to emulate a virtio-net device, which produces huge CPU load (and limits the bandwidth).

 

Or in other words: there is really no reason to use the "virtio-net" adapter (with vhost disabled). Regarding the Unraid help text, it's only the default because it's more "stable", without any further explanation of which OS could be "unstable". Maybe someone can find the reason why it became the default.

21 minutes ago, mgutt said:

There is really no reason to use the "virtio-net" adapter

virtio-net was introduced as a solution for people using both VMs and Docker custom networks on the same interface, e.g. br0.

The virtio driver would cause system crashes when Docker containers were active at the same time. Hence the help text talking about stability when using virtio-net.

 

Stability was chosen over performance as the default.

 

22 hours ago, mgutt said:

why isn't "virtio" greyed out if a docker uses the br0 network?

 

First of all, thank you very much for this information. Without reading the QEMU code I had already noticed the performance issue and changed from virtio-net to virtio, but knowing the reason is always good.

I think it's not so simple, because what if one enables a Docker container after the VM?

Instead, I would explain that in the "info/help" box with some more detail, instead of just "stability".

5 minutes ago, ghost82 said:

I think it's not so simple, because what if one enables a Docker container after the VM?

The other way around would be possible, too (grey out br0 for Docker containers if it's already used by a VM).

 

Or only show a warning like I showed in a screenshot of this idea:

https://forums.unraid.net/topic/102826-add-warning-to-use-cache-share-settings-and-disallow-change-for-appdata-share/

 

5 minutes ago, ghost82 said:

I would explain that in the "info/help" box

I think it's better if the OS is "self-explanatory".


...interesting finds.

Two or three more scenarios that will probably add to the headache:

 

- What about br0.xx when using VLANs? Which combinations of VM and Docker attachments will be "safe", or will they all be unsafe, too?

- What if a VM tries to switch the NIC into promiscuous mode, like when starting VPN gateway software inside the VM? (Maybe this kind of behavior is what kills the NIC in the first place?)

- What if you enable a second bridge on a second NIC in Unraid? Is only this single bridge affected, or do all bridges inherit this "bug"?

 

 


I came here because I'm getting a lot of performance and stability issues using virtio-net. In particular, even at the console it freezes all the time. I don't know that this is the network driver, but I suspect it is; I've known about it for a long time but have been too lazy to have the discussion. Now I'm trying to set up a VM I need for an Agile board and the WebSockets aren't working, I assume for the same reason.

 

I've tried re-installing the VM, and I've tried CentOS and Ubuntu. I see in the latest beta there are now four options: virtio, virtio-net, vmxnet3 and e1000.

virtio used to work for me with all the Docker containers I have, but neither option has worked well for a very long time. And virtio-net is actually not just a little slow; it's insanely slow, to the point that I can't really use it for anything other than a basic web page.

 

By contrast, running this in TrueNAS I haven't had this problem. They offer virtio and e1000. I'm running virtio with no problems, but I'm not running a lot of Docker containers there; those are now going to be migrated to a VM, as I've come to the conclusion that it's better for me. It avoids all these dramas and makes them a lot more portable.

 

So I'm going to try e1000 and hope that it is better, though it's a very old driver, I think.

7 hours ago, Marshalleq said:

I see in the latest beta there are now four options - virtio, virtio-net, vmxnet3 and e1000

Just a clarification: these are the options that the GUI offers, which as far as I know are the most common.

But since it's a traditional QEMU/libvirt configuration, you have many more options, which you can configure manually in the XML.

You can get a list of available network models with this terminal command:

qemu -nic model=help
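
A sketch of what the manual change looks like in the VM's XML (the bridge name and MAC address are illustrative; Unraid's GUI normally writes this block, and any model from the list above can be substituted):

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:99:b8:93'/>
  <!-- on Unraid, 'virtio' enables vhost; 'virtio-net' does not -->
  <model type='virtio'/>
</interface>
```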

 


The default 'virtio-net' option provides awful performance.

'virtio-net' also causes much higher CPU usage.

VM -> Host

virtio-net:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.25 GBytes  1.93 Gbits/sec    0   sender
[  5]   0.00-10.00  sec  2.25 GBytes  1.93 Gbits/sec        receiver

virtio:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  30.7 GBytes  26.4 Gbits/sec    0   sender
[  5]   0.00-10.00  sec  30.7 GBytes  26.4 Gbits/sec        receiver
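
To put the two measured bitrates in perspective, the speedup is a one-liner of pure arithmetic on the numbers above:

```shell
# Speedup of virtio (vhost=on) over virtio-net, from the iperf3 bitrates above
awk 'BEGIN { printf "%.1fx\n", 26.4 / 1.93 }'   # prints 13.7x
```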

 

Proxmox seems to just use 'virtio' as the default without causing any stability problems.

