Network speed between UNRAID shares and VMs



Hello

 

I have a bunch of media shares on my UNRAID server and tried copying a 10GB+ file to a W11 VM on the same UNRAID server.

The virtio NIC on the W11 VM is running at 10gbps, but I cannot copy from the UNRAID shares any faster than about 200MB/s (approx 2gbps).

My physical NIC on the UNRAID server is a 2.5gbps onboard adaptor - is this the limiting factor? Is it possible to create a separate virtual 10gb network so that VMs and shares on the same UNRAID server can copy at the full 10gbps speed available?

 

Just to add, all shares and VMs are on separate NVMe/SSD drives.


I couldn't call it a "known issue" per se, just that achieving full 10Gb throughput almost always takes some tuning.

 

10Gb is a whole other can of worms, and achieving that level of throughput requires both careful planning and a decent amount of optimization (on both the host and hypervisor side).

 

For one, you're far more likely to need to worry about your peak CPU frequency, context switching, and high interrupt counts. Beyond that, you're much more likely to encounter IO bottlenecks in "weird" (or at least previously unexpected) places. You'll have to start by determining where the bottleneck is. For instance, are you simply mounting an SMB share? Have you tested NFS to see if you get similar IO behavior? What about share passthrough in the VM config (this usually sucks)? Tried the virtio network driver instead of virtio-net? And so on and so on. Just changing things without at least having an inkling of where you're bottlenecking is a recipe for pain. 
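
A quick way to narrow that down before changing anything is a raw TCP throughput test between the VM and the host, so you know whether the virtual network path itself is the limit or whether the slowdown is in SMB or the disks. A minimal sketch, assuming iperf3 is available on both ends (the Windows build inside the VM, and a copy on the UNRAID side), with a placeholder server IP:

# On the UNRAID host: start iperf3 in server mode
iperf3 -s

# Inside the W11 VM: run 4 parallel streams for 30 seconds against the host
# (192.168.1.10 is a placeholder - substitute your server's IP)
iperf3 -c 192.168.1.10 -P 4 -t 30

If that reports well above ~2Gbit/s, the virtio path isn't the bottleneck and the problem is higher up (SMB, disk, CPU); if it tops out around the same figure, it's the network path itself.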

 

I'd start by installing something like the netdata docker container; start it up, initiate your 10GB copy, and look for anything that appears to spike in the reported statistics. Interrupt counts climbing? Single core pegged at 100% utilization? If so, what else does that core have going on? What's the reported disk utilization at that time?
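
If you'd rather run it by hand than through the Community Applications plugin, a minimal sketch of the docker run (the official docs add a few more read-only host mounts; 19999 is netdata's default dashboard port):

# Minimal netdata container - dashboard exposed on port 19999
docker run -d --name=netdata \
  -p 19999:19999 \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --cap-add SYS_PTRACE \
  netdata/netdata

Then browse to http://SERVER-IP:19999 (your UNRAID box's address) while the copy is running and watch the per-core CPU, interrupt, and disk charts.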

 

Once you figure out what the bottleneck is, then you can start doing research on how to correct it; the solution will be unique to your configuration and the cause of the bottleneck, so just be prepared to do a little googling, and some trial and error along the way.

 

Happy hunting!


Some good tips, thanks. Switched from virtio-net to virtio and boom! Avg 700MB/s - happy with that.
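
For anyone finding this thread later: that switch is the "Network Model" dropdown in the UNRAID VM template, which, as far as I can tell, just changes the model line in the VM's XML. A rough sketch of the relevant stanza (br0 is UNRAID's default VM bridge and may differ on your box; in either case the Windows guest uses the virtio network driver from the virtio-win ISO):

<interface type='bridge'>
  <source bridge='br0'/>       <!-- UNRAID's default VM bridge; may be different on your setup -->
  <model type='virtio'/>       <!-- previously type='virtio-net' -->
</interface>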


Glad to hear it, and happy to help!

 

As luck would have it, I'm actually working on some performance tuning for 40Gb this week, and the number of things that play into it is wild to think about. Things most would never think of, like chip architecture, BIOS settings (NUMA, memory interleaving, etc.), driver-specific flags, and so on - super interesting stuff!
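
For a flavour of what that looks like in practice, one of the usual steps on a multi-socket (or multi-die) box is keeping a VM's vCPUs and memory on a single NUMA node so the NIC's interrupts, the vCPUs, and the buffers all stay local. A hedged sketch in libvirt XML - the core and node numbers here are made up for illustration:

<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='16'/>   <!-- pin each vCPU to a host core on the same NUMA node -->
  <vcpupin vcpu='1' cpuset='17'/>
  <!-- ...remaining vCPUs pinned the same way... -->
</cputune>
<numatune>
  <memory mode='strict' nodeset='1'/>   <!-- allocate guest RAM from that node -->
</numatune>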
