
How to get more than 100MB/s from VM > unraid share?



I have unraid running well now and just set up a Windows 10 VM that I will use to manage backups/synchronization of my data, as the software I use (ViceVersa) does not work on Linux and I really like this software.

 

Sadly, though, the VM is limited to gigabit speeds and, according to netdata, is actually using the network to access the shares. I get much faster speeds using my separate PC that is connected with a 10Gb link (although still slower than bare metal due to the network overhead), but I can't tie up that PC while the server runs backups.

 

Backups will take a very long time over the network, as there are around ~1 million small files to scan and a normal backup can have upwards of ~100GB of changed data to update.

 

How can I set up the VM to interact directly with the unraid machine and not go out onto the network?

 

I have tried:

 

Installing the proper drivers from the virtio ISO.

Mapping the shares using both the server name and the IP address (example below).

Using different bridges, although only the primary bridge seems to work; virbr gives the error "Cannot get interface MTU on 'virbr0': No such device".
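
For reference, here is roughly how I mapped the shares from the Windows command prompt (the server name, IP, and share name below are just placeholders for my setup):

net use Z: \\tower\backups /persistent:yes
net use Y: \\192.168.1.10\backups /persistent:yes

Both mappings connect fine; the problem is the speed, not the connection.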


Something is not correct there. If the VM is using a virtual NIC, by default it's a 10GbE virtual connection and the traffic doesn't go through the physical network. E.g., this is iperf running on one of my Windows VMs with a virtual NIC:


 

C:\iperf>iperf3 -c 192.168.1.10
Connecting to host 192.168.1.10, port 5201
[  4] local 192.168.1.6 port 64736 connected to 192.168.1.10 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.03 GBytes  8.89 Gbits/sec
[  4]   1.00-2.00   sec  1.02 GBytes  8.80 Gbits/sec
[  4]   2.00-3.00   sec  1021 MBytes  8.56 Gbits/sec
[  4]   3.00-4.00   sec  1.08 GBytes  9.24 Gbits/sec
[  4]   4.00-5.00   sec  1.13 GBytes  9.73 Gbits/sec
[  4]   5.00-6.00   sec  1.01 GBytes  8.67 Gbits/sec
[  4]   6.00-7.00   sec  1001 MBytes  8.40 Gbits/sec
[  4]   7.00-8.00   sec   949 MBytes  7.95 Gbits/sec
[  4]   8.00-9.00   sec  1.06 GBytes  9.10 Gbits/sec
[  4]   9.00-10.00  sec  1.12 GBytes  9.59 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  10.4 GBytes  8.89 Gbits/sec                  sender
[  4]   0.00-10.00  sec  10.4 GBytes  8.89 Gbits/sec                  receiver

iperf Done.
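
You can also confirm the VM is actually using a virtio NIC from the unRAID console, e.g. (the VM name here is just an example, use whatever yours is called):

virsh dumpxml "Windows 10" | grep -A 5 "<interface"

The interface should show <model type='virtio'/>.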

 

 


Thanks for the info.

 

After a few more hours of troubleshooting I narrowed it down to an issue with the libvirt driver. I ended up manually stopping it and then rebooting, and it seems to have come back online; everything is working as expected now.

 

I have no idea how the issue started in the first place, though; this is a fresh install of unraid and this is the first VM I have made.

 

Hopefully it keeps working; if not, I guess I will reinstall before I move my data to an array.


Well, the issue was back today, so it looks like I am going to wipe it and start over; hopefully that will fix it.

 

Good thing I have not committed to unraid yet. I was also messing around with cache setups and didn't realize that if you try to remove more than one device from the pool at once you will lose the entire pool, even though there is plenty of space on the remaining drives to re-balance. Just dead, with no option to restore the old pool, even though I followed the FAQ on the subject and simply unassigned the devices and then tried to start the array.

 

Luckily unraid is super simple to set up; it just sucks to have to spend those hours doing it again.

11 minutes ago, TexasUnraid said:

I followed the FAQ on the subject

From the FAQ:


-You can only remove devices from redundant pools (raid1, raid10, etc), and make sure to only remove one device at a time from a redundant pool, i.e., you can't remove 2 devices at the same time from a 4 disk raid1 pool, you can remove them one at a time after waiting for each balance to finish (as long as there's enough free space on the remaining devices).

 

14 minutes ago, johnnie.black said:

From the FAQ:

 

lol, fair point, my bad. I read that a few days ago but forgot it when I went to actually put it into practice. Since you can add drives at will, I guess I assumed you could remove them the same way.

 

So you can't remove a drive from a raid 0 pool at all? Well that makes things more complicated.

 

I am guessing I would need to convert to a raid 1 or raid 5 setup temporarily (depending on available space), remove the drive, re-balance, and convert back to raid 0?
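
If I understand the btrfs docs right, the underlying commands would be something like this (the mount point and device are just placeholders, and this is only a sketch of the idea):

# convert data and metadata to a redundant profile (needs enough free space)
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# remove the drive; the data is migrated onto the remaining devices
btrfs device remove /dev/sdX /mnt/cache

# convert the data back to raid0 once the removal finishes
btrfs balance start -dconvert=raid0 /mnt/cache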

 

I don't care about parity for the cache pool, since I will have backups of anything important and everything else can be replaced, and I want the speed boost it provides.

 

Although in testing I am not seeing the expected raid 0 speed: with 8x 128GB laptop SSDs from my days in IT, I was only seeing about ~1.5GB/s reads and ~800MB/s writes?

 

I would have expected almost double those numbers? Is there a way to change the stripe size/width?

7 minutes ago, TexasUnraid said:

So you can't remove a drive from a raid 0 pool at all? Well that makes things more complicated.

That might have changed since I wrote that FAQ entry, need to test it, but you can always convert to a redundant pool before removing.

 

9 minutes ago, TexasUnraid said:

Is there a way to change the stripe size/width?

Not AFAIK.

 


Just tested it, and it looks like you can remove a drive from a raid 0 pool; you just have to remove one drive at a time. Cool. Might be worth adding a warning to the GUI that removing more than one drive will kill the pool.
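
Between removals you can watch the data migrate off the device being removed with this (mount point assumed):

btrfs filesystem show /mnt/cache

The "used" figure for the drive being removed should drain to zero before it disappears from the list.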

 

This is why I always like to test out the limits of things before I commit to them; it allows me to work out the kinks with nothing on the line except some time.


Been doing some more speed tests with different raid 0 setups. With 2x drives I get more or less the expected improvement, around 800MB/s writes and 1GB/s reads. From there I only get minor gains adding more drives in my testing. The controller is not the limit; it has been tested to at least 4.5GB/s, and in Windows I was able to test all the drives in parallel at full speed.

 

Guessing it is using a stripe width of 2. Oh well, more speed would be neat but not really necessary; more a matter of me being a bit of a perfectionist and wanting everything to work as well as it can.
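
For what it's worth, this is how I checked which profile the pool is actually using (mount point assumed):

btrfs filesystem df /mnt/cache

The Data line shows the profile in use, e.g. "Data, RAID0".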

 

Thanks for the help. Hopefully a reinstall will take care of the VM NIC issue.


The plot thickens: it seems that btrfs writes are heavily single-threaded. I noticed that it maxes out a single core when doing file copies.

 

So it looks like I might be CPU-bottlenecked on write speeds. Makes sense considering the numbers I was seeing.

 

Reads do not seem to be pegging a single core, though, yet they still top out around 1.5GB/s. Guessing that is due to the stripe width.


OK, got everything reinstalled and tried to create a Windows 10 VM again, but I got the same error as before:

 

VM creation error

Cannot get interface MTU on 'virbr0': No such device

 

After some googling I found this bug report, and it seems to suggest that this can happen if the network drops out?

 

https://bugzilla.redhat.com/show_bug.cgi?id=1053532

 

My network is solid though; I never have any dropout issues when playing games, etc. No idea why this would be an issue.

 

This thread has some more details:

 

https://serverfault.com/questions/534484/libvirt-network-error-no-default-network-device-found

 

It was suggested to check:

 

virsh net-list

 

Sure enough it was empty. So I ran this command:

 

virsh net-start default

 

That brought the network back up (it now shows in virsh net-list), and the VM could be created and booted.
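
While reading the virsh docs I also noticed the default network can be flagged to start automatically with libvirt, which might avoid this entirely (I have not tested it yet):

virsh net-autostart default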

 

So for whatever reason the virtual network seems to drop out; it could just be teething issues from getting everything installed and set up.

 

If not, it looks like I could run the virsh net-start default command before starting the VM.

 

Is there a way to automate a script to run before starting a VM?

 

Right now the best option I can figure is either manually typing in the command before starting the VM (this will annoy me in a big way, lol), or using the info from this video to make a script that starts the virtual network and then starts the VM: https://www.youtube.com/watch?v=QoVJ0460cro. Although I would still have to go to User Scripts first, unless someone knows how to make a direct button/shortcut to run a script?
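
Something like this as a user script is what I have in mind (the VM name is just an example; use whatever it is called in the VMs tab):

#!/bin/bash
# start the default libvirt network if it isn't already active
virsh net-list | grep -q "default" || virsh net-start default

# then start the VM by its unRAID name
virsh start "Windows 10"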

 

I am now getting much faster speeds, although still not bare metal. Bare metal I am getting ~800MB/s writes and ~1.4GB/s reads; using mapped drives in Windows I am only getting ~600MB/s reads and ~700MB/s writes. Plenty fast enough for backups, but I'm curious if there is a reason it is not reaching full speed? Strange that the read and write speeds are reversed.
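
One thing I might still try for the SMB speeds is enabling multichannel via the Samba extra configuration (it is still marked experimental in Samba, so just an idea):

server multi channel support = yes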


A small side bug: when using the QXL video driver with Firefox, the mouse cursor turns into a white square. It only happened after installing the virtio drivers. If I switch to the cirrus driver it works fine, but I am stuck with a tiny 800x600 resolution.

 

In the same vein, Firefox does not work with the stock terminal/console windows of unraid; it just gives a blank page with a blinking cursor and you can't read any of the text. The Command Line plugin does work with Firefox though.

