Glimmerman911 Posted April 25, 2015

When I used VirtualBox, it could be configured so the guest accessed the host's unRAID shares locally, instead of mounting them as network drives and incurring the network overhead. How do I do this with KVM?
itimpi Posted April 25, 2015

Quote (Glimmerman911): "When I used VirtualBox, it could be configured so the guest accessed the host's unRAID shares locally, instead of mounting them as network drives and incurring the network overhead. How do I do this with KVM?"

KVM has the concept of the 9p drivers for sharing host folders directly with a guest. However, users are finding that it does not seem to provide better performance than going via the network, and it can also cause problems with permissions on folders/files.
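For anyone who wants to try 9p anyway, it is configured as a `<filesystem>` device in the VM's libvirt domain XML and then mounted inside the guest. A minimal sketch — the share path and mount tag below are placeholder names, not anything from this thread:

```xml
<!-- Goes inside the <devices> section of the VM's libvirt domain XML. -->
<!-- '/mnt/user/myshare' and 'hostshare' are example names only. -->
<filesystem type='mount' accessmode='passthrough'>
  <source dir='/mnt/user/myshare'/>
  <target dir='hostshare'/>
</filesystem>
```

Inside a Linux guest the tag is then mounted with something like `mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare`. The `accessmode` choice (`passthrough`, `mapped`, or `squash`) is what drives the permission behaviour itimpi mentions.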
jonp Posted April 25, 2015

Quote (Glimmerman911): "When I used VirtualBox, it could be configured so the guest accessed the host's unRAID shares locally, instead of mounting them as network drives and incurring the network overhead. How do I do this with KVM?"

What network overhead?
Glimmerman911 Posted April 25, 2015 (Author)

By mounting a network drive, I assumed the VM would reach outside of the server through the network, then back into my unRAID server to access the network-mounted share. I know on VirtualBox it was much faster when VirtualBox mounted the drive instead of Windows mounting a network share. And I noticed the speed difference; it was quite substantial.
jonp Posted April 25, 2015

Quote (Glimmerman911): "By mounting a network drive, I assumed the VM would reach outside of the server through the network, then back into my unRAID server to access the network-mounted share."

No, this isn't the case. The network bridge allows communication to occur internally to the system, so you won't be tethered to the limit of traditional 1Gbps Ethernet. Try it out. You'll be pleasantly surprised.
mr-hexen Posted April 28, 2015

Quote (jonp): "The network bridge allows communication to occur internally to the system, so you won't be tethered to the limit of traditional 1Gbps Ethernet. Try it out. You'll be pleasantly surprised."

Completely agree. The VM actually reports the link as a 10G link for me (Win7 Pro). File copies from an unRAID share are almost instantaneous.

Mike
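For reference, the setup jonp and mr-hexen describe is just a bridged interface with a paravirtual NIC model in the VM's libvirt XML — roughly like this, assuming the host bridge is named br0 (the usual unRAID default):

```xml
<!-- Example only: 'br0' is the typical unRAID bridge name; the MAC is made up. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:aa:bb:cc'/>
  <model type='virtio'/>
</interface>
```

With virtio (or an emulated vmxnet3) the link speed the guest reports is largely cosmetic: traffic between guest and host never leaves the machine, so throughput is bounded by CPU and memory, not by the physical NIC.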
glennv Posted May 26, 2018 (edited)

I can't seem to get this to work as advertised. In my OS X Sierra VM the speed seems limited to the actual network interface(s), not reported as a 10G link as mentioned. I'm using the bridge device in the VM (with vmxnet3). I have 2x1Gb bonded on unRAID, available as br0, and used that in the VM as the network adapter. Speeds are consistently about 200MB/s when I copy data to a share on the dual-Samsung-SSD btrfs cache pool. I'm copying a 100GB file to bypass the effect of initial caching to memory, but even that goes at a steady 200MB/s. What could possibly be causing this? Or does this only work on Windows/Linux VMs? Internal file copies on the unRAID server run at proper SSD speeds minus btrfs overhead.

edit: correct that. I see the same slow speed when writing at the unRAID level, so it must be something weird with my btrfs RAID 1. Will keep experimenting...

edit2: switching the array write mode to direct IO solved my local copy speeds (before, writing to /mnt/user was about 50% of the speed of writing to /mnt/cache; now I get similar speeds). Writing from the VM to this share still suggests it's using the network and not some local bridging mechanism.

edit3: seems to be a Mac thing, as from a Windows VM I get 10Gb speeds (initially, due to memory caching) saving to this same share. The Mac VM is always stuck at about 200MB/s on the same share. I set the network speed manually to 10G instead of auto (which reports 1G), but no difference so far. Any tips welcome...

edit4: I think my troubleshooting narrows down to the VMXNET3 driver/adapter showing only 1Gb, which according to everything I've read should show 10Gb. I tested three OS X VMs (El Capitan, Sierra, and High Sierra) with the vmxnet3 driver, and all have the same problem, so it's not specific to one OS X release at least. Googling shows some mention of this without much detail. Maybe Spaceinvader One has an idea.

Edited May 26, 2018 by glennv
glennv Posted May 28, 2018

Well, I got my 10Gb card in, and even with the 10Gb card bridged to the VM, the vmxnet3 adapter in OS X only works at 1Gb speeds. That is crap. If I physically pass the port through to the VM and connect the two ports of the card together — one for unRAID, one for the VM, talking to each other — I get proper 10Gb speeds on the network, with 400MB/s reads and about 250MB/s writes from my btrfs 2xSSD cache. But I need it as a bridge so I can access unRAID local shares the normal way and use the other port in the VM for a VM-to-workstation connection. So no matter whether the bridge is backed by a real 1Gb or 10Gb card, it's stuck at 1Gb speeds. Is there ANYONE who has got a virtual bridged 10Gb link up and running in OS X who could help me here?
1812 Posted May 29, 2018

11 hours ago, glennv said: "So no matter whether the bridge is backed by a real 1Gb or 10Gb card, it's stuck at 1Gb speeds. Is there ANYONE who has got a virtual bridged 10Gb link up and running in OS X who could help me here?"

Apple's implementation of vmxnet3 is less than desirable. I was only able to get it up to about 150-200MB/s about a year ago. AFAIK there is no working OS X virtual NIC capable of 10GbE. My tests are here (from before I basically stopped using the virtual NIC): https://lime-technology.com/forums/topic/54641-increase-os-x-networking-performance-by-80-or-your-money-back/
glennv Posted May 29, 2018 (edited)

Thanks for confirming. Yeah, pretty crap. I did find a virtio-net driver on GitHub, but it was even slower. So I guess I need either a 10Gb switch (ouch) or, cheaper, just another 10Gb card. It is what it is...

Edited May 29, 2018 by glennv
1812 Posted May 29, 2018

3 hours ago, glennv said: "So I guess I need either a 10Gb switch (ouch) or, cheaper, just another 10Gb card. It is what it is..."

sadly....
gcoppin Posted October 22, 2019 (edited)

Hi, I've got exactly the same issue with my Linux VM (Ubuntu 18.04). Is there any news regarding this? Can a moderator confirm whether Linux has the same issue as OS X or not? @jonp, any hint?

Adding some outputs.

lshw -C network:

    *-network
       description: Ethernet interface
       product: VMXNET3 Ethernet Controller
       vendor: VMware
       physical id: 0
       bus info: pci@0000:05:00.0
       logical name: enp5s0
       version: 01
       serial: 52:54:00:7b:ec:75
       size: 1Gbit/s
       capacity: 10Gbit/s
       width: 32 bits
       clock: 33MHz
       capabilities: bus_master cap_list rom ethernet physical tp 1000bt-fd 10000bt-fd
       configuration: autonegotiation=off broadcast=yes driver=vmxnet3 driverversion=1.4.13.0-k-NAPI duplex=full ip=192.168.0.63 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s
       resources: irq:21 memory:93003000-93003fff memory:93002000-93002fff memory:93000000-93001fff memory:93040000-9307ffff

From the VM (Ubuntu 18.04.3, linux 4.16.0-041600-generic), writing to the sshfs mount:

    dd if=/dev/zero of=./output bs=8k count=20k; rm -f ./output
    20480+0 records in
    20480+0 records out
    167772160 bytes (168 MB, 160 MiB) copied, 7.768 s, 21.6 MB/s

From the host (Linux unRaid 4.19.56-Unraid):

    dd if=/dev/zero of=./output bs=8k count=20k; rm -f ./output
    20480+0 records in
    20480+0 records out
    167772160 bytes (168 MB, 160 MiB) copied, 1.22464 s, 137 MB/s

fstab:

    sshfs#[email protected]:/mnt/user/work /mnt/work fuse IdentityFile=/home/username/.ssh/id_rsa,uid=userid,gid=usergroup,users,idmap=user,noatime,allow_other,_netdev,reconnect,exec,rw 0 0

Thank you, Geoffrey

Edited October 22, 2019 by gcoppin
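One caveat on the dd numbers above: `bs=8k` with no sync flag mostly measures small-write round-trips (and, on the host side, the page cache) rather than sustained throughput, so it can exaggerate the gap. A fairer sketch uses a larger block size and forces a flush before dd exits; `TARGET` here is a placeholder variable, not anything from this thread — point it at the mount you want to test, e.g. /mnt/work:

```shell
# Write 256 MiB and fdatasync before dd exits, so the reported rate
# reflects the target storage/mount rather than RAM.
# TARGET is an example variable; defaults to /tmp for safety.
TARGET="${TARGET:-/tmp}"
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fdatasync
rm -f "$TARGET/ddtest.bin"
```

Comparing this same command on /mnt/cache, /mnt/user, and the sshfs mount separates disk overhead from network/FUSE overhead.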
testdasi Posted October 23, 2019

@gcoppin: This topic was already resurrected once, so you have now made it into a zombie. Please raise a separate topic with your own details. Whatever was done in 2015 and 2018 is unlikely to have any relevance in 2019.