UnRaid VM Win10 SSD passthrough slow issue



Hello, I just set up a new Unraid build, created a Windows 10 VM, and passed an SSD (Unassigned Device) through to it.

 

I have noticed that read/write performance in the Windows 10 VM is not great. Compared with native Windows 10, applications feel noticeably slower to open, so I ran the disk tests below. Is this result expected, or is there anything else that can be improved? Thank you.

 

CPU: Xeon E3-1245v3 (Assigned 3c 3t to this VM)

Motherboard: Gigabyte H97n-wifi

SSD: Crucial MX500

 

Configurations I have tested, all with the same result:

1. Bus: SATA, VirtIO, SCSI

2. Win 10 installed on SSD in UnRaid cache

3. Win 10 installed on SSD passthrough (Unassigned Device)
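As a cross-check outside Windows, a rough sequential-throughput test can also be run from the Unraid console with `dd` (a sketch only; the temp file here is illustrative, so point it at the disk or share you actually want to measure):

```shell
# Rough sequential write/read check with dd.
# conv=fdatasync forces the data to hit the device before dd reports a speed,
# so the host page cache doesn't inflate the write number.
TESTFILE=$(mktemp)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```

The read pass can be served from the page cache, so treat it as an upper bound; a purpose-built tool such as fio gives more controlled results.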

 

## VM Win 10 in UnRaid

(disk benchmark screenshot)

 

## Native Win 10

(disk benchmark screenshot)

 

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>0a011a70-3b26-6c5b-c1ee-6a0b4f7b3ffd</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/0a011a70-3b26-6c5b-c1ee-6a0b4f7b3ffd_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='3' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-CT500MX500SSD1_1807E10EA5F2'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a9:3a:49'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='3'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/gtx1060.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0x0825'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc07e'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x010d'/>
      </source>
      <address type='usb' bus='0' port='4'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

16 hours ago, Coffee said:

<disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-CT500MX500SSD1_1807E10EA5F2'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

SATA isn't the best option: it has more overhead and latency than virtio or SCSI. If you switch the bus, you need the drivers installed in Windows first or it won't boot. Add a small dummy vDisk (1G) as SCSI first, mount the virtio driver ISO, boot the VM, and install the driver (vioscsi/win10/amd64). You can remove that disk afterwards. Then change your XML as in the following example.

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <source dev='/dev/disk/by-id/ata-CT500MX500SSD1_1807E10EA5F2'/>
      <target dev='hdc' bus='scsi'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

You should see better performance with this. The host's RAM isn't used for caching, and TRIM operations performed in the guest are passed through to the device itself, which isn't possible with SATA.
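One caveat, assuming the change above: libvirt's default SCSI controller model is not virtio-scsi, so it's worth checking that the XML also contains a virtio-scsi controller. A minimal sketch (the slot address is illustrative):

```xml
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</controller>
```

With `bus='scsi'` on the disk and this controller present, the disk's `<address type='drive' .../>` line attaches it to the virtio-scsi controller.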

Edited by bastl

I had this a while ago and couldn't get anywhere near bare metal performance. 

 

You can get quite complex with this and start looking into IOthread pinning to help, but there's only so much you can do with a virtual disk controller vs a hardware one. 

You'll probably notice a bit of a difference if you use the emulatorpin option to take the workload off CPU 0 (which will be competing with Unraid's own processes). If you have a few cores to spare, give it a hyperthreaded pair (one that you're not already using in the VM). 
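As a sketch, emulatorpin goes inside the existing `<cputune>` block. The cpuset values are illustrative; on the OP's E3-1245v3 (4c/8t) the only pair not already pinned to the VM is 0/4, though that pair includes CPU 0, so freeing up a different pair from the VM would avoid CPU 0 entirely:

```xml
<cputune>
  <!-- existing vcpupin entries stay as they are -->
  <emulatorpin cpuset='0,4'/>
</cputune>
```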


In the end I got a PCIe riser for an NVMe drive and passed that through to the VM. You get about 90% of bare-metal performance, as the controller is part of the NVMe drive itself.

 

Good luck. 

  • 8 months later...
On 6/10/2019 at 11:55 AM, billington.mark said:

I had this a while ago and couldn't get anywhere near bare metal performance. 

 

You can get quite complex with this and start looking into IOthread pinning to help, but there's only so much you can do with a virtual disk controller vs a hardware one. 

You'll probably notice a bit of a difference if you use the emulatorpin option to take the workload off CPU 0 (which will be competing with Unraid's own processes). If you have a few cores to spare, give it a hyperthreaded pair (one that you're not already using in the VM). 


In the end I got a PCIe riser for an NVMe drive and passed that through to the VM. You get about 90% of bare-metal performance, as the controller is part of the NVMe drive itself.

 

Good luck. 

Thanks for posting! I just bought the v2 version of the Asus Hyper m2 nvme x4 Riser based on your recommendations, and I was wondering what vDisk bus you used when passing through your nvme drive (VirtIO, SCSI, or SATA)? I used SATA, but it doesn't seem to be quite as 'snappy' as I had hoped. Did you use SCSI like suggested above?

Edited by DoeBoye
2 hours ago, DoeBoye said:

Thanks for posting! I just bought the v2 version of the Asus Hyper m2 nvme x4 Riser based on your recommendations, and I was wondering what vDisk bus you used when passing through your nvme drive (VirtIO, SCSI, or SATA)? I used SATA, but it doesn't seem to be quite as 'snappy' as I had hoped. Did you use SCSI like suggested above?

Neither. To get ~90% of bare-metal performance, you need to pass the drive through as a PCIe device (i.e. stub it, then select it under the other PCIe devices section of the VM template GUI). Everything else won't get close to 90%, and certainly not SATA.
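For reference, a stubbed NVMe drive passed through this way ends up in the XML as a `<hostdev>` entry, just like the GPU in the OP's config. A sketch with an illustrative source address (use the NVMe controller's actual bus/slot from Tools > System Devices):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- illustrative address; substitute the NVMe controller's real PCI address -->
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
</hostdev>
```

Since the whole controller is handed to the guest, no `<disk>` entry or vDisk bus choice is involved at all.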

