Unraid Windows VM: extremely bad read/write from Gen4 SSD


je82


Testing out running VMs in Unraid.

 

1. The cache pool where all VM data is hosted is set to "Prefer" and has 2 TB free (Samsung 980 Pro Gen4 NVMe).

2. Installed Windows Server 2019 using the following settings:


<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>Server</name>
  <uuid>be7caecc-f6fe-dbfe-d80d-dd7514e6ae3e</uuid>
  <description>Windows Server 2019</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows Server 2016" icon="windows.png" os="windows2016"/>
  </metadata>
  <memory unit='KiB'>66060288</memory>
  <currentMemory unit='KiB'>66060288</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>32</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='16'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='17'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='18'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='19'/>
    <vcpupin vcpu='8' cpuset='4'/>
    <vcpupin vcpu='9' cpuset='20'/>
    <vcpupin vcpu='10' cpuset='5'/>
    <vcpupin vcpu='11' cpuset='21'/>
    <vcpupin vcpu='12' cpuset='6'/>
    <vcpupin vcpu='13' cpuset='22'/>
    <vcpupin vcpu='14' cpuset='7'/>
    <vcpupin vcpu='15' cpuset='23'/>
    <vcpupin vcpu='16' cpuset='8'/>
    <vcpupin vcpu='17' cpuset='24'/>
    <vcpupin vcpu='18' cpuset='9'/>
    <vcpupin vcpu='19' cpuset='25'/>
    <vcpupin vcpu='20' cpuset='10'/>
    <vcpupin vcpu='21' cpuset='26'/>
    <vcpupin vcpu='22' cpuset='11'/>
    <vcpupin vcpu='23' cpuset='27'/>
    <vcpupin vcpu='24' cpuset='12'/>
    <vcpupin vcpu='25' cpuset='28'/>
    <vcpupin vcpu='26' cpuset='13'/>
    <vcpupin vcpu='27' cpuset='29'/>
    <vcpupin vcpu='28' cpuset='14'/>
    <vcpupin vcpu='29' cpuset='30'/>
    <vcpupin vcpu='30' cpuset='15'/>
    <vcpupin vcpu='31' cpuset='31'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/be7caecc-f6fe-dbfe-d80d-dd7514e6ae3e_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='16' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/VMData/Server/vdisk1.img' index='3'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Applications/Operating Systems/Windows 2019 Server/SW_DVD9_WIN_SERVER_STD_CORE_2019_1809.18_64BIT_ENGLISH_DC_STD_MLF_X22-74330.ISO' index='2'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Applications/Operating Systems/virtio-win-0.1.221-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:31:fd:d7'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/16'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/16'>
      <source path='/dev/pts/16'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Server/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='sv'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

3. Testing performance via Windows Remote Desktop, the machine feels extremely sluggish. While installing another nested VM inside this VM, I can see in the Unraid GUI that it's writing to disk at around 60 MB/s; this disk should easily manage well above 5000 MB/s.

 

Any performance tips? What have I done wrong? Or is it just this bad?
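(One way to narrow this down is to benchmark the cache pool directly on the Unraid host first, separating a host-side bottleneck from a VM-side one. A minimal sketch using dd; the /mnt/cache path and file name are assumptions, adjust to your pool name:)

```shell
#!/bin/sh
# Rough sequential-write test against the cache pool, run on the Unraid host.
# conv=fdatasync forces the data to disk before dd reports a rate, so the
# page cache does not inflate the number.
seq_write_test() {
    target=$1   # file to write, e.g. /mnt/cache/ddtest.bin
    size_mib=$2 # amount to write, in MiB
    dd if=/dev/zero of="$target" bs=1M count="$size_mib" conv=fdatasync
    rm -f "$target"
}

# Example (on the host): seq_write_test /mnt/cache/ddtest.bin 1024
```

If the host-side rate is also far below spec, the problem sits below the VM layer entirely. Note also that one known source of vdisk overhead on Unraid is hosting the image under /mnt/user, which routes I/O through the user-share FUSE layer; the XML above points the vdisk at /mnt/user/VMData/Server/vdisk1.img, while a direct pool path such as /mnt/cache/... bypasses that layer.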

24 minutes ago, Vr2Io said:

Passing through the whole NVMe instead of a vdisk will give better performance. You can boot the NVMe directly in the VM.

Thanks, testing this now. Do you know if I should still install the balloon, vioserial, and viostor drivers?

 

My guess is no. I just installed the NIC driver for now, and we'll see how performance is.


OK, performance numbers are back. I simply chose a manual install and pointed the VM at /dev/nvme0n1, which I guess is what passing through means?

 

[image: benchmark screenshot]

 

The performance "feels better" but is still far from anything useful for a hypervisor; I'd estimate it's running at maybe 35% of actual performance.

 

What more can I try? Do I need to install specific drivers in Windows to get full NVMe bandwidth? Right now I see two undetected PCIe devices in Device Manager, and the disk shows up as "QEMU HARDDISK".

I did not install balloon, vioserial, or viostor.

 

[image: Device Manager screenshot]

 

I will run some Windows Updates and see if drivers are found and installed.


OK, I've now passed through the NVMe by binding it to vfio at boot, and speeds seem better; the VM feels all-around more responsive.

Speeds are nowhere near NVMe Gen4 though, but this isn't supposed to be a performance monster, just equal to or better than my old Threadripper bare-metal host, so I can shut that down and run everything on Unraid to save some power.
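(For reference, the at-boot binding can be sketched like this. The PCI address 0000:01:00.0 and vendor:device ID 144d:a80a are placeholders; use whatever lspci reports for your drive. On recent Unraid versions the Tools > System Devices page writes this file for you:)

```shell
# Find the NVMe controller's PCI address and [vendor:device] ID:
lspci -nn | grep -i 'non-volatile'
# e.g.: 01:00.0 Non-Volatile memory controller [0108]: Samsung ... [144d:a80a]

# Bind it to vfio-pci at boot (Unraid 6.9+ reads /boot/config/vfio-pci.cfg):
echo 'BIND=0000:01:00.0|144d:a80a' >> /boot/config/vfio-pci.cfg
# Reboot, then attach the device to the VM as a PCI passthrough device.
```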

 

[image: benchmark screenshot]

 

If you have any tips on why I seem limited to around 3500 MB/s on this NVMe when it easily does 6000+ bare-metal, I'd appreciate them; is the overhead really that large? Not that sequential performance matters much on this server, since it won't serve large files; random read/write is more important for the workload it will run.

 

Any tips to further optimize performance are welcome, cheers!


Ouch. After installing the Hyper-V role in the VM and rebooting, the machine feels very unresponsive. There are definitely some strange driver problems here. What are your experiences virtualizing anything other than Windows? Is it this bad with Linux too?

 

It looks like my sequential read/writes are bottoming out again too. Not sure if the Hyper-V role installation caused this or if it just comes and goes.


More trial and error: it seems that whenever I install the Hyper-V role on the guest VM, random read/writes on the NVMe drive drop by more than half. Is this working as intended, is KVM just terrible at nesting Hyper-V, or is it perhaps an issue with current KVM builds? Anyone know anything? It seems unreasonably slow to be "working as intended".

 

[image: benchmark without the Hyper-V role installed]

 

[image: benchmark with the Hyper-V role installed]
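(Context for the slowdown: installing the Hyper-V role makes Windows load its own hypervisor, so from KVM's point of view the guest becomes a nested hypervisor, and nested virtualization is known to cost noticeable CPU and I/O performance. A quick host-side check, as a sketch using the standard KVM module parameters:)

```shell
#!/bin/sh
# Report whether the host allows nested virtualization.
# kvm_intel applies to Intel hosts, kvm_amd to AMD; only one will be loaded.
nested=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
      || cat /sys/module/kvm_amd/parameters/nested 2>/dev/null \
      || echo unknown)
echo "nested virtualization: $nested"
# "Y" or "1" means nesting is available; even then, a nested hypervisor
# pays a real overhead, consistent with the slowdown seen here.
```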


I realize now that KVM cannot pass through Gen4 PCIe devices; is this because Unraid hasn't been updated past "QEMU 4.0.0", or did I miss something?

EDIT: Never mind, the QEMU version is 6.2.0, so it's definitely something missing in my config.

EDIT 2: Never mind again; I realize now the port is actually PCIe 3.
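(That conclusion can be confirmed from the host with lspci: a Gen3 x4 link at 8 GT/s tops out around 3.5 GB/s after protocol overhead, which matches the ~3500 MB/s ceiling above. The 01:00.0 address below is an example; find yours with `lspci -nn | grep -i nvme`.)

```shell
#!/bin/sh
# Show the PCIe link the NVMe controller actually negotiated.
# LnkCap = what the device supports, LnkSta = what the slot gave it.
# 8GT/s = PCIe 3.0, 16GT/s = PCIe 4.0.
lspci -vv -s 01:00.0 2>/dev/null | grep -E 'LnkCap:|LnkSta:' \
  || echo "run on the host, with your drive's PCI address"
```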


Interesting findings, though probably not very Unraid-related: I enabled BitLocker, which should result in significantly lower I/O performance, and the Q32T1 write performance doubled for whatever reason while the read halved. So it swapped the two around, but they are essentially the same overall; Q1T1 was a little lower, but not by much.

