goober07

[Solved] KVM Overhead / DPC Latency ?

24 posts in this topic

I'm lost. I've tried my Windows 10 VM with OVMF i440fx, SeaBIOS i440fx, and SeaBIOS Q35, and tried both enabling and disabling MSI. I'm still unable to resolve the high DPC latency that causes audio pops, echo, stutter, and crackle. It occurs on both HDMI audio and the USB headset. Removing the USB headset helped, but simply playing a YouTube video with another browser open is unbearable.

 

I'm also seeing a huge difference between the CPU usage reported in Windows 10 and in the Unraid dashboard. Running 1 thread of prime95 shows ~30% in Windows but 44-50% in the webUI.

 

Is there something I've done wrong? There are no other VMs or activities running that would explain the difference in CPU usage, and the latency rules out gaming and media playback (edit: without stutter/crackle).

 

current XML:

<domain type='kvm' id='1'>
  <name>W10-613</name>
  <uuid>b230fbaf-8c0c-e446-13cc-dd5578648779</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/ssd/Windows10/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/Windows10/Windows10.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/VirtIO/virtio0.1.109.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:19:9a:c0'/>
      <source bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/W10-613.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x00dd'/>
        <address bus='1' device='3'/>
      </source>
      <alias name='hostdev2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc01d'/>
        <address bus='1' device='4'/>
      </source>
      <alias name='hostdev3'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

System Devices

PCI Devices

00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)
00:16.0 Communication controller: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 04)
00:1c.0 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 (rev d4)
00:1c.4 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 (rev d4)
00:1c.6 PCI bridge: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #7 (rev d4)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Z87 Express LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller (rev 04)
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 770] (rev a1)
01:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
04:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
IOMMU Groups

/sys/kernel/iommu_groups/0/devices/0000:00:00.0
/sys/kernel/iommu_groups/1/devices/0000:00:01.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.0
/sys/kernel/iommu_groups/1/devices/0000:01:00.1
/sys/kernel/iommu_groups/2/devices/0000:00:02.0
/sys/kernel/iommu_groups/3/devices/0000:00:03.0
/sys/kernel/iommu_groups/4/devices/0000:00:14.0
/sys/kernel/iommu_groups/5/devices/0000:00:16.0
/sys/kernel/iommu_groups/6/devices/0000:00:1a.0
/sys/kernel/iommu_groups/7/devices/0000:00:1c.0
/sys/kernel/iommu_groups/8/devices/0000:00:1c.4
/sys/kernel/iommu_groups/9/devices/0000:00:1c.6
/sys/kernel/iommu_groups/10/devices/0000:00:1d.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.0
/sys/kernel/iommu_groups/11/devices/0000:00:1f.2
/sys/kernel/iommu_groups/11/devices/0000:00:1f.3
/sys/kernel/iommu_groups/12/devices/0000:03:00.0
/sys/kernel/iommu_groups/13/devices/0000:04:00.0
USB Devices

Bus 004 Device 002: ID 8087:8000 Intel Corp. 
Bus 004 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 8087:8008 Intel Corp. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 045e:00dd Microsoft Corp. Comfort Curve Keyboard 2000 V1.0
Bus 001 Device 002: ID 0951:1665 Kingston Technology Digital DataTraveler SE9 64GB
Bus 001 Device 004: ID 046d:c01d Logitech, Inc. MX510 Optical Mouse
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
SCSI Devices

[0:0:0:0]    disk                              1.00  /dev/sda 
[1:0:0:0]    disk    ATA      WDC WD40EFRX-68W 0A82  /dev/sdb 
[2:0:0:0]    disk    ATA      WDC WD40EFRX-68W 0A82  /dev/sdc 
[5:0:0:0]    disk    ATA      Crucial_CT256MX1 MU01  /dev/sdd 
[6:0:0:0]    cd/dvd  HL-DT-ST BD-RE  WH14NS40  1.01  /dev/sr0 
[7:0:0:0]    disk    ATA      WDC WD1600AAJS-0 1D58  /dev/sde 

 

i5-4670 CPU

MSI Z87M-G43 Motherboard (latest bios)

Asmedia ASM1062 SATA card

NVIDIA GTX 770 passed through to Windows 10

 

Unraid 6.1.3

[Attached screenshots: DPC latency checker (DPC.png), Windows Task Manager (taskmanager.png), Unraid webUI (webui.png)]


I think I have something you can try to help with this. I'll post details on Monday as I need to do some testing with it first.


I was thinking about passing the ASMedia card, with the SSD attached, through to the VM. I'm guessing this would eliminate the vdisk image and the viostor driver, at the expense of making the SSD unavailable to Docker and other VMs? But LatencyMon blamed USB, Nvidia, networking... pretty much everything was taking too long to execute, so I don't see my proposal helping.
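For what it's worth, passing the ASMedia controller through would follow the same pattern as the existing GPU hostdev entries in the XML. Since 04:00.0 sits alone in IOMMU group 13 (see the listing above), the entry might look like this — a sketch only, untested here, with the address taken from the PCI listing:

```xml
<!-- ASMedia ASM1062 at host address 04:00.0 (alone in IOMMU group 13) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```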

 

Right now I have:

2x4TB WD Reds

1x160GB WD Blue (cache)

1x240GB Samsung SSD (mounted by editing the go script)

 

I don't want to use the SSD as cache; I'd like most of it available for Windows and games.


I see from another thread that version 0.1.110 of the virtio drivers is available. I'll try installing it and see if anything changes. The VMs in question were created with 0.1.109, since Windows 10 wouldn't recognize storage with the "stable" drivers.

 

Update: the amd64 W8.1 drivers are unchanged in 0.1.110, so cross that off the list of potential solutions.


Has anyone else experienced this issue?

 

Should I be running different hardware, or am I missing something in the XML?


Sorry Goober.  Has been a long week.  Will try to get to this tomorrow or Friday.


Sorry Goober.  Has been a long week.  Will try to get to this tomorrow or Friday.

 

Not a problem - I've just got this weekend & next week to work on it before traveling for two weeks of training.


Off topic, but what do I need to change in the XML to make the Microsoft boot manager permanent? Every time I power off the server, the next VM launch brings up the "press any key to boot from CD" followed by the shell. I've got to manually go into the boot maintenance manager from i440fx bios and add the boot option by selecting "no volume label, efi, boot, bootx64.efi"


Off topic, but what do I need to change in the XML to make the Microsoft boot manager permanent? Every time I power off the server, the next VM launch brings up the "press any key to boot from CD" followed by the shell. I've got to manually go into the boot maintenance manager from i440fx bios and add the boot option by selecting "no volume label, efi, boot, bootx64.efi"

You could just disconnect the install CD by removing the image file from the GUI.  The boot order should be changeable as well but I can't say I've tried that so cannot confirm.


Off topic, but what do I need to change in the XML to make the Microsoft boot manager permanent? Every time I power off the server, the next VM launch brings up the "press any key to boot from CD" followed by the shell. I've got to manually go into the boot maintenance manager from i440fx bios and add the boot option by selecting "no volume label, efi, boot, bootx64.efi"

You could just disconnect the install CD by removing the image file from the GUI.  The boot order should be changeable as well but I can't say I've tried that so cannot confirm.

 

Removing the boot disc solves the "press any key" as expected, but changing the boot order doesn't persist. I've got to add the boot option, name it, change the boot order, and then I can start & stop the VM as many times as I want. Power the server down, then next time I'm back to booting the efi shell instead of Windows.
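One thing worth checking: this symptom usually means the EFI variable store isn't being persisted anywhere, so boot entries added in the firmware menu vanish on power-down. If this libvirt build supports it, a split OVMF layout with a per-VM NVRAM file is the usual fix. A sketch of what the `<os>` section would look like — the split CODE/VARS file names and paths here are illustrative and may not ship with this Unraid version:

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  <!-- read-only firmware code, plus a writable per-VM vars file so EFI
       boot entries survive a full power-down (paths illustrative) -->
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/usr/share/qemu/ovmf-x64/W10-613_VARS-pure-efi.fd</nvram>
</os>
```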

Removing the boot disc solves the "press any key" as expected, but changing the boot order doesn't persist. I've got to add the boot option, name it, change the boot order, and then I can start & stop the VM as many times as I want. Power the server down, then next time I'm back to booting the efi shell instead of Windows.

Not sure what to tell you, as all I did to boot my 3 Windows VMs (two 32-bit Win7 and one 64-bit Win7) was remove the installation image from the GUI, and they then booted from the virtual hard drive image file every time.


Been doing some testing and research on this.  I think the issue is your CPU may be a little lightweight here.  Do you have any other VMs, docker containers, plugins, or anything else installed / running on this system?  Can you upload your diagnostics (from the Tools -> Diagnostics page and clicking Download)?

 

If you try reducing the number of CPUs assigned to the VM (and perhaps try just pinning CPU 1 and 4), does that change the results with latencymon?

 

I can tell you that in my setup, I'm using the on-board audio (not HDMI through the video card or a USB device, but through the onboard analog audio which it doesn't look like your system has).  I don't experience audio pops or anything like that and I still see high DPC latency reported for my config.


It's just Unraid and the affected VM running. I have docker enabled, but no containers running during this test. No reason I'm aware of for 1 thread to take up 50% CPU utilization in the webUI while showing 25-30% in the Windows task manager, unless I'm misunderstanding how host-passthrough works. I imagined it like the GPU (very little overhead).

 

Nothing was accessing the array at the time, just displaying the webUI. Serving that page up shouldn't be a large load for Unraid.

 

I will upload diagnostics tonight and test out other CPU pinnings. More horsepower would be great, but with limited funds I'd be better off building a separate box for Unraid and reverting the i5 machine to media playback/gaming duty. (Edit: Just need a case and AMD APU... The rest I have laying around from a cancelled HTPC project)

 


It's just Unraid and the affected VM running. I have docker enabled, but no containers running during this test. No reason I'm aware of for 1 thread to take up 50% CPU utilization in the webUI while showing 25-30% in the Windows task manager, unless I'm misunderstanding how host-passthrough works. I imagined it like the GPU (very little overhead).

 

Looking at task manager in Windows and comparing that to the unRAID dashboard is not a scientific test at all and shouldn't be used to gauge overhead.  More importantly, overhead doesn't even matter if you weren't using it anyways, so I'd focus on problems that actually affect your usage of the system, not numbers in a task manager window.

 

More horsepower would be great, but with limited funds I'd be better off building a separate box for Unraid and reverting the i5 machine to media playback/gaming duty. (Edit: Just need a case and AMD APU... The rest I have laying around from a cancelled HTPC project)

 

You're saying case + AMD APU costs more than just upgrading to a better processor?  If so, that may be a better route to go.  The challenge here is that you're using a mid-grade CPU that doesn't have hyper threading support and you are giving all of the threads to the VM while simultaneously using unRAID as the host.  In my setup with my 4790k processor, I dedicate 6 threads to my gaming VM and 2 to the host and my docker apps, keeping my gaming VM completely isolated from the host.

 

I'd like to hear your feedback on reducing the CPU count for the VM and if that doesn't work, I have another syslinux.cfg tweak you can try to isolate two of the CPU cores to nothing but that VM.


Looking at task manager in Windows and comparing that to the unRAID dashboard is not a scientific test at all and shouldn't be used to gauge overhead.  More importantly, overhead doesn't even matter if you weren't using it anyways, so I'd focus on problems that actually affect your usage of the system, not numbers in a task manager window.

 

I agree it's unscientific, but from a novice's perspective, watching Netflix or having two instances of Chrome open shouldn't bring a mid-level PC to a stutter. It definitely didn't stutter on bare metal. In non-virtualized Windows installations, DPC latency caused by driver issues is the usual culprit for audio problems.

 

You're saying case + AMD APU costs more than just upgrading to a better processor?  If so, that may be a better route to go.  The challenge here is that you're using a mid-grade CPU that doesn't have hyper threading support and you are giving all of the threads to the VM while simultaneously using unRAID as the host.  In my setup with my 4790k processor, I dedicate 6 threads to my gaming VM and 2 to the host and my docker apps, keeping my gaming VM completely isolated from the host.

 

I'd like to hear your feedback on reducing the CPU count for the VM and if that doesn't work, I have another syslinux.cfg tweak you can try to isolate two of the CPU cores to nothing but that VM.

Antec 900 + APU would be $240; then it's just a matter of finding a place to fit it and maintaining the "wife acceptance factor".

 

A 4790K (new) would be $340, at which point it makes more sense to go all-in for a 5820K, new motherboard, etc...

 

Will report back once I get to test the new CPU pinning. The VM wouldn't boot last night and I didn't have time to start with a clean install.



Ok, I've been doing a lot of testing with this today and I think I've figured out some things that could help you. Here's what I want you to try:

 

1 - Click on the word 'flash' on the Main tab and locate the Syslinux Configuration section

2 - Locate the line that says 'menu default'

3 - Two lines below it, you will see a line that starts with the word 'append'; it could look like this:

 

append pcie_acs_override=downstream initrd=/bzroot

 

You may not have the 'pcie_acs_override' part in there.  Change that line to look like this:

 

append isolcpus=2,3 pcie_acs_override=downstream initrd=/bzroot
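After rebooting, it's worth confirming the flag actually reached the running kernel, since a typo in syslinux.cfg fails silently. A quick check — shown here against a sample string for illustration; on the server you would read the real value with `cmdline="$(cat /proc/cmdline)"`:

```shell
# Extract the isolcpus setting from a kernel command line.
# On the live server, replace this sample with: cmdline="$(cat /proc/cmdline)"
cmdline="append isolcpus=2,3 pcie_acs_override=downstream initrd=/bzroot"
isolated="$(printf '%s\n' "$cmdline" | grep -o 'isolcpus=[0-9,-]*')"
echo "$isolated"   # isolcpus=2,3
```

If the echo prints nothing, the kernel never saw the flag and the isolation isn't in effect.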

 

Then change your VM XML to the following:

 

<domain type='kvm' id='1'>
  <name>W10-613</name>
  <uuid>b230fbaf-8c0c-e446-13cc-dd5578648779</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <emulatorpin cpuset='0-1'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/ssd/Windows10/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/Windows10/Windows10.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/VirtIO/virtio0.1.109.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:19:9a:c0'/>
      <source bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/W10-613.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x00dd'/>
        <address bus='1' device='3'/>
      </source>
      <alias name='hostdev2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc01d'/>
        <address bus='1' device='4'/>
      </source>
      <alias name='hostdev3'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

The only sections I changed were these:

 

  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>

  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <emulatorpin cpuset='0-1'/>
  </cputune>

 

Once done, reboot the host for the syslinux change to take effect and try your VM again, using the same tool to measure DPC latency as you did originally.  Let me know if this changes anything for you.
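As a quick sanity check after the reboot (this verification step is my own suggestion, not part of the original instructions), you can confirm from the unRAID console that the kernel actually honored the isolcpus parameter:

```shell
#!/bin/sh
# Verify that the isolcpus kernel parameter took effect after reboot.

isolated_cpus() {
  # /sys/devices/system/cpu/isolated lists the cores removed from the
  # general scheduler; it should print "2-3" with isolcpus=2,3
  cat /sys/devices/system/cpu/isolated 2>/dev/null
}

if grep -q 'isolcpus=' /proc/cmdline; then
  echo "isolcpus active on cores: $(isolated_cpus)"
else
  echo "isolcpus not set -- check syslinux.cfg and reboot"
fi
```

If the first command reports "not set", the syslinux.cfg edit didn't stick (wrong `append` line edited, or the host wasn't actually rebooted).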


<domain type='kvm'>
  <name>W10_2core</name>
  <uuid>d76c51cf-d6a9-84e7-56f8-1b0b4fb14e30</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>10485760</memory>
  <currentMemory unit='KiB'>10485760</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/ssd/windows/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/VirtIO/virtio0.1.109.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:bf:19:a7'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/W10_2core.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc01d'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x00dd'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x0d8c'/>
        <product id='0x000c'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

Clean Windows 10 install, tried pinning cores 1+3 and 2+3 with the same result. Still have the annoying issue where the VM won't boot after powering down the server. I'm forced to add the boot option in OVMF every time. Afterwards, I can start & stop the VM and Windows boots fine, until the next power down.

 

I'll try the syslinux config next.


 

Ok, the VM not booting is actually a known issue with OVMF.  If you type the following commands at the UEFI shell, you can force it to boot:

 

fs0:
cd efi
cd boot
bootx64.efi
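As an aside (this workaround is my own suggestion, not from the post above): the OVMF UEFI shell will also execute a `startup.nsh` script automatically if it finds one on a mapped filesystem, so you can drop the same commands into a file instead of typing them every boot. The `fs0:` mapping is an assumption — check yours with the `map` command first:

```
# startup.nsh -- place in the root of fs0:
fs0:
cd efi\boot
bootx64.efi
```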

 

The syslinux config change is the one I'm really interested in.


Followed your instructions, and the initial results are positive! LatencyMon still says I could experience audio issues, but in practice, this hasn't occurred. I still need to test BluRay MKV rips, but Windows system sounds and youtube videos aren't stuttering or crackling!

 

Edit: USB headset is still a no-go (terrible sound), but I'm going to try to pass through the USB 3 controller to see if that helps. The HDMI audio is the primary requirement for this system, and so far, that appears fixed!

[Attachment: LatencyMon screenshot showing improved DPC latency]



That's great news!  I've been experimenting with DPC latency a lot lately.  I was getting pretty bad latency before, but never noticed any issues as a result.  Nonetheless, in an effort to see which tuning methods would reduce it, we began experimenting, and this isolcpus method seems to be the best at reducing it.  Keep in mind, using this method, you are preventing those two cores from being used for anything but your VM, even when the VM is turned off.  This may impact performance in other areas depending on how many apps you load, or during a parity check/sync/rebuild depending on how many disks you have.  Your i5 should still be able to handle all of that with only 2 of its 4 cores available, but I want to be clear: there is no known method to toggle isolcpus while the system is running.  You have to reboot the system with an updated syslinux.cfg for the change to take effect.

 

Generally speaking, I think isolating CPUs for high-performance personal computing workloads (real-time applications) makes a lot of sense.  It'd be better if we could toggle it live, but since we can't, it's still a trade-off well worth it for now.  That said, I also think it's one of those things that you don't use until you need it.  If your VM works fine without this tweak for your use case, awesome, but if not, try it out.  I think that audio/video capture, encoding, etc. applications will be the ones that need it most.

 

As for your USB headset not working: the audio through that headset isn't using the sound device on your GPU; rather, the headset presents its own sound card to the VM, and that sound card is going through a completely virtual USB2 controller.  So it doesn't surprise me that the issues persist there.  There are two things you can try, and this is the order in which I would try them:

 

1)  Change your XML to use a virtual USB3 controller.  This is something we've been experimenting with and have had pretty good success.  It also has a byproduct of potentially further reducing the CPU overhead for your VM.  Locate this section in your XML:

 

    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>

 

Change that to this:

 

    <controller type='usb' model='nec-xhci' index='0'>
    </controller>

 

Notice that I deleted the <address> line entirely.

 

Save and start your VM with the USB headset passed through with this setting in the XML and see how that works out.

 

2)  Pass through an entire USB3 controller.

 

If the first method fails, this is the last resort and is pretty much guaranteed to solve your issues.
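Passing a whole controller through uses the same hostdev pattern as the GPU entries already in the XML. A sketch, assuming the controller shows up at 00:14.0 in lspci — your address will differ, and the controller needs to be in its own IOMMU group:

```xml
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
      </source>
    </hostdev>
```

Once the controller is passed through, you'd also remove the individual `type='usb'` hostdev entries for devices on that controller, since the guest will own every port on it.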


Update: I was able to play my lossless mkv rip of Avengers (HDMI to the home theater receiver) while multitasking (browsing the web on the primary display (DVI) and running LatencyMon)! I'd recommend isolcpus to anyone experiencing a/v issues that the MSI fix didn't correct. I'd recommend at least mentioning isolcpus in the Unraid 6 manual. You've written a very good explanation/disclaimer in this thread.

 

There was a moment when I thought the whole server locked up... But I haven't reproduced it. When VLC was opening the mkv, the system stalled for ~250ms (per LatencyMon) and made a nasty sound through the receiver. Maybe the disk needed to spin up? Doesn't really matter to me. My project's criteria was parity protection for the media archive, light gaming (Skyrim, League of Legends, EVE) and media playback via HDMI on one machine, and I've finally got that!

 

I'll try the virtual USB3 controller and report back. I figured worst-case passing through the controller would solve the usb audio issue. I need applications like Teamspeak 3 & Skype to work, and I assumed from the start that I'd pass through a usb controller if issues couldn't be resolved.



Glad to hear it!  I'm going to update this thread to "solved" status as we have our solution.  The isolcpus setting is something we are going to add to the web interface as a toggleable setting in an upcoming release.


Just to add that reading the suggestions in this thread seems to have solved my latency problems as well when running a Windows VM.

 

Thanks to all who contributed!



Hey Jonp!

 

Just wondering what exactly was being changed with those cpuset and emulatorpin values. I'm using 8 cores on my VM and they do not start at 0. Would I have to change the emulatorpin and cpusets?

 

Here is my xml

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Player 2</name>
  <uuid>9de93c1e-afa8-f517-cf9e-316ae84373d5</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows7.png" os="windows7"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='20'/>
    <vcpupin vcpu='1' cpuset='21'/>
    <vcpupin vcpu='2' cpuset='22'/>
    <vcpupin vcpu='3' cpuset='23'/>
    <vcpupin vcpu='4' cpuset='24'/>
    <vcpupin vcpu='5' cpuset='25'/>
    <vcpupin vcpu='6' cpuset='26'/>
    <vcpupin vcpu='7' cpuset='27'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/VMs/Player 2/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/Array Vdisks/Player 2/vdisk2.img'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:f5:51:71'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Player 2.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1d.0,bus=root.1,addr=01.0'/>
  </qemu:commandline>
</domain>

 

Thanks for all of your help around here btw! You are a constant life saver lol  ;D

 

EDIT: I think I was looking for this

 

https://lime-technology.com/forum/index.php?topic=49051.0
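For reference, adapting the emulatorpin idea from earlier in the thread to this pinning would look something like the following. The `18-19` host cores are purely an assumption — the emulatorpin should name cores outside the VM's pin set (and outside any isolcpus list), so pick whichever cores your host actually has free:

```xml
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='20'/>
    <vcpupin vcpu='1' cpuset='21'/>
    <vcpupin vcpu='2' cpuset='22'/>
    <vcpupin vcpu='3' cpuset='23'/>
    <vcpupin vcpu='4' cpuset='24'/>
    <vcpupin vcpu='5' cpuset='25'/>
    <vcpupin vcpu='6' cpuset='26'/>
    <vcpupin vcpu='7' cpuset='27'/>
    <emulatorpin cpuset='18-19'/>
  </cputune>
```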

