Performance degradation



So I recently upgraded my server to an X399 board with a 2920X and decided to switch to a Windows 10 VM as a daily driver and later sell off my desktop. At first everything seemed to run great, but recently my VM has started slowing down to the point that I'm running at 22 FPS in my favorite game.

 

[Screenshot: system topology]

I've stubbed 4 physical cores (8,20,9,21,10,22,11,23) for my VM and passed through my GTX 1080 Ti and an NVMe drive. PCI 10de:1b06 is the 1080 Ti, btw. I have an RX 470 in the primary slot for Unraid to use and maybe for future VM use. The only thing I can think of that might be the issue is that I have the NVMe drive passed through using /dev/nvme0n1 rather than stubbing it, but I don't see how that would make a difference.

Is there anything I'm overlooking, or is this just what I can expect from gaming on my VM?

Let me know if there's any more info that might help.


I'm not as familiar with AMD CPUs as I am with Intel, but usually the first several cores are the "performance cores", so that's where I'd look first. Whatever game, resolution, and settings you're running, I would expect a lot more than 22 FPS out of a 1080 Ti. Head to the CPU Pinning tab of Settings.

Settings -> CPU Pinning -> CPU Isolation

 

I'd recommend you adjust the CPU isolation for Unraid and then use those isolated cores for the Windows 10 VM. Leave all the other cores alone for the Docker containers as well and just adjust the isolation for the first four cores. In my case it's 0-8, 1-9.

[Screenshot: CPU Isolation settings]

 

After you have isolated those cores, restart the Unraid server, head back to the CPU Pinning tab, and under CPU Pinning select those same cores you just isolated for your Windows 10 VM.

 

After that has been done, boot up your VM, run a benchmark like Unigine Heaven, and see how that goes. If you don't see much improvement, add another four cores to the CPU Isolation and the VM.
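
For reference, here's a minimal sketch of what the <cputune> section of the VM XML could end up looking like if you isolate the first four cores plus their hyperthread siblings and pin the VM to them. The n / n+12 sibling pairing is an assumption based on the pairs you listed (8/20, 9/21, ...); check your own CPU Pinning page for the real pairs.

  <cputune>
    <!-- hypothetical pinning: first four physical cores plus their
         hyperthread siblings (pairs assumed to be n / n+12) -->
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='13'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='14'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='15'/>
  </cputune>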

5 minutes ago, LiableLlama said:

I'd recommend you adjust the CPU isolation for Unraid and then use those isolated cores for the Windows 10 VM. [...] If you don't see much improvement, add another four cores to the CPU Isolation and the VM.

I have the last 4 cores isolated because they're on the same die as the PCIe slot my GPU is in. I may be wrong, but I was under the impression that crossing dies would add latency. Is that incorrect?

Just now, thatnovaguy said:

I have the last 4 cores isolated because they're on the same die as the PCIe slot my GPU is in. I may be wrong, but I was under the impression that crossing dies would add latency. Is that incorrect?

As I mentioned, I'm not as familiar with AMD CPUs as I am with Intel. That might be true because of AMD's Infinity Fabric, but I'm not sure. It can't hurt to give it a shot anyway.

5 hours ago, LiableLlama said:

I'd recommend you adjust the CPU isolation for Unraid and then use those isolated cores for the Windows 10 VM. Leave all the other cores alone for the Docker containers as well and just adjust the isolation for the first four cores. In my case it's 0-8, 1-9.

Just to clear things up: isolating cores means you isolate them from Unraid's own use. Core 0 is ALWAYS used by Unraid itself, no matter whether you isolate it or not, or whether a VM or Docker container is using it. Unraid will always use that core for itself.

 

@thatnovaguy I have a 1950X and have the following cores isolated.

[Screenshot: isolated CPU cores]

 

All these cores have been used by my main Windows 10 VM for almost 2 years now, and I also use a 1080 Ti and an NVMe drive in this VM. No issues or performance degradation so far. How much RAM do you have and how much is assigned to that VM?

 

Edit:

What does the Windows defragmentation tool show you? And please post the XML of that VM.

 

 


I disabled the scheduled defrag and optimization per spaceinvaderone's video.

[Screenshot: Windows Optimize Drives settings]

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='1'>
  <name>WyonBox</name>
  <uuid>089090d5-b998-0420-3ffe-3350a310c7e3</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='20'/>
    <vcpupin vcpu='2' cpuset='9'/>
    <vcpupin vcpu='3' cpuset='21'/>
    <vcpupin vcpu='4' cpuset='10'/>
    <vcpupin vcpu='5' cpuset='22'/>
    <vcpupin vcpu='6' cpuset='11'/>
    <vcpupin vcpu='7' cpuset='23'/>
    <emulatorpin cpuset='0,12'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/089090d5-b998-0420-3ffe-3350a310c7e3_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/nvme0n1'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVDisks/Windows 10/Games.img'/>
      <backingStore/>
      <target dev='hdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs2/virtio-win-0.1.160-1.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='sata0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:4a:24:e0'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-WyonBox/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc24a'/>
        <address bus='3' device='3'/>
      </source>
      <alias name='hostdev3'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0xfeed'/>
        <product id='0x6060'/>
        <address bus='3' device='2'/>
      </source>
      <alias name='hostdev4'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

The only thing I know for sure I need to change is removing the virtio ISO. Outside of that I'm lost in terms of XML.

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/nvme0n1'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

Your NVMe is defined as a SATA device. This isn't the best option; virtio or SCSI have less overhead than SATA, so performance should be better. But you can't switch to SCSI without installing the drivers first. Attach another dummy SCSI vdisk, 1GB in size, and install the SCSI driver for it. After that you can remove the dummy disk and change the controller type of your NVMe to SCSI, and it should find the driver for it. For virtio you shouldn't need to install the driver, because you already did for your Games vdisk.
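
To illustrate (a rough sketch only, not a drop-in config): the dummy disk could be a small extra vdisk on the SCSI bus, and once the driver is installed inside Windows the NVMe entry itself gets switched from bus='sata' to bus='scsi'. The dummy.img path below is just a placeholder, and the virtio-scsi controller line is normally added by Unraid/libvirt automatically when a SCSI disk is present.

    <!-- hypothetical 1GB dummy vdisk on the SCSI bus, only there to get
         the virtio-scsi driver installed inside Windows -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVDisks/Windows 10/dummy.img'/>
      <target dev='hde' bus='scsi'/>
    </disk>
    <!-- after the driver is installed, remove the dummy disk and switch
         the NVMe entry from bus='sata' to bus='scsi' -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/nvme0n1'/>
      <target dev='hdc' bus='scsi'/>
      <boot order='1'/>
    </disk>
    <!-- bus='scsi' needs a virtio-scsi controller -->
    <controller type='scsi' index='0' model='virtio-scsi'/>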

 

Also change the following line in the XML after you have done that:


before:

<driver name='qemu' type='raw' cache='writeback'/>

after:

<driver name='qemu' type='raw' cache='writeback' discard='unmap'/>

This way TRIM operations within the VM will be passed through to the controller itself, which handles the optimisation of the NVMe/SSD. For that to work, you must enable the "defragmentation" (Optimize Drives) in Windows and run the optimisation at least once and on a schedule. Without this an SSD or NVMe will become slower and slower over time.

 

Here is an example of how the SSD I passed through is configured.

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='threads' discard='unmap'/>
      <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_1TB_S2RFNX0J606029L'/>
      <backingStore/>
      <target dev='hdc' bus='scsi'/>
      <boot order='3'/>
      <alias name='scsi0-0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

I'm not exactly sure if this will work with the way you passed through your NVMe. My NVMe, which the OS is installed on, is passed through directly, like SpaceInvader showed in one of his videos. This should be the better option, since the OS then has direct access to the NVMe controller itself. This is how it looks for my NVMe:

IOMMU group 45:	[144d:a804] 41:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </hostdev>

Try the first option first and see if the performance gets better.

 

Another thing: where is your Games vdisk sitting? On the array itself, on a spinner? If so, what are your usual read speeds on that disk? If I run games from the array itself, the loading times are really slow. If the game has to load new assets you can sometimes notice some lag, and if other operations happen on that disk while you play, you might get worse performance. That's why I have a whole SSD passed through for games, as shown before. Both of my disks are optimized (defragged) by Windows itself and I've had no issues for almost 2 years now.

 


 

16 hours ago, bastl said:

Your NVMe is defined as a SATA device. This isn't the best option; virtio or SCSI have less overhead than SATA, so performance should be better. [...] Try the first option first and see if the performance gets better.

I'll try switching to SCSI once I get home this evening. Should changing the drive type not help, would I be able to pass through the NVMe controller like you mentioned without reinstalling Windows?

Also, something else that came to mind: the first 2 cores on the die I'm using are assigned to Plex. Would splitting the die like that cause issues?

My games drive is just sitting on the array, but it's more Steam library storage than anything. Since Steam introduced the option to move games from place to place, whenever I decide to play something else I just move it to my NVMe drive. I'm playing FFXIV right now, though, so I don't play much else.

EDIT: The SCSI setting seemed to help some. I'm still dipping down into the 40s, but that's still a gain over the low 20s. The discard='unmap' setting seemed to disable my GPU drivers the first time I booted after adding it; Windows eventually found my Nvidia drivers, but it took a few minutes.

 

EDIT2: After adding a couple more cores to my VM I have excellent FPS now. I think I was bottlenecked by background processes eating up my resources.
 

17 hours ago, thatnovaguy said:

Should changing the drive type not help, would I be able to pass through the NVMe controller like you mentioned without reinstalling Windows?

First of all, have a backup of all your important data and settings in case something breaks in the VM. When I switched to Unraid, my main OS was already installed on the NVMe I wanted to pass through, and it was fairly easy to get that install working without reinstalling the OS. In your case this should be even easier, because I assume you did the install in a VM and the OS doesn't have extra chipset drivers or motherboard tools installed; all of that I had to clean up. Start with a new VM template for the NVMe controller passthrough so you don't mess up your current settings. You can add all your manual edits to the new config just as you had set them up before.

17 hours ago, thatnovaguy said:

Windows eventually found my Nvidia drivers, but it took a few minutes.

Are you using the Nvidia driver Windows installs itself? If so, switch to the one directly from Nvidia and you should see a performance gain in most games.

 

17 hours ago, thatnovaguy said:

background processes eating up my resources.

As I said earlier, if you pin Docker containers to core 0 and they are under heavy load, this can decrease the performance of the whole server. Unraid itself always uses core 0. If that core is packed with transcoding operations from Plex, for example, everything else will slow down.

 

Using cores from both dies for a single VM might not be the best option. Sure, it will work, but if you have heavily latency-dependent workloads you might be better off using cores from only one die. In my case, Unraid and all the Docker containers have access to the first 8 cores, without being pinned to specific cores. The other 8 cores are isolated from Unraid for use by one VM only. This way none of the background operations on the server touch the cores used by the VM.


I've been using Nvidia's driver; I know better than to use the one Windows provides. The background processes I was referring to were within the VM. I now have all 6 cores of the second die stubbed and pinned to the VM, leaving the first die for Unraid to use. I'm going to wait on passing through the NVMe controller, as everything seems to be working well now.



Assuming you are running NUMA (and not UMA), you've got the right NUMA node, but it's better to use 7,19 instead of 9,21. That way you spread your VM cores evenly across multiple CCXs. Your current config overloads one CCX, which will degrade performance. From my testing, overloading a CCX can be equivalent to having one fewer core (e.g. 3+4 is worse than 3+3; I'm on a 2990WX).
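
A sketch of what that swap would look like in the <cputune> section, assuming (as in your current config) that thread pairs are n / n+12 and that cores 6-8 and 9-11 each form one CCX in your numbering:

  <cputune>
    <!-- two cores per CCX (7,8 and 10,11) instead of three on one CCX (9,10,11) -->
    <vcpupin vcpu='0' cpuset='7'/>
    <vcpupin vcpu='1' cpuset='19'/>
    <vcpupin vcpu='2' cpuset='8'/>
    <vcpupin vcpu='3' cpuset='20'/>
    <vcpupin vcpu='4' cpuset='10'/>
    <vcpupin vcpu='5' cpuset='22'/>
    <vcpupin vcpu='6' cpuset='11'/>
    <vcpupin vcpu='7' cpuset='23'/>
    <emulatorpin cpuset='0,12'/>
  </cputune>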

 

Then add this block to your XML below </cputune>:

  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>

That will force your RAM to be allocated to NUMA node 1 for better performance.

 

Next, pass through your NVMe as a PCIe device and not as a disk device. You will have to jump through a few hoops to get a VM booted from NVMe, btw. Using a disk device will bottleneck the NVMe drive (overly simplistically speaking, it means I/O takes the long way around through core 0), plus there's no TRIM support.
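
A rough sketch of what the hostdev entry for the NVMe controller could look like, modelled on bastl's example above. The 0x43:00.0 source address is purely hypothetical; look up the real address of your NVMe controller in your IOMMU groups / System Devices, and remember to remove the old /dev/nvme0n1 disk entry and its boot order once the controller is passed through.

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- hypothetical address: replace with your NVMe controller's
             address from the IOMMU groups listing -->
        <address domain='0x0000' bus='0x43' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
    </hostdev>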

 

Last but not least, restart the VM. I notice this more with the i440fx machine type: performance degrades over time until a VM reboot. It's less pronounced with Q35, but still there.
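
For reference, the machine type is a single line in the <os> section of the XML; a Q35 variant should be available with this QEMU 3.1 build. This is an illustration only: switching machine types on an existing Windows install usually means Windows has to redetect its devices, so it's best tried on a copy or a new template.

before:

    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>

after:

    <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>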

On 7/24/2019 at 6:58 AM, testdasi said:

Assuming you are running NUMA (and not UMA), you've got the right NUMA node, but it's better to use 7,19 instead of 9,21. [...] Next, pass through your NVMe as a PCIe device and not as a disk device. [...] Last but not least, restart the VM.

Sorry for the long delay between replies, life got busy. I've added the bit you suggested to the XML, and before that I had added the other 2 cores of the second NUMA node to my VM, totaling 12 logical cores now. It relieved most of the bottleneck, but I'm still getting about 20 FPS worse performance than previously. I haven't quite figured out how to boot from NVMe in a VM; I tried reading up on it but I'm a bit lost. Would I be better off just converting my NVMe to a virtio image to boot from and then passing through the NVMe drive for game/program storage? Here's an updated copy of my VM XML. The only other change I've made is adding a USB 3.0 card.

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='3'>
  <name>WyonBox</name>
  <uuid>089090d5-b998-0420-3ffe-3350a310c7e3</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>12</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='6'/>
    <vcpupin vcpu='1' cpuset='18'/>
    <vcpupin vcpu='2' cpuset='7'/>
    <vcpupin vcpu='3' cpuset='19'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='20'/>
    <vcpupin vcpu='6' cpuset='9'/>
    <vcpupin vcpu='7' cpuset='21'/>
    <vcpupin vcpu='8' cpuset='10'/>
    <vcpupin vcpu='9' cpuset='22'/>
    <vcpupin vcpu='10' cpuset='11'/>
    <vcpupin vcpu='11' cpuset='23'/>
    <emulatorpin cpuset='0,12'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/089090d5-b998-0420-3ffe-3350a310c7e3_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='12' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
      <source dev='/dev/nvme0n1'/>
      <backingStore/>
      <target dev='hdc' bus='scsi'/>
      <boot order='1'/>
      <alias name='scsi0-0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVDisks/Windows 10/Games.img'/>
      <backingStore/>
      <target dev='hdd' bus='virtio'/>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:4a:24:e0'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-WyonBox/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x42' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x42' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/>
      </source>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

Thanks so much for all the help, btw. This is a truly great community!

