XML Performance Tuning for SSDs [deprecated]



Damn. That sucks. I thought it was just my config. This really isn't hurting anything that I can see (having to force shutdown after a regular shutdown), but clearly it isn't the ideal solution. Personally, I think the performance benefits are worth it, but not necessarily for every VM type. I have more experimenting and testing to do before I can really say whether this shutdown issue will be easy to fix or not.

 

The x-data-plane setting is an optimization tweak, not a requirement.
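For anyone following along, that tweak is not a native libvirt element; it's passed straight through to QEMU via libvirt's <qemu:commandline> escape hatch. A minimal sketch (the device id `virtio-disk0` must match your disk's alias, and the qemu XML namespace must be declared on the <domain> element):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... rest of the domain definition ... -->
  <qemu:commandline>
    <!-- enable the experimental virtio-blk dataplane on disk virtio-disk0 -->
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>
```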


Interesting article on KVM I/O performance:

 

http://jrs-s.net/2013/05/17/kvm-io-benchmarking/

 

I have a qcow2 Win8.1 VM on an XFS-formatted SSD, and I want to try writeback as the cache method, as it appeared to be one of the faster options. When I change the cache setting from

 

<driver name='qemu' type='qcow2' cache='none' io='native'/>

 

to

 

<driver name='qemu' type='qcow2' cache='writeback' io='native'/>

 

the VM will not start. See the error below:

 

Warning: libvirt_domain_create(): unsupported configuration: native I/O needs either no disk cache or directsync cache mode, QEMU will fallback to aio=threads in /usr/local/emhttp/plugins/dynamix.kvm.manager/classes/libvirt.php on line 838

 

Anyone know what the io= setting should be for writeback?

 

edit: I am using io=threads and it starts nicely. So far it seems just as fast as the raw disk.
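For the record, the combination that starts correctly pairs writeback caching with threaded I/O; per the libvirt warning above, io='native' is only valid with cache='none' or cache='directsync':

```xml
<!-- writeback caching is incompatible with native AIO; use io='threads' instead -->
<driver name='qemu' type='qcow2' cache='writeback' io='threads'/>
```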

 



That article you linked is pretty old (circa 2013).  If you want to experiment with other XML tweaks, your best bet is to start reading here:  http://libvirt.org/formatdomain.html

 

That's the domain XML documentation for libvirt.  Not everything there is current either, but it's a much more authoritative source of up-to-date information than the article you found.

 

We're still experimenting with various configs to find the right balance of performance and features for virtual machines.  A lot of the testing that goes into VMs targets enterprise computing / cloud / datacenter environments, not real-world unRAID use cases on machines in people's homes.  You don't house hundreds of databases that thousands of concurrent users are accessing, so much of the guidance in third-party studies doesn't apply to a home system serving at most a few concurrent users.

 

The first thing I'd be curious to know is whether you're having a performance issue.  The suggestion I made in this thread was really just to get people to "try this" and see if they noticed anything that improved their experience when raw images are on SSDs.  If you weren't feeling you had performance issues before, I wouldn't start tweaking XML now.  Embrace the tried-and-true saying: if it ain't broke, don't fix it.

 

If you ARE having a performance issue, please describe it in more detail first so we can understand the scenario you're in and why this might be happening.  Some things can be adjusted from inside the XML to improve performance, other things could benefit from restructuring how you use VMs / containers, and still others could be hardware-specific.


If you weren't feeling you had performance issues before, I wouldn't start tweaking XML now.  Embrace the tried-and-true saying: if it ain't broke, don't fix it.

Hmmm, I always thought the saying was: if it ain't broke, take it apart and watch the parts fly everywhere, then put it all back together again! And once it's all back together and there is a small screw left over, just say "meh, doesn't need it anyways"...




Thanks for the link; I will check it out. I have been having ongoing performance issues, particularly with streaming from a Plex server on the host to Win8.1 and Win10 guests (I should probably start a separate thread). I have also been having issues with DPC latency on the Windows guests. Both issues seem to be at least partly related to disk I/O and guest clock settings.

The guest clock timer settings have had the greatest effect on my latency issues, along with moving the GTX 970 GPU onto Message Signaled Interrupts (MSI).

Switching the virtio disk to iothreads and a raw image has also significantly improved Plex playback, which just has not been as smooth as on my dedicated HTPC. I still want to get to that level of performance. All of this is complicated by Nvidia's lack of VM support for their newer cards.
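For reference, a disk stanza along those lines (raw image, threaded I/O) looks roughly like this; the file path here is illustrative, and the dataplane tweak itself still goes through <qemu:commandline>:

```xml
<disk type='file' device='disk'>
  <!-- raw image with threaded AIO; avoids the native-I/O cache-mode restriction -->
  <driver name='qemu' type='raw' io='threads'/>
  <source file='/mnt/cache/vm/win81.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```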

Who doesn't like a challenge though, right?



Wait, so are you saying your Plex server is running on unRAID inside a KVM virtual machine?  Why aren't you running it as a Docker container?  It's super easy to set up, has no performance issues that I can comment on, and is a better way to host headless Linux applications.  VMs are ideal for non-Linux applications or where desktop applications are needed (Plex Media Server isn't a desktop app, but a server app).

 

Can you post the XML for your VM so I can examine it closer?



Sorry, I probably wasn't all that clear. The Plex server is running on unRAID as a plugin. I just haven't gotten around to converting it over to a Docker container, but it is on my list of things to do (so little time). Do you really think there is a performance gain to be had there?

 

I have a standalone Intel HTPC that I used to use as my primary Plex media player, connected to an A/V receiver and gigabit switch (running Win7 Pro). So that is my point of reference for Plex Home Theater playback. My Win8.1 and Win10 VMs are connected via HDMI to the same A/V receiver.

 

Here is my current Win8.1 XML:

 

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>windows81nvidiaMSIRAW</name>
  <uuid>cc411d70-4463-4db7-bf36-d364c0cdaa3c</uuid>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <iothreads>6</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.1'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
    <hap/>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='2' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup' track='guest'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='directsync' io='native'/>
      <source file='/mnt/disk/vmdisk/Image Media/Win8.1ProN.raw'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/disk/vmdisk/Images/en_windows_8_1_n_x64_dvd_2707896.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/disk/vmdisk/Images/virtio-win-0.1-100.iso'/>
      <target dev='hdd' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:46:29:be'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=pcie.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:12.0,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:12.2,bus=root.1,addr=00.2'/>
  </qemu:commandline>
</domain>

 

Thanks for all the focus on KVM; it makes the unRAID system pretty amazing at this point.

  • 2 weeks later...

Please report back if you test this and let me know if you see an improvement in your virtual machines!!

I just tried this out.  I created a new q35 Mythbuntu VM with a 10GB raw image on an SSD, plus a 750GB laptop drive. I used MC to copy a 4GB MPG from drive to drive before and after the tweaks. There was no change in write performance to the image on the SSD, and when I added the laptop drive its write performance was cut in half.

 

Also, a while back I made the KVM plugin editor add <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'> automatically when qemu:commandline arguments are detected, so you don't have to add that to the first line yourself.

 

Edit: the shutdown worked fine in Ubuntu with the tuning.

  • 2 weeks later...


Moving Plex over to a Docker container and then pinning it to the two CPUs that are not used by the Win8.1 VM has made a huge difference. I have family members who live overseas and regularly watch or sync from my Plex server. Plex transcoding was maxing out 6+ cores of the 8-core AMD FX-8350, and that did not make the Windows VM happy. I have done the same with SAB but limited it to a single core. All running smoothly now.
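For anyone wanting to do the same on the VM side, keeping the guest's vCPUs off the cores reserved for containers is done with <cputune> pinning in the domain XML (the core numbers here are illustrative):

```xml
<vcpu placement='static'>2</vcpu>
<cputune>
  <!-- pin the guest to host cores 2-3, leaving 0-1 free for Docker and the host -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
</cputune>
```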

  • 2 months later...

I did this for a newly created Windows 10 VM with OVMF, and now that the install is finished (which took upwards of an hour), I'm seeing disk usage sitting at 100% in Task Manager.

 

<domain type='kvm' id='24' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Aaron</name>
  <uuid>febf4a3b-0c67-229e-b068-7d19e091f514</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>1</vcpu>
  <iothreads>1</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='1' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='directsync' io='native'/>
      <source file='/mnt/disks/Samsung840Pro/Aaron/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/cache/appdata/iso/Windows10_InsiderPreview_x64_EN-US_10130.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/cache/appdata/iso/virtio-win-0.1.96.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:89:2a:9a'/>
      <source bridge='virbr0'/>
      <target dev='vnet3'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Aaron.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk2.x-data-plane=on'/>
  </qemu:commandline>
</domain>

