SSD Write performance



Hi Guys,

 

I've noticed for a while now that write speeds aren't great from inside Windows VMs. When I say 'aren't great', I mean 150MB/s out of a possible 550MB/s from my dedicated VM SSD. Obviously not slow, but I feel like I'm missing something in my config that's hurting write speeds. Read speeds seem to be fine.

The same SSD was running at 550MB/s write when used as a 'normal' drive in my old workstation, so I'm pretty sure it isn't the drive and it's something to do with the VM stack.

 

I get the same write results when using a raw img file for the HDD (as set up by the GUI) and also when passing through the entire drive (current setup).
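For anyone wanting to rule out the obvious first, it's easy to confirm from the host console that the by-id path used in the XML resolves to the disk you expect (the device id here is mine, taken from the XML below; swap in your own):

```shell
# Resolve the stable by-id path to the kernel device node (e.g. /dev/sdf).
# The id below is an assumption taken from my own setup.
DEV=/dev/disk/by-id/ata-SanDisk_SDSSDX240GG25_125095403047
if [ -e "$DEV" ]; then
  RESOLVED=$(readlink -f "$DEV")
else
  RESOLVED="device not present on this host"
fi
echo "$RESOLVED"
```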

 

Here is a screenshot of a drive benchmark; not sure what's going on with 4k reads/writes:

 

[screenshot: drive benchmark results]

 

I'm using the 0.1.109 virtio driver set.

 

XML is here:

 

<domain type='kvm' id='21' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

  <name>Office PC</name>

  <uuid>365b72a7-dff3-3205-24ce-3fbe70d9521b</uuid>

  <metadata>

    <vmtemplate name="Custom" icon="windows.png" os="windows"/>

  </metadata>

  <memory unit='KiB'>8388608</memory>

  <currentMemory unit='KiB'>8388608</currentMemory>

  <memoryBacking>

    <nosharepages/>

    <locked/>

  </memoryBacking>

  <vcpu placement='static'>8</vcpu>

  <cputune>

    <vcpupin vcpu='0' cpuset='8'/>

    <vcpupin vcpu='1' cpuset='9'/>

    <vcpupin vcpu='2' cpuset='10'/>

    <vcpupin vcpu='3' cpuset='11'/>

    <vcpupin vcpu='4' cpuset='12'/>

    <vcpupin vcpu='5' cpuset='13'/>

    <vcpupin vcpu='6' cpuset='14'/>

    <vcpupin vcpu='7' cpuset='15'/>

  </cputune>

  <resource>

    <partition>/machine</partition>

  </resource>

  <os>

    <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>

    <boot dev='cdrom'/>

    <boot dev='hd'/>

    <bootmenu enable='yes' timeout='3000'/>

  </os>

  <features>

    <acpi/>

    <apic/>

    <hyperv>

      <relaxed state='on'/>

      <vapic state='on'/>

      <spinlocks state='on' retries='8191'/>

    </hyperv>

  </features>

  <cpu mode='host-passthrough'>

    <topology sockets='1' cores='8' threads='1'/>

  </cpu>

  <clock offset='localtime'>

    <timer name='hypervclock' present='yes'/>

    <timer name='hpet' present='no'/>

  </clock>

  <on_poweroff>destroy</on_poweroff>

  <on_reboot>restart</on_reboot>

  <on_crash>restart</on_crash>

  <devices>

    <emulator>/usr/bin/qemu-system-x86_64</emulator>

    <disk type='block' device='disk'>

      <driver name='qemu' type='raw'/>

      <source dev='/dev/disk/by-id/ata-SanDisk_SDSSDX240GG25_125095403047'/>

      <backingStore/>

      <target dev='hdc' bus='virtio'/>

      <alias name='virtio-disk2'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>

    </disk>

    <disk type='file' device='cdrom'>

      <driver name='qemu' type='raw'/>

      <source file='/mnt/user/Software/Windows10_InsiderPreview_x64_EN-GB_10565.iso'/>

      <backingStore/>

      <target dev='hda' bus='sata'/>

      <readonly/>

      <alias name='sata0-0-0'/>

      <address type='drive' controller='0' bus='0' target='0' unit='0'/>

    </disk>

    <disk type='file' device='cdrom'>

      <driver name='qemu' type='raw'/>

      <source file='/mnt/user/Software/virtio-win-0.1.109.iso'/>

      <backingStore/>

      <target dev='hdb' bus='sata'/>

      <readonly/>

      <alias name='sata0-0-1'/>

      <address type='drive' controller='0' bus='0' target='0' unit='1'/>

    </disk>

    <controller type='usb' index='0' model='ich9-ehci1'>

      <alias name='usb'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x7'/>

    </controller>

    <controller type='usb' index='0' model='ich9-uhci1'>

      <alias name='usb'/>

      <master startport='0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0' multifunction='on'/>

    </controller>

    <controller type='sata' index='0'>

      <alias name='ide'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>

    </controller>

    <controller type='pci' index='0' model='pcie-root'>

      <alias name='pcie.0'/>

    </controller>

    <controller type='pci' index='1' model='dmi-to-pci-bridge'>

      <alias name='pci.1'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>

    </controller>

    <controller type='pci' index='2' model='pci-bridge'>

      <alias name='pci.2'/>

      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>

    </controller>

    <controller type='virtio-serial' index='0'>

      <alias name='virtio-serial0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>

    </controller>

    <interface type='bridge'>

      <mac address='52:54:00:8c:80:b6'/>

      <source bridge='br0'/>

      <target dev='vnet0'/>

      <model type='virtio'/>

      <alias name='net0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>

    </interface>

    <serial type='pty'>

      <source path='/dev/pts/0'/>

      <target port='0'/>

      <alias name='serial0'/>

    </serial>

    <console type='pty' tty='/dev/pts/0'>

      <source path='/dev/pts/0'/>

      <target type='serial' port='0'/>

      <alias name='serial0'/>

    </console>

    <channel type='unix'>

      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Office PC.org.qemu.guest_agent.0'/>

      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>

      <alias name='channel0'/>

      <address type='virtio-serial' controller='0' bus='0' port='1'/>

    </channel>

    <hostdev mode='subsystem' type='pci' managed='yes'>

      <driver name='vfio'/>

      <source>

        <address domain='0x0000' bus='0x1d' slot='0x04' function='0x0'/>

      </source>

      <alias name='hostdev0'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>

    </hostdev>

    <hostdev mode='subsystem' type='pci' managed='yes'>

      <driver name='vfio'/>

      <source>

        <address domain='0x0000' bus='0x60' slot='0x00' function='0x0'/>

      </source>

      <alias name='hostdev1'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>

    </hostdev>

    <memballoon model='virtio'>

      <alias name='balloon0'/>

      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>

    </memballoon>

  </devices>

  <qemu:commandline>

    <qemu:arg value='-device'/>

    <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>

    <qemu:arg value='-device'/>

    <qemu:arg value='vfio-pci,host=0f:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>

    <qemu:arg value='-device'/>

    <qemu:arg value='vfio-pci,host=0f:00.1,bus=pcie.0'/>

  </qemu:commandline>

</domain>

 

Stuff I've tried so far...

 

  • When the HDD was a raw img file, I got the same write speeds with cache='writeback' and with io='native'
  • Passed through the entire SSD (same disk as used with the raw img file) and restored the same VM from a backup onto the new disk

 

If anyone is running W10 in a VM on an SSD, could you run some benchmarks and let me know if this 'issue' is across the board or just me?

Also, if anyone has any suggestions to tweak my setup for better write speeds, I'm quite happy to be a guinea pig and test stuff.

 

Thanks for your time guys,

Mark


When it was a raw img HDD file, the drive was formatted in ext4 and not part of the array. Mounted with the unassigned devices plugin.

 

Now that I pass through the entire disk, I assumed the Windows setup would control the whole disk, partition tables and all, and format it in NTFS?

 

EDIT: just noticed that for some reason it's still being reported that I have an ext4 partition on that drive? Is this normal, or should I scrap that partition in the terminal and restore Windows from its backup again?

 

[screenshot: Unassigned Devices showing an ext4 partition]


fdisk -l reports that the drive only has NTFS partitions, so that's just a red herring from the Unassigned Devices plugin it seems:

 

[screenshot: fdisk -l output showing NTFS partitions]

 

A disk rescan in the Unassigned Devices plugin now shows NTFS.

Definitely pointing the finger at either my VM config or the drivers?

 

The only thing I can think to try next is to get a dedicated SATA card for the SSD and pass that through, rather than the disk, to avoid using the virtio drivers?


Had similar issues long ago. Try something like this; it worked for me:

 

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <source dev='/dev/sdi'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

 

Remove the SATA controller and add a new SCSI controller using virtio-scsi:

 

  
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </controller>

 

Most likely you'll need to reinstall Windows due to missing drivers, but SSD performance is close to native without passing through the controller.

Link to comment


Thanks! I'll give this a shot before I potentially brick my H310 :)

 


Well, I'm a little closer... I managed to get my disk passed through using virtio-scsi. It took me a little while to work out that you can only boot from device 0 when using the SCSI controller, which meant booting the ISOs off USB rather than SATA or virtio-scsi. In case anyone needs it, this is the disk part of my XML with my Windows ISO, my virtio driver ISO and my virtio-scsi passed-through SSD:

 

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <source dev='/dev/disk/by-id/ata-SanDisk_SDSSDX240GG25_125095403047'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Software/Windows10_InsiderPreview_x64_EN-GB_10565.iso'/>
      <backingStore/>
      <target dev='hda' bus='usb'/>
      <readonly/>
      <alias name='usb-disk0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Software/virtio-win-0.1.109.iso'/>
      <backingStore/>
      <target dev='hdb' bus='usb'/>
      <readonly/>
      <alias name='usb-disk1'/>
    </disk>

 

Regardless of this change, I've only gained around 50MB/s on writes (better than nothing), but 4k writes are still way off what they should be:

 

[screenshot: benchmark after the virtio-scsi change]

 

If anyone else is running Windows on an SSD, img file or drive passed through, could you post some benchmarks to see if it's just me having disk performance issues?

 


@JonP

 

I'm hoping the SSD changes discussed earlier weren't part of 6.1.4?

No difference in SSD performance compared to 6.1.3.

 

Mark

Sadly no; it missed inclusion just before releasing 6.1.4. It has already been tested as part of our 6.2 internal beta, but we didn't have time to backport the change. If you know your SSD drive letters, you can try this right now:

 

echo deadline > /sys/block/sdX/queue/scheduler

 

Change sdX to be your SSD. Do this for each SSD in the system.  You do not have to stop the array to do this.  Test a write speed to cache before and after the change.
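If you want to see what that write does before touching a live disk, here is the same operation against a throwaway mock of the sysfs layout (the mktemp tree is a stand-in; on the server you target /sys/block/sdX/queue/scheduler directly, as above):

```shell
# Demo of the scheduler switch against a mock sysfs tree, so it can be run
# without root. On a real system the target is /sys/block/sdX/queue/scheduler.
SYS=$(mktemp -d)
mkdir -p "$SYS/block/sdf/queue"
echo 'noop [cfq] deadline' > "$SYS/block/sdf/queue/scheduler"  # pretend current state
echo deadline > "$SYS/block/sdf/queue/scheduler"               # the actual switch
AFTER=$(cat "$SYS/block/sdf/queue/scheduler")
echo "scheduler is now: $AFTER"
rm -rf "$SYS"
```

(On real sysfs the read-back shows the active scheduler in brackets, e.g. `noop [deadline] cfq`; the mock file just holds what was written.)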


Just thought I'd submit an additional data point for you: I didn't really notice much improvement after executing that command. But then, I never really noticed a write performance problem to begin with.  ;)

 

I'm running unRAID 6.1.3 Pro, and my cache drive is a Samsung 840 EVO 250GB connected to an M1015 flashed with IT firmware.

 

root@nas:/mnt/cache# dd if=/dev/zero of=/mnt/cache/dump.me bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 7.63911 s, 562 MB/s
root@nas:/mnt/cache# rm dump.me
root@nas:/mnt/cache# echo deadline > /sys/block/sdf/queue/scheduler
root@nas:/mnt/cache# dd if=/dev/zero of=/mnt/cache/dump.me bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 7.52125 s, 571 MB/s
root@nas:/mnt/cache#
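One caveat I should flag with my own numbers: dd without a sync flag can partly measure the page cache rather than the drive. A variant with conv=fdatasync (the small size and /tmp are placeholders here) makes dd wait for the data to actually reach the disk before reporting:

```shell
# Same kind of test, but conv=fdatasync forces the data out to the disk before
# dd reports its speed. Point TARGET at a path on the SSD under test and raise
# count for a real run; the values below are demo placeholders.
TARGET=${TARGET:-/tmp}
SUMMARY=$(dd if=/dev/zero of="$TARGET/dump.me" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "$SUMMARY"
rm -f "$TARGET/dump.me"
```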

 

-A


I'm trying some custom bits in the XML but they keep getting stripped out by something?

 

    <controller type='scsi' index='0' model='virtio-scsi' iothread='io1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>

 

and this iothread object added at the end of my XML:

 

    <qemu:arg value='-object'/>
    <qemu:arg value='iothread,id=io1'/>

 

iothread='io1'

seems to get removed from the SCSI controller when I save the XML, or when I go back in to view it?

Also, a SATA controller gets added back in, even if I take it out of the XML, each time I update it.

 

From what I've read, everything in a VM shares a single I/O thread, but these additions "should" give the disk its own.
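For reference, newer libvirt can apparently express this natively instead of via qemu:commandline. A sketch based on my reading of the libvirt docs (element placement assumed, and it may need a newer libvirt than the 1.2.18 we're on):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- assumed placement: top level, alongside <vcpu> -->
  <iothreads>1</iothreads>
  <devices>
    <!-- bind the virtio-scsi controller's I/O to iothread 1 -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>
  </devices>
</domain>
```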

 

Where do these XML files get saved? I might need to make changes in Notepad++ and upload them by hand.
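If it's the web form doing the stripping, round-tripping the XML through virsh might dodge it. A sketch, assuming virsh is reachable from the unRAID console (the guard just reports when it isn't):

```shell
# Export the domain XML, edit it by hand, then let libvirt itself validate and
# save it back with 'virsh define'. "Office PC" is the domain name from above.
if command -v virsh >/dev/null 2>&1 \
   && virsh dumpxml "Office PC" > /tmp/officepc.xml 2>/dev/null; then
  # ...edit /tmp/officepc.xml in your editor of choice, then:
  virsh define /tmp/officepc.xml
  RESULT="defined from edited XML"
else
  RESULT="libvirt domain not reachable here (demo only)"
fi
echo "$RESULT"
```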

 

 


Ok, making (tiny!) progress...

 

Somehow I managed to get the changes to stick for one boot (they were reverted the next time, as normal) but I got a new error message:

 

"got wrong number of IOThread pids from QEMU monitor. got 1, wanted 0"

 

Seems to be a bug which has been fixed in libvirt 1.2.21; at the moment we are on 1.2.18...

 

Bug discussion: https://www.redhat.com/archives/libvir-list/2015-October/msg00424.html

1.2.21 changelog: https://libvirt.org/news.html

 

Any chance this can be updated for 6.2? If not, is there a way to upgrade manually?

 

 


We are updated on libvirt for 6.2.

 

;D

 

Just a case of waiting for 6.2 before I start trying to change things again I guess!

 

Question though... is the XML "checking" functionality part of libvirt, or is it something on top that's been implemented for unRAID? It could be problematic going forward when all these extra switches are available in the config but get stripped out when the XML is saved and "checked".

 

PS: I don't want to sound as if I'm complaining with my posts; I love the functionality KVM in unRAID brings to the table. I'm just a geek and want to squeeze every last bit of performance out of my hardware!

 

Mark

