SSD Write performance



We have updated libvirt for 6.2.

 

;D

 

Just a case of waiting for 6.2 before I start trying to change things again I guess!

 

Question though... Is the XML "checking" functionality part of libvirt, or is it something implemented on top for Unraid? It could be problematic going forward when all these extra switches are available to use in the config but then get stripped out when you save the XML and it's "checked".

 

When you create a VM using XML Expert or edit an existing VM in XML Edit mode, the changes you make may not always stick.  What happens when you click "Update" or "Create" in this mode is the XML you've assembled/customized is submitted to Libvirt to update the VM definition.  It then interprets the XML and adds/removes what is necessary.  For example, each individual <device> has an <address>, but we don't really want to be editing those, because libvirt will fill those in for us automatically.  That's why when I provide advice to folks to edit XML for things like vdisks, I tell them to just delete the entire <address> line from that element, because libvirt will do that work for us.
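For example (the path and PCI slot below are just placeholders), a typical vdisk element looks something like this, and the <address> line is the one you can safely delete:

<disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <!-- delete this whole line; libvirt regenerates it when the VM is defined -->
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>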

 

Certain elements that you don't add will always get added by libvirt, as they are required for the machine type of the VM (expected).  Some elements you add won't stick because they may expect other "sister elements" in the XML that aren't present.  Modifying libvirt XML is not technically something they support.  This is literally written at the top of the XML files libvirt stores (we don't expose you to this):

 

WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE

OVERWRITTEN AND LOST.

 

So yeah, supporting XML edits is not something we are going to do, but it is something we use to test additional functionality for folks like yourself who want their systems to be guinea pigs, to see if we can supercharge their guinea legs and make them run faster.

 

PS: I don't want to sound as if I'm complaining with my posts. I love the functionality KVM in Unraid brings to the table; I'm just a geek and want to squeeze every last bit of performance out of my hardware!

 

Mark

 

As am I!  I totally appreciate where you're coming from.  I can tell you that I did experiment with IO threads to improve vdisk performance a while back and it didn't seem to help at all.  That said, it's been a while and perhaps I need to give that another go.  Also, are your virtual disks pointed directly through /mnt/cache or do you go through /mnt/user?
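(For anyone following along: the difference is only the path in the disk's <source> element. As I understand it, /mnt/user goes through the user share layer, while /mnt/cache points straight at the cache device's filesystem, so the latter is usually the quicker path for a vdisk. Paths below are placeholders.)

      <!-- via the user share layer -->
      <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
      <!-- direct to the cache filesystem -->
      <source file='/mnt/cache/domains/MyVM/vdisk1.img'/>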

Link to comment

Also, are your virtual disks pointed directly through /mnt/cache or do you go through /mnt/user?

 

Neither,

I'm using an unassigned device outside of the array for my Windows VM and passing through the entire disk. The only reason for this was that it's been part of my tinkering to see if it made any difference to performance (it didn't!). So, at the moment this is my disk:

<disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' discard='unmap'/>
      <source dev='/dev/disk/by-id/ata-SanDisk_SDSSDX240GG25_125095403047'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

 

If it's libvirt that gives the XML a once-over before saving, removing things it doesn't deem necessary, I'm thinking the new libvirt version has the necessary changes to understand the IOTHREAD stuff, as quite a lot seems to have been added since the current version (1.2.18), e.g.:

 

1.2.21:
qemu: Fix qemu startup check for QEMU_CAPS_OBJECT_IOTHREAD 
conf: Optimize the iothreadid initialization,
qemu: Check for niothreads == 0 in qemuSetupCgroupForIOThreads,
qemu: Use 'niothreadids' instead of 'iothreads'
conf: Refactor the iothreadid initialization

1.2.19:
api: Adjust comment for virDomainAddIOThread
qemu: Add check for invalid iothread_id in qemuDomainChgIOThread 
conf: Check for attach disk usage of iothread=0
api: Remove check on iothread_id arg in virDomainPinIOThread

 

Link to comment

Try this.  Follow this guide, but instead of locating the Blu-ray drive in the step using lsscsi, locate your SSD:

 

http://lime-technology.com/forum/index.php?topic=35504.msg331448#msg331448

 

You'll have to load the virtio driver for this to work, and it's in a different folder (probably called vioscsi or virtio-scsi, off the root of the virtio ISO).  This is actually passing the device through to a virtual SCSI controller, but it will expose the device to your guest natively (no QEMU disk anymore).
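The end result in the XML is a <hostdev> entry attached to a virtio-scsi controller, roughly like this (the scsi_host number and the bus/target/unit values come from the lsscsi output for your SSD, so treat the ones below as placeholders):

<controller type='scsi' index='0' model='virtio-scsi'/>
    <hostdev mode='subsystem' type='scsi'>
      <source>
        <adapter name='scsi_host1'/>
        <address type='scsi' bus='0' target='2' unit='0'/>
      </source>
    </hostdev>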

Link to comment

Try this.  Follow this guide, but instead of locating the Blu-ray drive in the step using lsscsi, locate your SSD:

 

http://lime-technology.com/forum/index.php?topic=35504.msg331448#msg331448

 

You'll have to load the virtio driver for this to work, and it's in a different folder (probably called vioscsi or virtio-scsi, off the root of the virtio ISO).  This is actually passing the device through to a virtual SCSI controller, but it will expose the device to your guest natively (no QEMU disk anymore).

 

So after much testing, I'm back to using a QEMU disk. :(

 

I got the drive to pass through using the link above, but Windows refused to complete the installation and gave a "Windows could not set the offline locale" error. Google gave the impression that this is usually related to a bad disk, but as I know the disk is good, I put it down to something not being quite right in the way libvirt is passing the disk through to the VM.

 

I'll hang on for 6.2 to see if the IOTHREAD stuff goes anywhere.

 

For reference, this is my current disk config:

 

<disk type='block' device='lun'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/sdd'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>

Link to comment

Another small update...

 

Got restless, so I decided to pass through my H310 (flashed with LSI firmware). The Windows installation didn't detect the SSD, regardless of which driver I tried in Windows setup.

Ubuntu saw the drive, and installation went through fine; however, on reboot, SeaBIOS doesn't seem to know how to boot from a passed-through HBA.

Tried passing through the option ROM, still no joy.

 

Back to a QEMU disk (again!).

Link to comment

Have exactly the same issue, but passing through a cheap ASMedia ASM1062 integrated on the motherboard. It works perfectly on the first boot, and I managed to boot a pre-installed Windows 10, but on every guest reboot SeaBIOS doesn't find the controller, forcing me to reboot the host.

 

Maybe ejecting the controller on Windows reboot could help. I never tried it, just an idea :) https://www.linuxserver.io/index.php/2013/09/12/xen-4-3-windows-8-with-vga-passthrough-on-arch-linux/. It helped in the past with GPU issues...

Link to comment

Have exactly the same issue, but passing through a cheap ASMedia ASM1062 integrated on the motherboard. It works perfectly on the first boot, and I managed to boot a pre-installed Windows 10, but on every guest reboot SeaBIOS doesn't find the controller, forcing me to reboot the host.

 

Maybe ejecting the controller on Windows reboot could help. I never tried it, just an idea :) https://www.linuxserver.io/index.php/2013/09/12/xen-4-3-windows-8-with-vga-passthrough-on-arch-linux/. It helped in the past with GPU issues...

 

I couldn't find a driver to get through the Windows installation; it never managed to detect a disk. Ubuntu found the drive, so the LSI driver must be in the kernel, but SeaBIOS doesn't see it as a bootable device, so it doesn't go anywhere.  I'm assuming I'd have had the same issue with booting even if the Windows installation had finished anyway!

 

I would try OVMF, but I don't get any video output when I use that BIOS :( . We'll see what 6.2 brings (hopefully) soon.

Link to comment

Since you are waiting anyway... I am testing different settings right now, so I thought I'd share them; maybe it helps to find the bottleneck/bug.

Be aware, I am no veteran on this topic, just posting my observations and personal conclusions (which may be wrong).

 

Depending on how brave you are with your "guinea pig'ing" and how often you are willing to do backups, you could go ahead and try different "cache" options in QEMU and Windows.

 

IBM: Best practice: KVM guest caching modes

In my case "unsafe" is the fastest, according to AS SSD...

Funny thing is, I get better results using "none" for cache than the default "writeback", as if there is some kind of overhead/bottleneck that slows things down when QEMU caches and waits for the ACK (O_SYNC issues?)...
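For reference, the cache mode is just the cache attribute on the disk's <driver> line; as far as I know libvirt accepts default, none, writethrough, writeback, directsync and unsafe (the path below is a placeholder):

<disk type='file' device='disk'>
      <!-- cache='none' bypasses the host page cache; 'unsafe' additionally ignores guest flush requests -->
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/mnt/cache/domains/MyVM/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
    </disk>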

 

Most SSDs only have basic power-loss protection: some kind of SSD-internal journal that ensures data integrity, but does not prevent losing the data in your cache in case of a sudden power loss.

Some Intel SSDs (320, 750) have additional capacitors that act as a little battery in case of a sudden power loss.

A battery-backed or flash-backed SATA controller may also add some "safety".

 

When, or if at all, the cache is actually written depends on many factors; QEMU alone has four options and the Windows guest has an additional three... so 12 possible combinations.

I disabled any cache flushing, so neither Windows ("Turn off write-buffer flushing") nor QEMU ("unsafe") ever tells the SSD to write the cached data to its flash and waits for the OK. I assume/hope that the SSD will do that when IO is low, which should be fairly often in IO-light desktop/gaming usage...

 

The problem is, caching itself makes benchmarking very questionable, since you never know if you are actually testing the performance of your hardware or just its ability to use its given cache... Maybe your host does not have enough resources to cache properly?

You should try using at least 3GB test files in AS SSD; at least I get results that are more usable that way, especially in a VM.

 

In addition to that, and since single/multithreading was mentioned, you may want to try out virtio-blk-data-plane (ftp://public.dhe.ibm.com/linux/pdfs/KVM_Virtualized_IO_Performance_Paper_v2.pdf).

According to that document (from 2013...), "normal" VirtIO had an IOPS cap of 140k IOPS. While that seems "enough" for a SATA SSD, it shows that VirtIO was not designed for high I/O.

I assume that, driven by enterprise demands (which are IO heavy), that option gets more love than the "legacy" virtio, because it seems that it is, in one way or another, capable of multithreading.

 

However, there are things I do not understand myself right now, so I may have come to false conclusions, which could explain why the following does not make sense:

  -  data-plane is intended for virtio-blk/virtio-scsi, but in my case it worked best with the normal virtio.

  -  caching:

          - disabling the host cache ("none") speeds things up, but I would guess the host cache should be the host's memory and therefore be much faster? (probably depends on disk/SSD speed)

          - most of all: the 4k random I/O numbers are highest (with my settings) when the guest only has very limited memory (512MB).

          - 1GB of memory halves the 4k reads, and 2GB+ then halves the 4k writes as well. (I can reproduce this with Win8, 10, Server 2012 and Server 2016 preview)

          - so if Windows runs out of memory to cache, and QEMU ignores/has no caching, the system gets faster. Sounds strange.

 

 

The cache settings increased my sequential performance (1.5x - 2x), while data-plane helped with 4k random I/O (1.5x - 3x, depending on guest memory); your results may vary.

my xml:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
...
<devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='unsafe' io='threads'/>
      <source file='/mnt/nvme/Rechner/Rechner_vDisk1.img'/>
      <backingStore/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
  ...
  ...
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.scsi=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.config-wce=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>

Link to comment

Since you are waiting anyway... I am testing different settings right now, so I thought I'd share them; maybe it helps to find the bottleneck/bug.

Be aware, I am no veteran on this topic, just posting my observations and personal conclusions...

 

Hi dAigo,

 

Nice to see that someone else is experiencing the same as me; I was starting to wonder if I was the only one who'd noticed the poor disk performance! I went down the cache-setting rabbit hole a while ago, and since then I've been down the IOTHREAD and HBA passthrough rabbit holes too. Yet to find a rabbit!

 

From what I understand (I'm open to being corrected if this is wrong!), it seems a lot of the VM hardware shares a single IOTHREAD, which is why when you increase memory/other hardware on a VM, you notice a hit on disk performance. It also makes me wonder what else is being affected, but I think with the disk being IOP-heavy, that's going to be the most noticeable.

 

The answer seems to be to create more IOTHREADs and assign hardware to those threads. 

 

To create the new IOTHREAD:

    <qemu:arg value='-object'/>
    <qemu:arg value='iothread,id=io1'/>

 

To tell the scsi controller to use that thread:

<controller type='scsi' index='0' model='virtio-scsi' iothread='io1'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </controller>

 

Doing that threw up an error: "got wrong number of IOThread pids from QEMU monitor. got 1, wanted 0".

It seems it's a known issue in the version of libvirt (1.2.18) used in unRAID 6.1.4. jonp has hinted that libvirt gets an upgrade in 6.2, so my testing shall start up again on that release. No doubt I'll hit some more roadblocks, but there's not much more I can test without being able to use a different version of QEMU and libvirt.

 

I had toyed with the idea of getting a dedicated HBA for use with my main VM, but then I reminded myself that it's a VM, and the whole point is to be able to share hardware, not have dedicated bits!

 

From discussions on other forums, it does seem that being able to use additional IOTHREADs will resolve this, so fingers crossed that libvirt gets a version bump to 1.2.21 in the next unRAID version :)
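For reference, once a newer libvirt is in place, the native spelling (rather than qemu:arg) should look roughly like this for a virtio-blk disk; I haven't been able to verify it on unRAID yet, so treat it as a sketch (there should also be an equivalent iothread setting for the virtio-scsi controller):

<domain type='kvm'>
  ...
  <!-- define the IO thread(s) at the domain level -->
  <iothreads>1</iothreads>
  <devices>
    <disk type='file' device='disk'>
      <!-- pin this virtio disk to IO thread 1 -->
      <driver name='qemu' type='raw' cache='none' iothread='1'/>
      <source file='/mnt/cache/domains/MyVM/vdisk1.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>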

 

 

Link to comment
From what I understand (I'm open to being corrected if this is wrong!), it seems a lot of the VM hardware shares a single IOTHREAD, which is why when you increase memory/other hardware on a VM, you notice a hit on disk performance. It also makes me wonder what else is being affected, but I think with the disk being IOP-heavy, that's going to be the most noticeable.

 

The answer seems to be to create more IOTHREADs and assign hardware to those threads. 

That seems correct to me, which is why I not only mentioned cache settings, but also the "dataplane" setting.

It seems it's a known issue in the version of libvirt (1.2.18) used in unRAID 6.1.4. jonp has hinted that libvirt gets an upgrade in 6.2, so my testing shall start up again on that release. No doubt I'll hit some more roadblocks, but there's not much more I can test without being able to use a different version of QEMU and libvirt.

Look at the document behind the "multithread" link in my post (ftp://www.linux-kvm.org/images/a/a7/02x04-MultithreadedDevices.pdf).

 

It seems to me that data-plane uses 1 IOThread per device as a default (page 9).

I was trying to point out that you could add threads that way until your bug is fixed.

IOThread CPU affinity, x-data-plane=on, 1:1 mode

Classic -device virtio-blk-pci,x-data-plane=on:

- 1 IOThread per device

- Makes sense with fewer devices than host CPUs

So... in theory... if you have two 8-core CPUs (2x 4+4), the best performance would be to create a VM with 16 separate disks (raw images on your SSD) and create a RAID 0 in the VM.

That way, QEMU could use 16 threads, which should (theoretically) be distributed across all 16 "cores" and therefore run at the same time... Well, until you run from the I/O bottleneck of the disk into the I/O bottleneck of your cache/host/physical hardware. (A 16-disk software RAID in a VM does sound rather IO/interrupt heavy, maybe tone it down^^)

 

Something like that is what I think happened with my tests (just a gut feeling, based on my current knowledge).

I get almost twice as many 4k random IOs when I add data-plane. But because that only helps with the IO of the virtual disk, as soon as I hit the single cache/memory "thread", there is a bottleneck. Therefore, if I remove/ignore as many memory/cache IOs as possible (by "ignoring" the "flush" requests/IOs of the guest), the disk performance goes up even higher.

The Windows driver won't let me deactivate the write cache, otherwise I would test that.

In passthrough I had "near bare-metal" performance, which would almost double the sequential performance. I am with you on "why passthrough in a virtual system", but depending on the hardware/costs, passthrough is the better option (see GPU passthrough: spending 10k+ on GRID GPUs + VMware to get a shared vGPU seems unreasonable for gaming ::) )

 

That was obviously not the case with mechanical drives, but SSDs exposed those bottlenecks. High IO requirements and affordable high-IO hardware (PCIe/NVMe flash storage) are pushing the IO development in QEMU.

From what I understand, they are/were trying the "data-plane" approach with different devices, such as network/memory, because it seems everything is limited by I/Os -> threads.

 

Oh, by the way, there are version .110 VirtIO drivers available. I did not notice any change in performance, but who knows...

 

So... before I get too far off topic :-[ , you should try "data-plane" to see if it changes anything.

 

The rest is off topic, just my general thoughts 8)

 

If it's not a general unRAID/QEMU issue (most people seem OK), maybe it's your hardware config? (yeah, that rabbit hole ;) )

 

While Xeons are great at what they do, I don't know how well unRAID/QEMU works with dual-socket CPUs. That may introduce additional overhead/cache issues, and that NUMA stuff gets really complicated. You pinned your vCPUs to your second CPU, but what about the memory? Can QEMU make sure that the memory attached to your second CPU is used/prioritized? What about the host cache? On which cores are the QEMU threads running?

Xeons are great for multithreaded applications, but even if you could separate the IO into ONE additional thread, that thread would run on a 2.2GHz core... an i7 at 3.8-4.5GHz may still be faster. Depending on your workload, it may make no difference to create one or two additional threads; you would need a larger scale.

That's why I love the new Skylake... more/enough PCIe lanes for storage, more/enough memory for VMs, but still a very high GHz-per-core ratio.

The only things missing are ECC memory and dual CPU, but 8 "cores" are usually "enough".

 

Maybe your hardware has badly "optimized" Linux drivers and there is an issue with the host<->device communication? (Could also be the reason for the passthrough issues...)

Maybe because it's a Dell controller with an LSI firmware, on an HP mainboard? The LSI 2008 is a good controller, but rather old.

The LSI SAS2008 seems to have some issues with TRIM/SSDs. Do you use the IT or IR firmware? Did you try another controller (onboard maybe)? Maybe the write cache on the controller is slow/damaged?

 

HP is great for workstation/server hardware, but they are a pain when it comes to compatibility. At work, we had a case where two external tape drives were damaged (a firmware update/reset didn't help) and replaced, until HP pointed at the "IBM" SAS controller (an LSI2008...) and said we should change it to one listed in the compatibility list of the tape drive (an HP with an LSI2008...). We did that and never had an issue again.

We usually try to avoid mixing HP with other vendors; if the customer does not have the money for "all HP", we use "no HP". That's why Supermicro is getting so popular: they mix very well with other vendors.

Link to comment
  • 2 weeks later...

Had a chance to experiment again over the last few days. The TL;DR is that I've not managed to make any improvements.

 

I played around with the virtio-blk controller and tested the x-data-plane settings. Zero change.

    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.scsi=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.config-wce=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>

 

I also tested some extra bits on the virtio-scsi controller to increase the number of queues, but again, something is stripping off the extra parameters I've added:

<controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/>

(num_queues='8' gets removed before the VM boots.)

 

Very frustrating, as I'd much prefer an error stating "no, you can't do that, you idiot" rather than something silently stripping out bits it feels aren't necessary! Even manually editing the XML in '/etc/libvirt/qemu' removes any extra bits I add. It would be nice if I could turn off that 'feature', as it's quite annoying.
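(Worth noting: num_queues is the QEMU property name; libvirt's own spelling, as far as I can tell, is a queues attribute on a <driver> sub-element of the controller, which might be why the bare attribute gets stripped. Something like this, untested on unRAID:)

<controller type='scsi' index='0' model='virtio-scsi'>
      <!-- libvirt's multiqueue spelling; QEMU sees it as num_queues -->
      <driver queues='8'/>
    </controller>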

 

With the libvirt update coming in 6.2, I'm hoping the iothread (previously x-data-plane) options can be added to the blk controller and won't get stripped out (can you tell I'm bitter?).

Other than that, I think the ultimate performance fix would be to find a cheap-ish 2- or 4-port SATA3/SAS controller that can achieve the full 6Gb/s throughput and works as a boot device under SeaBIOS or OVMF. Does anyone know of such a card?

 

jonp, are you able to expand on what updates are included on the VM side in 6.2? Just a libvirt update, or are QEMU, SeaBIOS and OVMF updated as well? (Not sure if they all come hand in hand or not.)

Link to comment
  • 3 weeks later...

After reading a bunch of these posts and threads about the disk IO issues with Windows and KVM, I figured I would run some benchmarks.

 

It's almost impossible to run an apples-to-apples comparison, especially when running a btrfs cache pool. What surprised me the most was how bad the AS SSD benchmarks were on all the KVM VMs; they suffered far worse than the CrystalDiskMark benchmarks.

 

Windows Native

OS: Windows 10 Pro x64

 

[Screenshots: AS SSD and CrystalDiskMark results, Toshiba SSD, native Windows]

 

 

version: unRaid 6.1.6

vDisk: btrfs cache pool

OS: Windows 10 Pro x64

 <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/VM/win10Test/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>

 

[Screenshots: AS SSD and CrystalDiskMark results, VM vdisk on btrfs cache pool]

 

version: unRaid 6.1.6

vDisk: Toshiba SSD - XFS - unassigned devices

OS: Windows 10 Pro x64

    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/TOSHIBA_THNSNJ256GCST_935S101JTSXY-part1/win10XFSTest/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>

 

 

[Screenshots: AS SSD and CrystalDiskMark results, VM vdisk on Toshiba SSD (XFS, unassigned devices)]

 

 

 

Link to comment
  • 2 weeks later...

Bumping this up to the top with a question for fellow VM users. I noticed a few people were passing through Blu-ray/DVD drives this way, and wondered if the same approach would work for a hard drive or SSD, to try and increase the performance of passed-through SSDs.

 

eg, disk is listed as:

[1:0:2:0]    disk    ATA      SanDisk SDSSDX24 R211  /dev/sde 

 

so the passthrough in this case would look like this?:

<controller type='scsi' index='0' model='virtio-scsi'/> 
    <hostdev mode='subsystem' type='scsi'> 
      <source> 
        <adapter name='scsi_host1'/> 
        <address type='scsi' bus='0' target='2' unit='0'/> 
      </source> 
    </hostdev>

 

I briefly attempted this, but Windows setup was unable to detect any disks. I attempted to use both the blk and virtio-scsi drivers.

Link to comment

I have been searching the forum for a few days and I cannot find any real answers on whether or not a whole drive can be passed through.

 

We can pass through GPUs and USB controllers, but I cannot find a thread that specifically discusses passing through a single SSD as a boot drive.

Link to comment
  • 4 weeks later...

Pro tip:  instead of typing the path to where you want to create a vdisk in the webGui, just point it at /dev/disk/by-id/YOURDISKHERE

 

Yeah, that really does work...

 

jonp, would there be a way to move the contents of an existing vdisk image directly onto a disk, change the location to said disk, and boot up as normal? What's the config change and process, if it's possible? I have a 512GB SSD I'd like to use exclusively for that VM!
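(In principle, since the vdisk is a raw image, the contents could be copied block-for-block onto the SSD with something like dd or qemu-img convert, and the disk element switched from a file to the block device, along the lines of the sketch below. The paths and the disk ID are placeholders, and I haven't tried this myself.)

<!-- before: raw vdisk image -->
<disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/MyVM/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
    </disk>

<!-- after: the same raw data written onto the SSD, referenced by ID -->
<disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/YOURDISKHERE'/>
      <target dev='hdc' bus='virtio'/>
    </disk>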

Link to comment

Pro tip:  instead of typing the path to where you want to create a vdisk in the webGui, just point it at /dev/disk/by-id/YOURDISKHERE

 

Yeah, that really does work...

 

Will 6.2 give extra RAID-style options for the cache pool? I have 4x250GB right now and a RAID 5 style pool would be sufficient for me; RAID 10 is a bit of overkill IMHO.

Link to comment
  • 4 weeks later...

Small progress.

 

Bought a new SSD and decided to give it another try.

 

After enabling a legacy boot option in the "BIOS" for the ASMedia ASM1061 controller, which comes integrated on my motherboard, I managed to pass it through and boot from it every time, without any issues after restarts (using SeaBIOS).

 

Performance is considerably better, but apparently the controller is limiting it to 200MB/s:

 

[Benchmark screenshots]

 

Any thoughts? It's still slower than I would expect from native Windows 10.

 

 

Link to comment

No <disk> section for the SSD (the only disk entry I actually have there is the virtio driver ISO):

 

  <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Others/Apps/Virtio_Drivers/virtio-win.iso'/>
      <backingStore/>
      <target dev='hdb' bus='usb'/>
      <readonly/>
      <alias name='usb-disk1'/>
    </disk>

 

I'm passing through the controller (I had to enable some legacy option in the BIOS, otherwise it was flaky):

 

    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=04:00.0,bus=pcie.0'/>
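(For reference, the libvirt-native equivalent of that qemu:arg pair would be roughly the <hostdev> below, using the same 04:00.0 address; I haven't checked whether the XML checker leaves it alone:)

<hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
    </hostdev>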

Link to comment

Interesting!

I always got the impression that booting from a passed-through controller in a VM wasn't possible!

I've tried it in the past and managed to install the OS, but then it didn't detect any boot devices. Granted, I did it with an HBA rather than a SATA controller though...

 

Did you have to do anything else in the XML or on the VM to get it to boot?

I may buy a cheap PCIe SATA3 card and have an experiment... hmmm!

Link to comment
