SSD performance/benchmark on VM



I've been looking into the poor passthrough SSD performance for a while now... the only way it seems to get near bare-metal performance is to use the virtio-scsi controller, pass the disk through as a LUN device, and set sgio to "unfiltered" so SCSI commands aren't bottlenecked by the hypervisor.

 

Unfortunately, at the moment:

a) it doesn't work with beta 21 of unRAID.

b) last time I tested, LUN passthrough on the virtio-scsi controller was only supported in Linux guests, not in Windows (this may have been fixed in the latest Windows virtio drivers; I've not checked in a while).

 

If anyone wants to test, this is the XML for the disk/controller (obviously, change the source dev to your own disk name):

 

<disk type='block' device='lun' rawio='no' sgio='unfiltered'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdg'/>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'/>
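
(On unRAID you can paste this into the VM's XML view; on a plain libvirt setup, "virsh edit <domain>" does the same. The target dev and the controller index just need to be unique within the guest.)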

 

Unfiltered SGIO isn't supported in the unRAID kernel, it seems, as I get an error when starting the VM: "Requested operation is not valid: unpriv_sgio is not supported by this kernel".

Removing the sgio option allows the VM to start, but then it's bottlenecked the same as every other passthrough method, and back to the poor performance.
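
For reference, this is the same disk stanza with the sgio attribute dropped, i.e. the variant that starts but runs at the filtered speed:

<disk type='block' device='lun' rawio='no'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/sdg'/>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>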

 

If anyone knows how to enable 'unpriv_sgio' in the unRAID kernel, I'm happy to test :)
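
A quick way to check the kernel before even starting the VM (a sketch, assuming the RHEL-style unpriv_sgio patch, which exposes a sysfs attribute under the block device's queue directory; mainline kernels don't carry it):

# Probe for the unpriv_sgio sysfs attribute that libvirt checks for.
# The path is an assumption based on the RHEL-style kernel patch;
# mainline kernels do not expose this attribute at all.
import os

dev = "sdg"  # the passed-through disk from the XML above
attr = "/sys/block/{}/queue/unpriv_sgio".format(dev)

if os.path.exists(attr):
    print(attr, "present: kernel supports unpriv_sgio")
else:
    print(attr, "missing: kernel lacks the unpriv_sgio patch")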


Ed had asked me to chime in here with our thoughts on this topic, so here I am!

 

We don't typically like to get involved in discussions relating to benchmarks or performance because they are mostly hardware-dependent. Our primary focus in testing vdisk performance is to ensure that normal computer use (opening apps, playing games, surfing the web) is unimpeded by virtualization overhead, and we feel unRAID does a great job at that. Benchmark tools are a great thing, but as many of you have discovered, the nature of virtualization and emulation allows for some weird results to pop up from time to time. For example, because the emulated storage controller that we use for VMs (virtio) uses system RAM for caching, your performance testing can vary greatly based on the amount of free RAM available on the system at the time of the test. Another issue is that benchmarks will sometimes write data to a disk and then read that same data back, and virtio is very efficient at recognizing when something being read from disk is still in RAM, so it can deliver unrealistic speeds compared to truly random user I/O.
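
For anyone who wants benchmark numbers that reflect the disk rather than the host's RAM, one knob worth knowing about (a sketch, not an official recommendation) is the cache attribute on the vdisk's <driver> element: cache='none' opens the backing store with O_DIRECT, so guest I/O bypasses the host page cache, and io='native' uses Linux native AIO:

<driver name='qemu' type='raw' cache='none' io='native'/>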

 

Virtualization in unRAID utilizes the KVM hypervisor and the QEMU emulation software.  There are plenty of articles discussing benchmark performance on this topic, but as many of you have also found, the information out there is pretty scattered, not concise, and not always current.  We are continuing to improve software documentation for unRAID in the wiki, but it will take some time for us to extend our documentation to cover all the open source components that are built into it as well.

 

All of that said, we are constantly seeking to push the limits of providing high-performance storage for virtual machines.  We have even purchased some PCIe SSDs ourselves to test and support NVMe on unRAID, and we are continuing to invest in newer and faster technologies to test unRAID against so that new system builders can use the latest components with confidence.  Just understand that, as with all new things, it takes time to implement, test, and validate, so in the meantime, we ask the community to continue having these discussions here for us to learn from.  There is lots of valuable information in this forum in the form of testing and posting results with various hardware setups.


I honestly thought that passing through an entire SSD would give almost bare-metal performance (as it does with GPUs)... So if performance is not that good, it might not be worth having a separate SSD, and I should just use the cache pool.

I only tested it once, but yes, performance will be "near" bare-metal (like a GPU: 95-99%).

 

Sorry to ask you again, just to clarify... you tested an SSD passthrough once and got 95-99% of bare-metal performance? Because if that's what you meant, that's absolutely perfect for me! (Using QEMU/KVM caching/improvements?)

 

Also, when you mentioned 50-70% of bare-metal performance, were you referring to a vdisk running on the cache pool? Did that percentage include the caching and other improvements QEMU/KVM offer?

 

Thanks!

