SSD passthrough: what's a QEMU HARD DISK?


pervin_1


I have passed through 2 Samsung 860 EVOs to an Ubuntu 18.04 VM running on Unraid:
/dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_XXXXXXXXXXXXXX
/dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_XXXXXXXXXXXXXX
Everything is working as intended, to be honest - I set up software RAID0 and I'm pretty happy!
Today I ran the SMART info and it shows both of my disks as QEMU HARD DISK - wasn't this supposed to say Samsung here?
I have attached 2 screenshots. I would greatly appreciate any help or advice!
 

The hypervisor creates a virtual hard disk and forwards all disk accesses from the VM to the physical disk.

But at the same time, it intercepts some of the accesses made by the virtual machine - in this case it lets the virtual machine see a normalized drive instead of the actual hardware identifiers. A big reason for this is so that you can replace the physical hardware without the virtual machine noticing.

You don't want applications in the virtual machine using the serial number of the real disk as a hardware lock and then refusing to operate because you have moved the VM to another host.
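This normalized identity is visible in how such a disk is typically defined in libvirt-style XML (the format Unraid uses for its VMs). A minimal sketch, assuming a raw block device passed by its by-id path - the serial value here is a made-up example, and QEMU presents its own generic model string to the guest regardless:

```xml
<!-- Hypothetical libvirt <disk> entry for a passed-through block device.
     The guest sees a generic "QEMU HARDDISK" model, but a <serial> can be
     pinned so software in the VM has a stable identifier even if the
     underlying hardware is later swapped. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_XXXXXXXXXXXXXX'/>
  <target dev='vdb' bus='virtio'/>
  <serial>my-stable-serial-001</serial>
</disk>
```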

8 minutes ago, pwm said:

The machine is creating a virtual hard disk and will then forward all disk accesses from the VM to the physical disk. […]

I see now, so this is totally normal behavior - I thought there was something wrong with my setup. Does it mean there will always be some overhead involved with a disk passthrough, meaning I will never achieve raw disk performance in a VM versus on physical hardware?

I guess there should be a 10-15% performance loss during "emulation", and more latency since there is data forwarding involved? Because I was expecting around 900-1000 MB/s sequential read and write in software RAID0 with 2x Samsung 860 EVO 500GB SSDs, but instead I get at most ~750-800 MB/s or less, which is still amazing, to be honest - can't complain here.

 

Thank you for the reply, I appreciate it!!! 


For raw performance, it's the host controller you need to pass through, in which case the VM will directly access the actual disks connected to that controller.

 

It's quite common to give the VM a virtual host controller and then your configuration file maps one or more disk images or physical disks to show up on that virtual host controller.
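In libvirt XML terms, that mapping might look something like the sketch below - a virtual SCSI controller plus a physical disk attached to it. The controller model and by-id path are illustrative assumptions, not a tested Unraid configuration:

```xml
<!-- Sketch of the mapping described above: the VM gets a virtual
     host controller (virtio-scsi here), and the config attaches a
     physical disk so it shows up on that controller inside the guest. -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_XXXXXXXXXXXXXX'/>
  <target dev='sda' bus='scsi'/>
</disk>
```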

 

I'm not really sure exactly how much performance you lose from having a virtual host controller - in theory the loss should be extremely small. But it depends on the latency of the host: if the host doesn't have resources available to service disk accesses instantly, the VM will see lower performance, since the CPU core(s) inside the VM depend on the response time of the CPU core(s) of the host machine.

 

VMware custom-adapted a host OS just to minimize the latency when handling disk and network accesses arriving through emulated host controllers or NICs. In the end, a lot will depend on what your unRAID host OS is busy doing.

10 minutes ago, pwm said:

For raw performance, it's the host controller you need to pass through, in which case the VM will directly access the actual disks connected to that controller. […]

It makes sense. I saw a lot of instructions about passing through disks, GPUs and PCI controllers by editing the XML settings, but how do I pass through my 2 onboard SATA controllers? It's really important for me in terms of latency and performance, and also for making sure the guest OS properly sees the disks as Samsung SSDs, so I can issue proper TRIM commands and monitor them with smartctl. Right now Ubuntu sees them as QEMU Hard Disks, and SMART does not show all the values I need - some values are not supported. I searched here, but I could not find it. Any links you would like to share with me, please? Thank you again!!!
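For reference, controller passthrough of this kind is usually expressed in libvirt XML as a PCI <hostdev> entry. A minimal sketch, assuming the SATA controller sits at PCI address 0000:00:17.0 - that address is purely an example; you would find the real one with `lspci -nn` on the host and confirm the controller sits in its own IOMMU group first:

```xml
<!-- Example PCI passthrough of an onboard SATA controller.
     WARNING: every disk attached to this controller goes to the VM
     with it, so make sure no Unraid array disks hang off it. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x17' function='0x0'/>
  </source>
</hostdev>
```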


There are other people on the forum who are better suited to help you with passing through your host controllers.

 

Remember that unRAID will not see your SSDs, so it will be up to you to set up some tool to send mail or raise another alert in case the SMART data starts to indicate trouble.

 

It isn't impossible that you might get TRIM to work even with a virtual host controller. I know TRIM from the VM can be used to shrink virtual disk images on the host, so it isn't unlikely that TRIM can also reach the actual SSD flash blocks.

 

The following link might be a starting point for getting TRIM to work:

https://superuser.com/questions/646559/virtualbox-and-ssds-trim-command-support
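That link covers VirtualBox; for a QEMU/libvirt setup like Unraid's, one possible approach (a sketch under the assumption that the disk is attached via virtio-scsi - not a verified Unraid recipe) is to enable `discard='unmap'` on the disk's driver so guest TRIM commands are passed down:

```xml
<!-- Possible way to let guest TRIM reach the SSD under QEMU/libvirt:
     attach the raw block device on a virtio-scsi bus with discard
     enabled. Whether the physical flash blocks actually get trimmed
     still depends on the whole stack below. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source dev='/dev/disk/by-id/ata-Samsung_SSD_860_EVO_500GB_XXXXXXXXXXXXXX'/>
  <target dev='sda' bus='scsi'/>
</disk>
```

Inside the guest you could then check discard support with `lsblk -D` and test with `sudo fstrim -v /`.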


Archived

This topic is now archived and is closed to further replies.
