What is the current status of NVMe support?


dAigo


This is great stuff and something I am planning on doing tonight to experiment with. Thanks to both of you for outlining all this. If you have a sec, can you outline how you used the NVMe mount point during creation of the VM to host the OS on it?

 

Also, if you have any info on how to modify the go file, that would be appreciated. I didn't see any reference to it and am super noobing it here.

 

Edit: Figured out that the mount point will show up in the unRAID GUI and allow you to select it as the "primary vdisk location." I'm sure this was obvious to many, just not to me until I started experimenting.

 

Also, the go file can be modified via the console (CLI).

 

Change to the config directory:

 

cd /boot/config

 

type "vi go"

From there hit the "~" key

Move to the field you want to edit and hit "i"

Type in the following on two separate lines

"mkdir /mnt/nvme"

"mount /dev/disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da /mnt/nvme"

Hit "Esc" key

Then "wq!"

 

Now you have the go file set to auto-create the mount point and mount the drive on every boot. Took me a bit to figure out, and this is mostly to inform other noob fish.
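For anyone following along, here's the whole addition in one place as a sketch. The UUID below is the one from this post; substitute the value blkid reports for your own partition (the device name nvme0n1p1 is likewise just an example):

# Find the filesystem UUID of your NVMe partition (example device name)
blkid /dev/nvme0n1p1

# Append these two lines to /boot/config/go, substituting your own UUID
mkdir -p /mnt/nvme
mount /dev/disk/by-uuid/2d5e7ce0-41e6-47b5-80d2-70df40a8c1da /mnt/nvme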

Link to comment

Guys

 

I successfully passed through my 950 Pro NVMe to my Windows 10 VM, but I am not 100% sure I am seeing the transfer speeds I would expect. Generally, AS SSD benchmark runs produce 200-300MB/s sequential read/write; I would expect a lot more!

The virtio drivers are all installed, and so is the Samsung NVMe driver; however, I cannot see the Samsung NVMe controller in Device Manager anywhere.

 

Any ideas?

 

Here is my XML...

 

<domain type='kvm' id='8'>
  <name>MOHOME</name>
  <uuid>3b8bdaad-0e1d-6092-ddc7-f59b7de93e47</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISO/virtio-win-0.1.112.iso'/>
      <backingStore/>
      <target dev='hda' bus='virtio'/>
      <readonly/>
      <boot order='2'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-path/pci-0000:01:00.0'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <boot order='1'/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:ec:74:3b'/>
      <source bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/MOHOME.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-gb'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='cirrus' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

Link to comment

As an addition, I modified my XML so the NVMe image uses the following:

<driver name='qemu' type='raw' cache='none' io='threads'/>

 

It did make things faster. I have attached a CrystalDiskMark run on my NVMe with two VMs booted and running.

The read speeds aren't as high as they could be, but they are pretty quick.

 

I also added this to the bottom of my go file:

mkdir /mnt/nvme0
mount /dev/nvme0n1p1 /mnt/nvme0

 

I have my VMs set to autostart, and the go file mounts the NVMe on boot for me.

 

Regards,

Jamie

[Attached: CrystalDiskMark screenshot (ssd.jpg)]

Link to comment

Quote (Jamie): the cache='none' io='threads' XML tweak and go-file mount from the post above.

I will definitely try this when I get home  :D

I just picked up an Intel 750 400GB to play with.

Link to comment

Quote (Jamie): the cache='none' io='threads' XML tweak and go-file mount from the post above.

 

Interesting, I am getting only half your speeds (approx. 800MB/s read) with the 950 NVMe drive passed through to the VM. I thought it would be as quick, if not quicker? I added your XML modifications as well, but no difference. I wonder what it could be.

Link to comment

Quoting my post above (approx. 800MB/s read with the 950 passed through, despite Jamie's XML tweaks):

 

Any ideas what's going on here chaps?

 

I performed a bench of the NVMe in unRAID and got the following impressive 2.3GB/s figure.

However, in Windows 10, CrystalDiskMark only reaches approx. 900MB/s read. I guess there is more I can do to optimize my VM?

It's currently in IMG format. Any tips, people?

 

root@MOUNRAID01:/dev# hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
 Timing cached reads:   19914 MB in  1.99 seconds = 9995.03 MB/sec
 Timing buffered disk reads: 7008 MB in  3.00 seconds = 2335.83 MB/sec

root@MOUNRAID01:/dev# hdparm -tT /dev/nvme0n1

/dev/nvme0n1:
 Timing cached reads:   20040 MB in  1.99 seconds = 10058.78 MB/sec
 Timing buffered disk reads: 7000 MB in  3.00 seconds = 2332.82 MB/sec

 

Link to comment

Quote: "I will definitely try this when I get home :D ... I just picked up an Intel 750 400GB to play with."

 

I posted something about the tweaks I tried to improve performance in THIS thread.

 

In short: I use "cache=unsafe" and "x-data-plane=on".

 

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
...
<devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='unsafe' io='threads'/>
      <source file='/mnt/nvme/Rechner/Rechner_vDisk1.img'/>
      <backingStore/>
      <target dev='hda' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
  ...
  ...
  </devices>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.scsi=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.config-wce=off'/>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>

[Attached: benchmark screenshot (bench.PNG)]

Link to comment

Quote: "Interesting, I am getting only half your speeds (approx. 800MB/s read) with the 950 NVMe drive passed through to the VM..."

 

You did not actually pass through your PCI device. You just mapped your block device through a virtio disk device to your VM.

qemu/kvm does not need to know that it is a disk you are passing through; that layer just adds overhead that NVMe is trying to remove.

 

With true PCI passthrough, you get pretty much native performance, because you remove any overhead that qemu/kvm may add: your VM would directly access the PCI device. But as I wrote you via PM, you would probably need to use OVMF (EFI) instead of SeaBIOS, because I don't know if SeaBIOS can boot from PCI/NVMe.

Link to comment

Quote (dAigo): "You did not actually pass through your PCI device... you would probably need to use OVMF (EFI) instead of SeaBIOS, because I don't know if SeaBIOS can boot from PCI/NVMe."

 

How would you go about making that modification to do the direct pass-through and still be able to boot from it?

Link to comment

Like that: Simpler / Easier PCI Device Pass Through for NON-GPUs

 

You don't need to tell your VM that this device is a disk, so remove the <disk ...> part.

Win10 should see that PCI device and identify it like it does any other PCI device.

Of course, passthrough can always cause problems, depending on the hardware (IOMMU groups etc.).

 

This way, it can use the NVMe drivers instead of virtio and thereby remove the driver/protocol overhead. A minimal sketch of what that looks like is below.
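For illustration only, here is a sketch of the <hostdev> entry such a passthrough might use, assuming the NVMe controller sits at host PCI address 01:00.0 as in the XML posted earlier (your address and IOMMU group may differ):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- host PCI address of the NVMe controller; 01:00.0 is taken from the earlier XML -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>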

 

Link to comment

Well, "not supported" may be right, I don't know.

 

But it definitely works. And to make sure, I just copied all the stuff from my NVMe disk to the array and created a new VM:

1 core, no Hyper-V, OVMF, and PCI passthrough for my disk.

 

Win10 x64 detected the drive without any additional drivers.

After installation it boots (although I have to add the boot option in the EFI shell).
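For anyone stuck at that step, a hedged sketch of adding the boot option from the EFI shell using bcfg; FS0: and the bootmgfw.efi path are assumptions based on a default Windows install, so check your own mappings first:

# refresh and list filesystem mappings to find the EFI system partition
map -r
# add the Windows boot manager as boot option 0 (path assumes a default install)
bcfg boot add 0 FS0:\EFI\Microsoft\Boot\bootmgfw.efi "Windows 10"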

 

As you can see in the screenshot, Windows detects the disk correctly as an Intel NVMe SSD...

However, that's probably the catch: I think, as of now, Intel is one of the few NVMe vendors whose driver is built into the installation media.

 

That kind of compatibility is why I generally recommend the more expensive and sometimes even slightly "slower" Intel over Samsung.

 

As a side note: pre-1151 everything works fine; with 1151, at least for me, GPU passthrough with OVMF is broken.

Maybe there are solutions for that problem, but I switched back to SeaBIOS, because I don't need the EFI features right now.

But passthrough of the NVMe device still works with 1151 and OVMF; it's just kind of useless without a GPU :)

[Attached: passthrough screenshot (passthrough.jpg)]

Link to comment

Soo, since I already installed...

I looked into the OVMF/GPU-passthrough issues I had, and it seems a newer OVMF version resolves them.

 

Got it already compiled from HERE.

Just extract "OVMF-pure-efi.fd" from the "edk2.git-ovmf-x64-0-20160209.b1474.g7d0f92e.noarch.rpm" file, put it somewhere qemu can access (array/flash drive/etc.), and change the path of the OVMF file to it. A sketch of the extraction is below.
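In case the extraction step isn't obvious, a minimal sketch on any Linux box with rpm2cpio and cpio available (the path inside the rpm is my assumption from how these nightly OVMF packages are usually laid out):

# unpack the rpm into the current directory
rpm2cpio edk2.git-ovmf-x64-0-20160209.b1474.g7d0f92e.noarch.rpm | cpio -idmv
# copy the firmware image somewhere qemu can reach, e.g. the flash drive
cp ./usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /boot/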

 

I guess it would be nice if a newer version of OVMF made it into 6.2.

 

So, to stay on topic: I have no problem passing through my PCI NVMe disk...

 

I think I will move everything back tomorrow; the normal SSD is so slow :P

Unless someone needs to get something tested...

[Attached: passthrough screenshot (1151-passthrough.JPG)]

Link to comment

Quote (dAigo): the post above about the updated OVMF build and NVMe passthrough.

 

I did this, but I am still having a strange issue. Each time my W10 VM reboots, I lose the ability to boot it until I restart unRAID. I have tried everything, including using separate OVMF .fd files. No matter what, I can only boot my VM once in EFI without restarting the server; very, very annoying.

Link to comment
  • 3 weeks later...
  • 2 weeks later...

Unraid 6.2 still does not support NVMe?

 

I had to return my NVMe. I tried to find an alternative to Unraid so I could use it, but I couldn't :'(

 

Well, to say it "doesn't" support it is not necessarily true... you can mount the drive through the shell without issue (it's quite simple to do, actually; see the two commands below).

 

What you can't do (as far as I understand it) is assign it as a cache drive, or even see it appear in the web GUI at all.
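For reference, the manual version is just the same two commands posted earlier in the thread, assuming the first partition is /dev/nvme0n1p1:

mkdir -p /mnt/nvme
mount /dev/nvme0n1p1 /mnt/nvme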

Link to comment

Yeah, the whole package with cache and array...

I put that info in the opening post, so that new people can find it.

 

But 6.2 is still in beta and it seems it has a ton of changes.

I would have no problem using that beta (on a new system); unRAID betas are usually really stable.

 

But from the patch notes it seems there are a lot of things one has to consider, depending on the current system.

That being said, I'll definitely go and upgrade ASAP, but not without careful planning :)

 

Whatever the outcome, it's really nice how fast we got from "driver loaded" (Nov '15) -> "it's on our list" (Dec '15) -> "we talked to someone at CES" -> "we ordered a test device" (Jan '16) -> "beta support" (Mar '16).

Link to comment

Looking forward to this feature, as I have a Samsung 950 loaded and ready. When I tested this on a normal Windows 10 box, the speed was phenomenal: 1800MB/s read and 900MB/s write.

 

What size is your 950?

 

My 950 Pro 512GB is doing 2200/1800 MB/s on my X99 system.

 

 

I'm running a 256GB Samsung SM951 in unRAID 6.2b18 and it's generally working: temps are being read, but no SMART.

Link to comment

Just as a quick, slightly off-topic question: I have a Samsung 951 256GB on the new beta. On my motherboard it is positioned under two GPUs.

What sort of temps are you guys reading? Mine tends to sit in the 50s, but I've seen it hit 58°C under heavy load. I know that's very hot for an HDD, but I think these are safe up to around 70°C.

Link to comment
