Hyper-V inside a Server 2016 VM


Recommended Posts

Hey,

 

I have just downloaded the latest stable unraid version and installed it.

 

I have my Windows Server 2016 VM working. Inside that VM, when I try installing the Hyper-V role, I get an error saying virtualization is not enabled in the BIOS.

 

So, is what I am trying to do supported by unRAID, or is it a limitation of Server 2016?

 

If there are any settings I need to change, can you send me a link or directions?

 

Thanks for the help.

  • 3 weeks later...
  • 2 months later...

hey,

 

With the new unRAID 6.3 RC version you can enable Hyper-V and pass through an Nvidia card.

 

You have to set a vendor_id; see this example:

 

    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='123456789ab'/>
    </hyperv>

 

You can read about this here:

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#.22Error_43_:_Driver_failed_to_load.22_on_Nvidia_GPUs_passed_to_Windows_VMs

  • 3 weeks later...

Hey chvb and others in this thread.

Have any of you gotten Hyper-V support to work?

I was pleased to see chvb's post, and just yesterday updated to 6.3 RC.

 

However, I still can't enable Hyper-V in my Windows VM.

 

I have added the vendor_id, and I do have Nvidia card passthrough (GTX 1060). I tried both i440fx and Q35 to see if that made any difference.

Attached is the complete XML for the Q35 setup. Originally I was using i440fx; both were tried with the newest .7 version.

 

It boots fine, no problems there.

 

A grep of ps axf shows that unRAID/virsh correctly parsed the XML and added hv_relaxed, hv_vapic, hv_spinlocks AND hv_vendor_id, as it is supposed to.

$ ps axf | grep qemu

22360 ?        SLl    8:57 /usr/bin/qemu-system-x86_64 -name guest=Windows 10 HyperV,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-Windows 10 HyperV/master-key.aes 
-machine pc-q35-2.7,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=123456789ab 
-drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on 
-drive file=/etc/libvirt/qemu/nvram/5e9c2cf0-54a6-77a8-83a6-bad5b70ba573_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 10240 -realtime mlock=off -smp 8,sockets=1,cores=4,threads=2 -uuid 5e9c2cf0-54a6-77a8-83a6-bad5b70ba573 
-display none -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-Windows 10 HyperV/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime 
-no-hpet -no-shutdown -boot strict=on -device i82801b11-bridge,id=pci.1,bus=pcie.0,addr=0x1e -device pci-bridge,chassis_nr=2,id=pci.2,bus=pci.1,addr=0x0 
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x2 
-drive file=/mnt/disk1/isos/Windows10.iso,format=raw,if=none,media=cdrom,id=drive-sata0-0-0,readonly=on -device ide-cd,bus=ide.0,drive=drive-sata0-0-0,id=sata0-0-0,bootindex=2 
-drive file=/mnt/disk1/isos/virtio-win-0.1.126.iso,format=raw,if=none,media=cdrom,id=drive-sata0-0-1,readonly=on -device ide-cd,bus=ide.1,drive=drive-sata0-0-1,id=sata0-0-1 
-drive file=/mnt/disk1/domains/Windows 10/vdisk1.img,format=raw,if=none,id=drive-sata0-0-2,cache=writeback -device ide-hd,bus=ide.2,drive=drive-sata0-0-2,id=sata0-0-2,bootindex=1 
-netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:11:58:2e,bus=pci.2,addr=0x1 
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-5-Windows 10 HyperV/org.qemu.guest_agent.0,server,nowait 
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 
-device vfio-pci,host=01:00.0,id=hostdev0,bus=pci.2,addr=0x3,romfile=/boot/vbios.rom -device usb-host,hostbus=5,hostaddr=3,id=hostdev1,bus=usb.0,port=1 
-device usb-host,hostbus=3,hostaddr=4,id=hostdev2,bus=usb.0,port=2 -device usb-host,hostbus=3,hostaddr=6,id=hostdev3,bus=usb.0,port=3 
-device usb-host,hostbus=5,hostaddr=2,id=hostdev4,bus=usb.0,port=4 -device virtio-balloon-pci,id=balloon0,bus=pci.2,addr=0x4 -msg timestamp=on

 

Everything is working as normal, no problems. Nvidia card is in use, etc.

However, in my "Windows Features" the Hyper-V Platform entry is greyed out, telling me that the "Processor does not have required virtualization capabilities". Image attached.

 

Am I missing something?

I'm hoping to get this to work, as I'd like to run my old Windows 7 temporarily in Hyper-V. I was not successful in booting my old Windows 7 (without it BSOD'ing) in either unRAID or VirtualBox. Hoping Hyper-V will allow me to boot it up.

 

Win10HyperVXml.txt

hypervcap.png


Nested virtualization was disabled in 6.3rc9 and 6.3 stable. It can be re-enabled by adding these lines to the top of your /boot/config/go script:

 

echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf
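
For context, those two lines just drop option files into /etc/modprobe.d, and nested=1 only takes effect the next time the kvm-intel / kvm-amd modules are loaded. A runnable sketch of the same thing, using a temporary directory in place of /etc/modprobe.d so it can be tried anywhere without touching the system:

```shell
# Sketch only: writes the same modprobe option files the go script would,
# but into a temp dir instead of /etc/modprobe.d. On unRAID the real target
# is /etc/modprobe.d/, and the option applies when kvm-intel / kvm-amd are
# next (re)loaded.
dest=$(mktemp -d)
echo "options kvm-intel nested=1" > "$dest/kvm-intel.conf"
echo "options kvm-amd nested=1" > "$dest/kvm-amd.conf"
cat "$dest"/kvm-*.conf
```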

 


A bit confused...

My Windows 10 VM configuration had Hyper-V enabled,

and I could still pass through the Nvidia GPU, load the drivers, and benchmark without issues.

This was on unRAID 6.2.4, before I upgraded to 6.3.0.

 

But I keep reading that Hyper-V + Nvidia passthrough != LOVE...

Or is this an issue only when trying to use the Hyper-V feature? (To be frank, I didn't use Hyper-V...)

 

thanks

-d

 


I think I solved it, with an additional note.

I don't know if Bungy's post made a difference, but I'm guessing it did.

However: I still couldn't install Hyper-V through Programs and Features. Same message and greyed out, as in the screenshot above.

 

After a bit of googling and mumbling, I stumbled on a Microsoft link: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/quick-start/enable-hyper-v

By random chance I just attempted the PowerShell command written there: Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

And success: it installed Hyper-V, rebooted twice, and I now have Hyper-V Manager and everything installed.

 

NB:

I have not yet tried booting anything through it, so whether it actually works or is just installed, I can't say yet. I'm keeping my hopes high and will try during the weekend. I'll update this thread/post if I have success.

 



99% sure it won't work: it is installed, but the hypervisor itself is not completely started. For some reason Microsoft really doesn't want people nesting Hyper-V, even though it theoretically works perfectly (VMware has an option to bypass this "protection").


Just an FYI: enabling Hyper-V enlightenments in a guest VM and being able to run Hyper-V inside a guest to create nested VMs are NOT the same thing.

 

The Hyper-V setting available on the edit VM page refers to making the Windows guest VM "aware" that it is a VM. This lets Windows relax a bit on things like clock timing and spin locks. Read this as: possibly better performance for the guest VM.

 

What you guys want is to enable nested virtualization, which will let you install VMs under a guest running on unRAID. That was disabled by default in 6.3 due to stability concerns related to Avast (the antivirus software).

 

Lastly, being able to use Hyper-V with Nvidia GPU passthrough has been around since 6.2. We already automatically patch the VM XML for you, so those custom XML statements are not necessary.

 


 

 


Thanks for clarifying jonp :)

However that still doesn't really bring us closer.

 

I see now that when I disable the Hyper-V setting on a VM, I get the Nvidia passthrough problem that has been mentioned. By just enabling it, the problem is gone, which also shows that unRAID is already "faking" the vendor_id (with the text "none"?).

 

Bungy wrote:

Nested virtualization was disabled in 6.3rc9 and 6.3 stable. It can be re-enabled by adding these lines to the top of your /boot/config/go script:

    echo "options kvm-intel nested=1" > /etc/modprobe.d/kvm-intel.conf
    echo "options kvm-amd nested=1" > /etc/modprobe.d/kvm-amd.conf

 

However even with that I still can't install/run Hyper-V VMs.

I do not run Avast or other such software, so I'm really not concerned there.

 

How do we enable nested virtualization? Is Bungy's tip correct? Or is there something missing to re-enable nested virtualization?

Would I be forced to downgrade to a Beta release?

 

Linux 4.9.7-unRAID.
root@Tower:~# cat /etc/modprobe.d/kvm-intel.conf
options kvm-intel nested=1


Wow.. Thanks jonp :)

 

Although that's bad news for me :( I guess I'll have to see if I can get my P2V conversion done through other means.

 

Looking at the release notes, it looks as if I can expect 6.4 in at least 3-6 months.

At least thanks for finding that KVM bug, now I know the source of the frustration :D


I think my previous instructions may have been incorrect. We need to modify the kernel parameters on boot to enable nested virtualization in unRAID 6.3+. Modify /boot/syslinux/syslinux.cfg to append these parameters:

 

append kvm-intel.nested=1 kvm-amd.nested=1 initrd=/bzroot

 

instead of

append initrd=/bzroot
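
If you'd rather script that edit than do it by hand, here is a hedged sketch. It assumes the stock `append initrd=/bzroot` line and is idempotent, so running it twice does no harm; the function name and the commented-out path are only illustrations:

```shell
# Sketch: insert the nested-virt flags into a syslinux.cfg-style file.
# Assumes the stock unRAID boot line "append initrd=/bzroot".
add_nested_flags() {
  local cfg="$1"
  # Already patched? Then leave the file alone.
  grep -q 'kvm-intel\.nested=1' "$cfg" && return 0
  sed -i 's|append initrd=/bzroot|append kvm-intel.nested=1 kvm-amd.nested=1 initrd=/bzroot|' "$cfg"
}
# add_nested_flags /boot/syslinux/syslinux.cfg   # then reboot
```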

 

You can then check that nested virtualization is enabled using:

cat /sys/module/kvm_intel/parameters/nested

 

I'm sure the command is similar for AMD machines.
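
To check both vendors in one go, a small sketch (the sysfs root is parameterised only so the function can be exercised anywhere; call it with no argument on the real host):

```shell
# Sketch: print nested-virt status for whichever KVM module is loaded.
# "1" (or "Y" on newer kernels) means nested virtualization is enabled.
check_nested() {
  local root="${1:-}"
  local m f
  for m in kvm_intel kvm_amd; do
    f="$root/sys/module/$m/parameters/nested"
    [ -r "$f" ] && echo "$m: $(cat "$f")"
  done
  return 0
}
check_nested   # reads the real /sys on the host
```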


Has anyone having this problem tried this:

 

https://blogs.technet.microsoft.com/gbanin/2013/06/25/how-to-install-hyper-v-on-a-virtual-machine-in-hyper-v/

 

Please test if you can and post results.

Now I have good news and bad news for those of you eager to learn how to install Hyper-V in a virtual machine. The good news is that you can install it through PowerShell; the bad news is that, unfortunately, you are not able to start the virtual machines. But for a self-study lab it is already a great improvement: you can create a Hyper-V cluster and verify in practice how it all works. I will teach you how to install Hyper-V as well as the Cluster service.

Wouldn't matter, unfortunately.

 

I'm going to test out Bungy's post later today.

