Docker Desktop (Hyper-V) inside (nested) Windows 10 VM


Recommended Posts

Unraid 6.8.3

TL;DR: How do I get Docker Desktop running inside a W10 VM (if it is even possible)?

(and yes, there is a specific reason why I would like to run Docker inside the VM and not just connect to Unraid's daemon)

 

As far as I can determine, it should work, but it simply isn't.

 

I have enabled nested virtualization in Unraid:

systool -m kvm_intel -v | grep nested
nested              = "Y"
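In case it helps anyone landing here, a quick sketch for checking this from the Unraid shell (assumptions: Intel CPU and the `kvm_intel` module; on AMD the module is `kvm_amd` and the CPU flag is `svm`):

```shell
#!/bin/sh
# Check that the host CPU exposes VT-x at all ("vmx" flag; on AMD look for "svm").
grep -q -m1 vmx /proc/cpuinfo && echo "host CPU: vmx present" || echo "host CPU: vmx not found"

# Read the nested flag straight from sysfs; "Y" or "1" means nested virt is on.
# (kvm_intel path is Intel-specific; on AMD check /sys/module/kvm_amd/parameters/nested.)
flag=$(cat /sys/module/kvm_intel/parameters/nested 2>/dev/null || echo "unknown")
echo "kvm_intel nested=$flag"

# To turn it on until the next reboot, stop all VMs first, then:
#   modprobe -r kvm_intel && modprobe kvm_intel nested=1
```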

I was able to enable the Hyper-V features in the W10 VM, and as far as I can determine they should be working:

image.png.be7ce2ce78450827b0b25bd258fd246b.png

image.thumb.png.8627e1b4561b3cd88350be3034d78c4e.png

image.png.a17c2aca00ae83f20c3dcb562e52a41f.png

 

Docker Desktop installs fine, but it won't actually start:

Docker.Core.DockerException:
Docker.Core.Backend.BackendException:
Unable to start Hyper-V VM: 'DockerDesktopVM' failed to start.

Failed to start the virtual machine 'DockerDesktopVM' because one of the Hyper-V components is not running.
...

I am assuming this is because one of the Hyper-V services is not running and won't start (on a physical machine with Docker working, it is running):

image.png.05395a43370974806c7ad42aa61afe79.png

I also assumed the service issue was caused by this driver problem:

image.png.c4f1357f4b1a0ddbe1c12df1b3dd7e72.png

 

I have also tried disabling and re-enabling the features and reinstalling Docker Desktop, with the same end result.

I have also tried:

<kvm>
  <hidden state='on'/>
</kvm>

Which didn't seem to do anything.

Then I tried (separately):

<feature policy='disable' name='hypervisor'/>

Which resulted in W10 not being aware that it is a VM:

image.png.7fd7dc33a7e3910b4154abf04a4bd249.png

"Virtualization: Enabled" seemed like a good thing, and it also seemingly fixed the driver issue:

image.png.f07f509745c018cef44195c048b179ce.png

But the HV Host Service still behaves in exactly the same manner (so maybe the driver issue is not relevant at all?)

And when I try to run Docker Desktop, it fails with a new message:

Hardware assisted virtualization and data execution protection must be enabled in the BIOS. See https://docs.docker.com/docker-for-windows/troubleshoot/#virtualization-must-be-enabled

I also tried https://forums.unraid.net/topic/70040-guide-vms-in-vm-intel-nested-virtualization/ but it didn't seem to change the behaviour.

https://youtu.be/2-saWn6ZbHc?t=663 describes the same behaviour 3 years ago and attributes it to a bug that I assume should have been fixed by now?
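For completeness, the two libvirt tweaks above live in different places in the domain XML: `<hidden state='on'/>` sits inside a `<kvm>` element under `<features>`, while `hypervisor` is a CPUID flag that goes inside `<cpu>`. Sketched against the base config below:

```xml
<features>
  <acpi/>
  <apic/>
  <kvm>
    <hidden state='on'/>
  </kvm>
  ...
</features>
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='3' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='disable' name='hypervisor'/>
</cpu>
```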

 

At this point I'm out of ideas and my google-fu has let me down. Any help would be appreciated.

 

Base VM conf (without all the fix attempts):

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='53'>
  <name>ptvm</name>
  <uuid>ce31c424-4f6a-1f98-4667-de7ffad64628</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='6'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/ce31c424-4f6a-1f98-4667-de7ffad64628_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='3' threads='2'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vm/ptvm/vdisk1.img' index='2'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.141-1.iso' index='1'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:6f:03:42'/>
      <source bridge='br0'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/2'>
      <source path='/dev/pts/2'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-53-ptvm/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

 

Link to comment

Someone may have to correct me, but I think this tag is needed inside <cpu> for nested virtualization to work, prior to everything else you have tried:

<cpu mode='host-passthrough' check='none'>
    ...
    <feature policy='require' name='vmx'/>
    ...
</cpu>

At least it was needed when I tried some nested virtualization a year or so ago.
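Merged into the base config from the first post, the suggested <cpu> block would read as follows (a sketch; `vmx` is Intel-only, the AMD equivalent feature is `svm`):

```xml
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='3' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='vmx'/>
</cpu>
```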

Link to comment
32 minutes ago, Nephilgrim said:

Someone may have to correct me, but I think this tag is needed inside <cpu> for nested virtualization to work, prior to everything else you have tried:


<cpu mode='host-passthrough' check='none'>
    ...
    <feature policy='require' name='vmx'/>
    ...
</cpu>

At least it was needed when I tried some nested virtualization a year or so ago.

This was described here:

9 hours ago, k11su said:

I also tried https://forums.unraid.net/topic/70040-guide-vms-in-vm-intel-nested-virtualization/ but it didn't seem to change the behaviour.

And unfortunately, as mentioned, it resulted in the same error:

Failed to start the virtual machine 'DockerDesktopVM' because one of the Hyper-V components is not running.

(conf attached).

ptvm_2.xml

Link to comment

Warning: back up your vdisk before running the command below.

 

When I tried Docker Desktop, even on bare metal, it wouldn't work because of that same "Hyper-V component is not running" error.

The command below fixed it (run it as administrator).

bcdedit /set hypervisorlaunchtype auto

 

However, when booting as a VM (NVMe passed through in a dual-boot setup), the VM wouldn't boot (stuck at the TianoCore splash screen).

Hence, back up your vdisk before trying this, in case you need to restore.

 

 

 

 

 

 

Link to comment
1 hour ago, testdasi said:

Warning: back up your vdisk before running the command below.

 

When I tried Docker Desktop, even on bare metal, it wouldn't work because of that Hyper-V not running error.

The command below fixed it (run it as administrator).


bcdedit /set hypervisorlaunchtype auto

 

However, when booting as a VM (NVMe passed through in a dual-boot setup), the VM wouldn't boot (stuck at the TianoCore splash screen).

Hence, back up your vdisk before trying this, in case you need to restore.

Yep, this results in the VM not booting - eventually it will get to Windows startup repair, and the only way to fix it (that I have found) is a combination of:

and

which effectively just undoes the change (as far as I can determine).

Link to comment

If you can boot to Repair mode, just type the command below from the command line and the VM will boot back up again (obviously without Hyper-V):

bcdedit /set hypervisorlaunchtype off

 

I gave up on nested virtualisation quite a while ago because of this catch-22 situation.

Unraid has Docker and VM support, so it's not like I miss it, to be honest.

 

 

 

Link to comment
  • 3 weeks later...
  • 6 months later...

Having the same problem trying to start Docker with the WSL2 backend inside a Win10 VM with a passed-through Nvidia GPU (if that's relevant). The VM is running off a bare-metal install of Windows 10 on a passed-through HDD (when booting directly into that Windows install, Docker works).
grafik.png.0a9a0604a21756f751a20b57fba5db7b.png

 

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\admin\AppData\Local\Docker\wsl\distro: exit code: -1
 stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

For more information please visit https://aka.ms/wsl2-install


 stderr: 
   at Docker.ApiServices.WSL2.WslShortLivedCommandResult.LogAndThrowIfUnexpectedExitCode(String prefix, ILogger log, Int32 expectedExitCode) in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.ApiServices\WSL2\WslCommand.cs:line 146
   at Docker.Engines.WSL2.WSL2Provisioning.<DeployDistroAsync>d__17.MoveNext() in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 169
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.Engines.WSL2.WSL2Provisioning.<ProvisionAsync>d__8.MoveNext() in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 78
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.Engines.WSL2.LinuxWSL2Engine.<DoStartAsync>d__25.MoveNext() in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\LinuxWSL2Engine.cs:line 99
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.ApiServices.StateMachines.TaskExtensions.<WrapAsyncInCancellationException>d__0.MoveNext() in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\TaskExtensions.cs:line 29
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.ApiServices.StateMachines.StartTransition.<DoRunAsync>d__5.MoveNext() in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 67
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Docker.ApiServices.StateMachines.StartTransition.<DoRunAsync>d__5.MoveNext() in C:\workspaces\stable-2.5.x\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 92

If I disable the WSL2 backend, I don't get that error, but Docker says "failed to start".

grafik.png

Edited by nastard
Link to comment
  • 1 month later...

I have a similar problem to @nastard's here. When I start Docker using Windows-based containers inside the VM, it starts with no problem. When I start it using Linux-based containers, the error that @nastard mentioned in the previous post shows up. I checked that both VT-d in the BIOS and the Virtual Machine Platform feature in the Windows VM are enabled.

 

@testdasi Could you elaborate on your solution with more detail on the steps you followed?

 

image.png.cfc97e9afbc1a35bc6f60fe42372128d.png

 

Quote

System.InvalidOperationException:
Failed to deploy distro docker-desktop to C:\Users\sbilis\AppData\Local\Docker\wsl\distro: exit code: -1
 stdout: Please enable the Virtual Machine Platform Windows feature and ensure virtualization is enabled in the BIOS.

For information please visit https://aka.ms/wsl2-install


 stderr:
   at Docker.ApiServices.WSL2.WslShortLivedCommandResult.LogAndThrowIfUnexpectedExitCode(String prefix, ILogger log, Int32 expectedExitCode) in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.ApiServices\WSL2\WslCommand.cs:line 146
   at Docker.Engines.WSL2.WSL2Provisioning.<DeployDistroAsync>d__17.MoveNext() in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 169
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.Engines.WSL2.WSL2Provisioning.<ProvisionAsync>d__8.MoveNext() in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\WSL2Provisioning.cs:line 78
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.Engines.WSL2.LinuxWSL2Engine.<DoStartAsync>d__25.MoveNext() in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.Desktop\Engines\WSL2\LinuxWSL2Engine.cs:line 99
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.ApiServices.StateMachines.TaskExtensions.<WrapAsyncInCancellationException>d__0.MoveNext() in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\TaskExtensions.cs:line 29
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
   at Docker.ApiServices.StateMachines.StartTransition.<DoRunAsync>d__5.MoveNext() in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 67
--- End of stack trace from previous location where exception was thrown ---
   at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
   at Docker.ApiServices.StateMachines.StartTransition.<DoRunAsync>d__5.MoveNext() in C:\workspaces\PR-15077\src\github.com\docker\pinata\win\src\Docker.ApiServices\StateMachines\StartTransition.cs:line 92

 

Edited by [email protected]
Link to comment
  • 1 year later...
  • 10 months later...

Hi, I've been following this thread and I have a question: you say to run this command on the Hyper-V host, but how do you do that when you can't install Hyper-V on the host without the VM crashing to boot error 0xc0000225?

 

I've been working on this problem for 3 weeks now with Unraid and Windows Server 2022 running as a VM. It seems these VMs do not like running Hyper-V without crashing.

 

I don't need to run actual VMs on Hyper-V; I need it for failover clustering so I can take advantage of the Unraid NAS.

 

Any suggestions are greatly appreciated.

Link to comment
