parkanoid Posted June 7, 2016

Any idea what's causing this, or any workarounds? So far I've tried SeaBIOS/OVMF, Windows 7 and 8, and both stable and the newest beta, but as soon as I raise the memory to 32GB+, the Windows boot always takes 2-5 minutes instead of the usual 10-20 seconds with 8GB. Looking at the dashboard, the CPU is at 100% during the whole boot until it's in Windows; then it's back to normal and everything works fine. Very odd. It seems the more memory I allocate, the slower the boot; it seemingly locks up at the Windows logo during boot.

System: 12-core Xeon E5-2678 v3, 64GB DDR4, unRAID 6.2 beta
SpaceInvaderOne Posted June 7, 2016

Hmm, that's interesting. I had/have a similar problem, but I noticed it when I assigned a lot of cores, and only when I passed through my GTX 970. Passing through an HD 6450 or using VNC, I didn't get the slow boot. I am waiting for a GTX 1080 and will see if I have the same issue with that.
f3dora Posted June 8, 2016

Do you use USB passthrough? When I use the unRAID USB device passthrough instead of passing through the entire controller (don't forget to stub the controller!), I get a lot of weird bugs: Windows activation doesn't work, slow boot, and a lot of BSODs.
parkanoid Posted June 9, 2016 (Author)

f3dora said:
"Do you use USB passthrough? When I use the unRAID USB device passthrough instead of passing through the entire controller (don't forget to stub the controller!), I get a lot of weird bugs: Windows activation doesn't work, slow boot, and a lot of BSODs."

I just use the unRAID device passthrough for mouse/keyboard, and an HD 6950 for GPU passthrough. The thing is, it's all working fine and boot is super fast until I hit the high memory numbers. I'll do some more testing with VNC and different core counts to see what makes a difference.

Edit: I did a bunch of tests with different vCPU/memory numbers, and it seems raising either vCPUs or memory triggers the slower boots, but memory more so than cores. GPU/USB didn't seem to affect it at all, and neither did the machine type. I tried OVMF as well earlier, and I think I had similar boot times (will have to retest this). All in all it's very usable, just slightly annoying; performance seems really good after the boot hiccup.

XML of what I was planning to use:

```xml
<domain type='kvm' id='9'>
  <name>Client Windows 8.1</name>
  <uuid>1cb15b09-ccfb-7688-da03-f2a6271e853f</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 8.x" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>33554432</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>20</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
    <vcpupin vcpu='6' cpuset='8'/>
    <vcpupin vcpu='7' cpuset='9'/>
    <vcpupin vcpu='8' cpuset='10'/>
    <vcpupin vcpu='9' cpuset='11'/>
    <vcpupin vcpu='10' cpuset='14'/>
    <vcpupin vcpu='11' cpuset='15'/>
    <vcpupin vcpu='12' cpuset='16'/>
    <vcpupin vcpu='13' cpuset='17'/>
    <vcpupin vcpu='14' cpuset='18'/>
    <vcpupin vcpu='15' cpuset='19'/>
    <vcpupin vcpu='16' cpuset='20'/>
    <vcpupin vcpu='17' cpuset='21'/>
    <vcpupin vcpu='18' cpuset='22'/>
    <vcpupin vcpu='19' cpuset='23'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='10' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='unsafe'/>
      <source file='/mnt/user/Vidisks/Client Windows 8.1/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:c4:d3:81'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-Client Windows 8.1/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom bar='on' file='/mnt/user/ISOs/asus6950.rom'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc051'/>
        <address bus='3' device='2'/>
      </source>
      <alias name='hostdev2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x04b4'/>
        <product id='0x0101'/>
        <address bus='3' device='3'/>
      </source>
      <alias name='hostdev3'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>
```
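For anyone trying workarounds: one commonly suggested tweak for this exact symptom (with a passed-through device, QEMU has to fault in and pin all guest RAM up front, which takes longer the more memory is locked) is to back the VM with preallocated hugepages. This is a hypothetical sketch, not something tested in this thread, and it requires hugepages to be reserved on the host first (e.g. via the `vm.nr_hugepages` sysctl):

```xml
<!-- Hypothetical variant of the <memoryBacking> block in parkanoid's XML:
     back the guest with preallocated 2MB hugepages so the 32GB allocation
     doesn't have to be faulted in as small pages at VM start.
     The host must reserve enough hugepages beforehand. -->
<memoryBacking>
  <hugepages/>
  <nosharepages/>
  <locked/>
</memoryBacking>
```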
parkanoid Posted June 21, 2016 (Author)

Has no one noticed similar behavior?
testdasi Posted June 21, 2016

Maybe the reason is that very few people would assign 32GB of RAM to a VM and still be able to start it. That realistically means the server has 64GB+ of RAM, and not that many motherboards support 64GB+ to begin with.
SpaceInvaderOne Posted June 21, 2016

testdasi said:
"Maybe the reason is that very few people would assign 32GB of RAM to a VM and still be able to start it. That realistically means the server has 64GB+ of RAM, and not that many motherboards support 64GB+ to begin with."

No, I don't think it is motherboard related. I have 64GB in my server on an X99 board. I get slow boot speeds when passing through a GTX 1070 to VMs with higher core counts. I can boot to bare-metal Windows with 14 cores and 64GB of RAM, and boot speed is normal.
testdasi Posted June 22, 2016

SpaceInvaderOne said:
"No, I don't think it is motherboard related. I have 64GB in my server on an X99 board. …"

Oops, I meant to reply to parkanoid's question ("Has no one noticed similar behavior?"), nothing about the motherboard. I just wanted to point out that the number of people who can assign 32GB of RAM to a VM is small, so that's why few people are noticing/reporting it.
RifleJock Posted April 4, 2018

When I use my non-ECC 128GB and assign 32GB+ to Win10, it takes about 8 minutes to get to TianoCore, and then about 15 minutes after that to the desktop. 4x 1TB WDVRs and 2x 1.2TB Intel 750 NVMe drives, i7-6950X, 1080 Ti, which I finally got to work. If I use unRAID on my server board with dual socket-2011-3 Xeons and 1TB of ECC RAM, Windows with 32GB+ gives me about 3 minutes total boot time.
tr0910 Posted April 4, 2018 (edited)

With my dual E5-2670 rig (Intel S2600CP motherboard, 96GB ECC RAM), the first boot of a 2-core, 32GB Win10 Pro VM over VNC takes 8 seconds to TianoCore and a further 78 seconds to the login screen. This is with only 2 cores (6/22). Changing to 4 cores (6/22 and 15/31) drops the time to the login screen to 10 seconds, after the same 8 seconds to TianoCore. Bumping to 24 of the 32 total cores allocated to this Win10 VM drops it to 8 seconds to the login screen, again with the same 8 seconds to TianoCore. All testing done over VNC.

Initial Memory: 32768
Max Memory: 32768
Machine: i440fx-2.10
BIOS: OVMF

More memory and more cores are not slowing my VM boot speed at all. (Nothing is passed through to this Win10 VM. unRAID 6.5.0.)

Edited April 4, 2018 by tr0910
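A side note on the numbers: unRAID's memory fields appear to be in MiB while the libvirt XML stores KiB (my reading, based on the values in this thread), so tr0910's "Initial Memory: 32768" corresponds to the `<memory unit='KiB'>33554432</memory>` line in parkanoid's XML. A quick check:

```python
# unRAID VM form value (assumed MiB) vs. the libvirt XML value (KiB)
mib = 32768                 # "Initial Memory: 32768" from tr0910's post
kib = mib * 1024            # what libvirt writes: <memory unit='KiB'>...</memory>
gib = mib / 1024            # the amount everyone is talking about
print(kib, gib)             # 33554432 32.0  -> matches the 32GB in the thread
```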
1812 Posted April 4, 2018

9 hours ago, tr0910 said:
"… More memory and more cores are not slowing my VM boot speed at all. (Nothing is passed through to this Win10 VM. unRAID 6.5.0.)"

Pass through a GPU and watch it crawl when adding large amounts of RAM.
RifleJock Posted July 25, 2018 (edited)

I've upgraded my system since the last post. I'm now running dual E5-2696 v4s with 512GB of 2,400MHz ECC RAM, same board as before. It turns out I was running single-channel... now it's in octa-channel. Much faster. I have also tweaked a few ECC settings. On the i7-6950X build with non-ECC RAM, high-RAM boot times are still outrageous. The dual-Xeon ECC build with a 64GB RAM VM now boots in roughly 12 seconds. It looks like ECC plays a huge factor in VM boot-ups. That, or I'm on a newer version of unRAID and didn't realize it.

Edited July 25, 2018 by RifleJock
1812 Posted July 25, 2018

This issue was generally resolved with the release of 6.5.3.
rix Posted January 20, 2019

I still see this in 6.6.6. Assigning 16-20GB slows my VM boot to up to 3 minutes... the first boot is always quick.
Warrentheo Posted January 20, 2019

I have 6.6.6 with a Win10 VM and pass through 50GB of my 64GB to it... I have not noticed an issue with boot times since I finished tweaking the XML and Windows... I may have accidentally fixed this issue with the tweaks I applied, but I don't think so...
1812 Posted January 20, 2019

1 hour ago, rix said:
"I still see this in 6.6.6. Assigning 16-20GB slows my VM boot to up to 3 minutes... the first boot is always quick."

Ryzen has weird issues. I believe if you search for "Ryzen vm slow boot" or similar, you might find some tips. Also, start your own thread, as this one is 2.5 years old.
rix Posted January 27, 2019

QEMU 3.1 in the newest 6.7 RC is much faster for me!
CraigGivant Posted February 14, 2019

I did not want to start a new thread for this, as I'm merely reporting a finding. I found this thread because I was also experiencing VERY slow boot times. In fact, the VM was going into suspend mode and I would need to force-stop it the first time. After that it would boot, but it was taking up to 10 minutes to do so. I was assigning 48GB of RAM to a Windows 10 VM running on a Threadripper 1950X on 6.6.6. Dropping this down to 32GB (the host has 80GB), the machine boots in about 15 seconds. I'm not posting diagnostics or looking for any support, and I will retry this once 6.7 reaches stable, but if anybody wants to investigate this, just hit me up and I'll grab some logs.
johnarvid Posted March 5, 2019

I probably have the same problem: very slow boot when doing passthrough with a lot of memory. The first pinned CPU is working at 100%. I will try 6.7 soon, but this has been a problem since at least 2016, so I don't have any expectations. Same as Craig: let me know if any logs or other info are needed.
johnarvid Posted March 5, 2019

Tested with 6.7-rc5, but no improvement.
bastl Posted March 5, 2019

If this is the first start of the VM, set it to only 1 core. Initial setup with only 1 core assigned fixed it for me when I had that issue. @johnarvid
johnarvid Posted March 5, 2019

@bastl Do you mean the first start of the VM after a host start, or the first start in the VM's lifetime?
bastl Posted March 5, 2019

I mean when I set up a new Windows VM. Sometimes it uses only a single core at 100% load, and at that point it takes ages to get through the setup. Limiting the VM to only 1 core for the setup always fixed it for me. Sometimes I had the same issue after changing some settings for the VM (removing the GPU, adding another disk). Same fix: 1 core, the VM starts fine, and after that I could reassign more cores and it would work.
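In XML terms, bastl's temporary one-core change would look something like this (an illustrative fragment modeled on the `<vcpu>`/`<cputune>` block in parkanoid's XML earlier in the thread; the cpuset value is just an example):

```xml
<!-- Temporary first-boot settings (illustrative): a single vCPU pinned to
     one host core. Revert to the full <vcpu>/<cputune> block after setup. -->
<vcpu placement='static'>1</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
</cputune>
```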
johnarvid Posted March 5, 2019

OK, no, that is not applicable for me. I use an "old" VM.
Marshalleq Posted May 22, 2019

I came here looking for why it takes so long to get to the TianoCore screen, on 6.6.7. So I guess I'm a +1, but I have nothing educated to base it on yet.