Javen Posted September 4, 2022

I upgraded my Windows 11 VM to the latest version and there seems to be a new option under "Device security" named Core isolation / Memory integrity (or maybe I just didn't notice it before). Enabling the option makes the VM extremely slow. Before enabling it, CPU usage was under 5%; after enabling it, usage jumps between 40% and 70% and sometimes goes to 100%.

My VM details:
CPU: i7-11700K (4 cores / 8 threads assigned, CPU isolation set)
Memory: 16 GB
2 NVMe SSDs passed through
GPU: Nvidia GTX 1070 passed through
The 11700K's iGPU is shared with a Plex docker for hardware transcoding.

I read somewhere that the option is based on virtualization technology, so I assume it might conflict with virtual machines. I also understand it can be avoided by simply not enabling the option. I just want to confirm whether my understanding is correct, or whether there is a real fix for it.

Here are my VM settings:

<name>Windows 11 Work Station</name>
<metadata>
  <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
</metadata>
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<memoryBacking>
  <nosharepages/>
</memoryBacking>
<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <vcpupin vcpu='3' cpuset='9'/>
  <vcpupin vcpu='4' cpuset='2'/>
  <vcpupin vcpu='5' cpuset='10'/>
  <vcpupin vcpu='6' cpuset='3'/>
  <vcpupin vcpu='7' cpuset='11'/>
</cputune>
<resource>
  <partition>/machine</partition>
</resource>
<os>
  <type arch='x86_64' machine='pc-q35-6.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/f96ea022-1e18-bc17-4bda-52bba8a978cf_VARS-pure-efi-tpm.fd</nvram>
</os>
<features>
  <acpi/>
  <apic/>
  <hyperv mode='custom'>
    <vpindex state='on'/>
    <synic state='on'/>
    <stimer state='on'/>
    <reset state='on'/>
    <vendor_id state='on' value='KVM Hv'/>
    <frequencies state='on'/>
  </hyperv>
</features>
<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='4' threads='2'/>
  <cache mode='passthrough'/>
  <feature policy='require' name='topoext'/>
  <feature policy='disable' name='monitor'/>
  <feature policy='require' name='hypervisor'/>
  <feature policy='disable' name='svm'/>
  <feature policy='disable' name='x2apic'/>
</cpu>
<clock offset='utc'>
  <timer name='hypervclock' present='yes'/>
  <timer name='hpet' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
PeteAsking Posted September 7, 2022

Hi. Also noticed this. The fix is to purchase a faster CPU of a newer generation to mitigate the performance impact of these newer security requirements. It's like running a nested VM inside the VM for CPU processes, which is quite taxing, even if security is improved.

Kind regards,
Pete

Edited September 7, 2022 by PeteAsking
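The "nested VM" comparison is apt: Memory integrity is part of Windows virtualization-based security, so the guest itself starts a hypervisor, which in turn needs VT-x/AMD-V exposed to the VM and nested virtualization enabled on the KVM host. With <cpu mode='host-passthrough'> (as in the XML above) the flag is forwarded automatically when the host allows nesting; with a named CPU model you would have to require it explicitly. A minimal sketch for an Intel host (AMD would use 'svm' instead of 'vmx'):

```xml
<cpu mode='custom' match='exact' check='partial'>
  <model fallback='allow'>Skylake-Client</model>
  <!-- Expose VT-x to the guest so Windows can start its VBS hypervisor.
       Requires nested virtualization on the host (kvm_intel nested=1). -->
  <feature policy='require' name='vmx'/>
</cpu>
```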
Javen Posted November 14, 2022 (Author)

On 9/7/2022 at 8:32 PM, PeteAsking said:
Hi. Also noticed this. The fix is to purchase a faster CPU of a newer generation to mitigate the performance impact of these newer security requirements. It's like running a nested VM inside the VM for CPU processes, which is quite taxing, even if security is improved. Kind regards, Pete

I purchased a brand new 7950X recently and tested the exact same scenario. I allocated 10 cores / 20 threads and 32 GB of RAM to the VM. With the feature enabled there is no big difference in daily usage. However, if you monitor CPU usage or try a benchmark like CPU-Z, you can still see a big performance drop, e.g.:

CPU-Z single core:
Core isolation off: 740-750
Core isolation on: 600+

CPU usage:
Core isolation off: normally <10%, or even 5%
Core isolation on: 10%-20%; opening a File Explorer window can push it to 30%-40% (with the option off, opening a new File Explorer window costs about 10% CPU).

As I said, a CPU like the 7950X can still leave it enabled, but the cost is high. I still prefer it off.
ghost82 Posted November 14, 2022

Sometimes setting CPU pinning can lead to worse performance than not setting it. When you use CPU pinning you should consider:
1. Not pinning core 0 and its hyperthread; core 0 and its hyperthread are best left to the host, not assigned to the VM.
2. Setting the emulatorpin cpuset to core 0 and its hyperthread (the ones in use by the host).
3. Using lstopo to map your CPU topology, so you can assign vcpupin, emulatorpin, and iothreadpin correctly.
4. Using lstopo to define a topology in your XML.

About lstopo, here is a tutorial from 2018, but still valid: https://forums.unraid.net/topic/74207-video-guide-how-to-use-lstopo-for-better-vm-performance-on-multi-cpu-and-threadripper-systems/
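Applied to the 11700K XML posted above, points 1 and 2 might look like the sketch below. The exact CPU numbers are an assumption (on many 8-core/16-thread Intel layouts core 0's hyperthread sibling is CPU 8, and cores 4-7 pair with 12-15) -- verify your own pairing with lstopo before copying anything:

```xml
<vcpu placement='static'>8</vcpu>
<cputune>
  <!-- VM vcpus moved to cores 4-7 plus their hyperthreads 12-15,
       leaving core 0 and its sibling free for the host -->
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='12'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='13'/>
  <vcpupin vcpu='4' cpuset='6'/>
  <vcpupin vcpu='5' cpuset='14'/>
  <vcpupin vcpu='6' cpuset='7'/>
  <vcpupin vcpu='7' cpuset='15'/>
  <!-- QEMU emulator threads pinned to the host-reserved core 0 pair -->
  <emulatorpin cpuset='0,8'/>
</cputune>
```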