[GUIDE] Optimizing Windows VMs in unRaid


m00nman


I switched from Proxmox to unRaid not long ago and ran into several problems/inconveniences that I managed to overcome. I'd like to share the fixes with everyone, as they don't seem to be common knowledge.

 

The issues:

 

1. Windows VM (Win10 1703+) high CPU usage when idle. Pretty self-explanatory: with the default settings VMs are created with, the CPU is constantly busy servicing interrupts.

2. No KSM enabled by default = much less RAM is free/available to other services when running 2+ Windows VMs at the same time. Over time, as docker containers started using more RAM, this caused the OOM (out of memory) killer to kick in and kill one of the VMs. This will probably be useful to people with limited RAM on their servers. I only have 32GB myself and this made a huge difference for me.

3. CPU pinning is the default in unRaid. Isolating certain cores for a specific VM is great in some situations, for example when unRaid is also your main PC and you want cores dedicated to the VM you use day to day for gaming or whatever else you do. But it's terrible for server workloads, especially if your server doesn't have many cores yet runs a lot of containers/services/VMs, because there is no way to know which core will be loaded at any given time while others sit idle.

 

Solutions:

 

1. I stumbled upon a thread on the forums that recommended enabling the HPET timer, which seemed to resolve the issue somewhat. The problem is that HPET is an unreliable clock source and often goes out of sync. The real solution is to enable Hyper-V Enlightenments, which were introduced in qemu 3.0 and are already partially enabled in unRaid by default. This is what Proxmox uses by default for Windows VMs.

 

Go to the settings for your Windows VM and enable XML view in the upper right corner. We will need to edit a few things:

add the following to the <hyperv mode='custom'> block:

      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>

Change migratable='on' to migratable='off', so the following line reads:

<cpu mode='host-passthrough' check='none' migratable='off'>

add the following to the <clock offset='localtime'> block:

   <timer name='hypervclock' present='yes'/>

 

In the end, it should look like this

[screenshot of the edited XML]
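
For readers without the screenshot, here is a minimal sketch of the three edited areas. Everything besides the three added hv entries, migratable='off', and the hypervclock timer is my assumption of typical unRaid template defaults, so your XML may differ slightly:

    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
    </hyperv>
    ...
    <cpu mode='host-passthrough' check='none' migratable='off'>
    ...
    <clock offset='localtime'>
      <timer name='hypervclock' present='yes'/>
      <timer name='rtc' tickpolicy='catchup'/>
      <timer name='pit' tickpolicy='delay'/>
      <timer name='hpet' present='no'/>
    </clock>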

 

The bonus is that this reduces idle CPU usage even further compared to HPET, without all of the HPET drawbacks. Please note this ONLY applies to Windows VMs. Linux and *BSD already use a different paravirtualized clock source.

 

2. (do NOT do this if you are gaming on the VM or otherwise need more predictable performance unaffected by all the other containers/VMs running on the same machine - you are better off adding more RAM to the host) unRaid does come with a kernel that has KSM (kernel samepage merging) enabled (thank you, unRaid dev team). KSM looks for identical memory pages across multiple VMs and replaces them with a single write-protected page, thus saving (a lot of) RAM. The more similar your VMs are, the more RAM you will save, with almost no performance penalty.

 

To enable KSM on every boot, append the following line to /boot/config/go (run it once manually to enable KSM immediately):

echo 1 > /sys/kernel/mm/ksm/run
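
Optionally, you can also tune how aggressively KSM scans via the same sysfs directory. This is purely optional and not part of the fix itself; the values below are just examples to experiment with:

    # pages scanned per wake-up (kernel default is typically 100)
    echo 200 > /sys/kernel/mm/ksm/pages_to_scan
    # delay between scans in ms (default is typically 20); lower = faster merging, more CPU
    echo 50 > /sys/kernel/mm/ksm/sleep_millisecs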

 

And remove the following block from all of the VMs configs that are subject to KSM:

  <memoryBacking>
    <nosharepages/>
  </memoryBacking>

 

Let it run for an hour or two, and then you can check whether it's actually working (besides seeing more free RAM) by running:

cat /sys/kernel/mm/ksm/pages_shared

The number should be greater than 0 if it's working. If it isn't working, then either your VMs aren't similar enough, or your server hasn't reached the threshold of % used memory.
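
If you want more detail than pages_shared alone, the neighbouring counters in the same directory are useful too. A rough sketch, assuming the usual 4 KiB page size:

    # dump the main KSM counters with their filenames
    grep -H '' /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing /sys/kernel/mm/ksm/full_scans

    # approximate RAM saved in MiB: pages_sharing sites are backed by only pages_shared real pages
    echo $(( ($(cat /sys/kernel/mm/ksm/pages_sharing) - $(cat /sys/kernel/mm/ksm/pages_shared)) * 4 / 1024 ))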

 

The result (this is with Windows 11 and Windows Server 2022 VMs, 8GB RAM each):

[screenshot: host memory usage]

 

3. (do NOT do this if you are gaming on the VM or otherwise need more predictable performance unaffected by all the other containers/VMs running on the same machine) We want to disable CPU pinning completely and let the kernel deal with scheduling and distributing load between all the cores of the CPU. Why is CPU pinning not always good? Let's assume you did your best to distribute and pin cores to the different VMs. For simplicity, say we have a 2-core CPU and 4 VMs: we pin core #1 to VM1 and VM3, and core #2 to VM2 and VM4. Now it so happens that VM1 and VM3 start doing something CPU-intensive at the same time. They have to share core #1 between the two of them, all while core #2 does absolutely nothing. Without pinning, the kernel scheduler would distribute that load across both cores.

 

Let's go back into the VM settings (XML view) and delete the following block:

  <cputune>
    .
    .
    .
  </cputune>

 

Make sure that the lines

<vcpu placement='static'>MAX_CPU_NUMBER</vcpu>

and

<topology sockets='1' dies='1' cores='MAX_CPU_NUMBER' threads='1'/>

still contain the maximum number of cores your VM is allowed to use (MAX_CPU_NUMBER is simply the number of cores you want to limit this particular VM to, so replace it with an actual number; see the example below).
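
As a concrete (hypothetical) example, a VM allowed to float across up to 6 cores would read:

    <vcpu placement='static'>6</vcpu>
    ...
    <topology sockets='1' dies='1' cores='6' threads='1'/>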

 

NOTE: if you switch back from XML view to the basic view, change some setting (could be a completely unrelated one), and save, unRaid may overwrite some of these settings. In particular, I noticed that it likes to reset the max cores assigned to the VM to just a single core. You will need to switch back to XML view and fix "vcpu placement" and "topology" again.

 

Bonus:

- Make sure you are only using VirtIO devices for storage and network

- Make sure CPU is in passthrough mode

- Disable drive encryption (BitLocker), which is enabled by default with the latest Win10 and 11 ISOs.

- For "network model" pick "virtio" for better throughput ("virtio-net" is the default)

- If you have a Realtek 8125A/B network adapter and are having throughput issues, have a look at @hkall's comment below. Alternatively, there is now a native r8125 driver available under Apps within unRaid.
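
For reference, the "network model" choice ends up as the <model> line of the interface definition in the XML. A hypothetical bridge example (br0 is unRaid's usual default bridge; my understanding is that 'virtio' gets vhost acceleration in the host kernel, while 'virtio-net' is handled in userspace):

    <interface type='bridge'>
      <source bridge='br0'/>
      <!-- change type='virtio-net' to type='virtio' for better throughput -->
      <model type='virtio'/>
    </interface>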

 

 

Edited by m00nman
  • 4 weeks later...

This is amazing! I'm in the process of building an Unraid server to replace both my current Plex server and gaming PC. I game very rarely (like once every few weeks), so I was hoping to use a Windows VM that had access to the majority of the resources on my Unraid server but would only need them occasionally: the resources would free up while the VM is hibernating, and I'd use wake-on-LAN to wake it up when needed again. Would this allow me to accomplish that?

Specs of my server:
i5-8500 (6 core, no HT)

16GB DDR4

1TB NVME Cache Drive

1TB NVME unassigned drive (was hoping to use this for windows vm/steam installs)

AMD RX6400

A bunch of bigger drives for the array

 

 

I obviously don't have enough cores/ram to just be assigning them away to a VM permanently, so would your method allow me to have my cake and eat it too?


I've been having the same issues and I am honestly about to kick the server over and just give up on self-hosting. This isn't my first rodeo with self-hosting (I've run XCP-ng, Hyper-V, etc. in the past without a hitch and have been in the IT industry for many years, so I've had exposure to all the big players), but this has by far been my most painful experience getting VMs working.

 

Docker containers? Works great in my testing so far. 

WebUI? Clean, relatively intuitive and responsive.

Setting up SMB/NFS shares? Easy. Have had minimal issues.

Running Linux VMs? No dramas at all from what I've seen.

Running a Windows VM? No chance in my (albeit limited) experience on Unraid. 

 

The CPU in my Windows 11 VM is pegged at 100% while idling, with MS Edge consuming the most CPU (I don't believe that is accurate, as Edge isn't even open; it must just be a background process). It's a brand new install of Windows and I haven't even installed any apps.

 

Host CPU util spikes to 100% on random cores. My system is an HP ML380 Gen 9 with an Intel Xeon E3-1240 v5 and 64 GB of ECC RAM, with 4 disks in my array (3 data, 1 parity; two of the disks are spun down as I haven't filled up the array enough yet and I have idle spin-down enabled) plus a 500 GB SSD cache drive. This shouldn't be happening.

 

The disk for the VM sits under /mnt/user/domains/<folder_for_vm> and I have the cache for the domains share set to Prefer: Cache.

 

I'm using VirtIO for everything. I've installed the latest VirtIO/KVM drivers in the VM.

 

I implemented your fixes and they seemed to alleviate some of the burden on the host (Unraid shows slightly less CPU util, and not across all cores now), but I still get the same unusable performance inside Windows. I'm not even trying to run a gaming VM or anything; it's more of a jump box/VDI for testing.

 

Really disappointing; this might force me to move away from Unraid and just run a full XCP-ng/Xen Orchestra stack with a TrueNAS Scale VM and direct passthrough for my storage needs. That over-complicates my setup and locks me into ZFS along with its ECC memory "requirements" (no one seems to have a straight answer as to whether ECC is a requirement or just strongly recommended; I have ECC RAM, but my next server might not, which limits my future expansion). ZFS/TrueNAS also has stricter disk requirements, so I can't just slap in whatever I have laying around. That's not a concern for now, as all my disks are the same (they came with the server), but they are old and one's bound to fail soon, and I don't intend on purchasing an OEM replacement direct from HP when I have a plethora of other 3.5" drives on hand.

 

Not sure if this is an issue specific to the version of Unraid I am running (6.11.5), but it's a real shame, as Unraid was gearing up to be the "all-in-one" solution that would've suited my environment well. Glad I'm only running the trial version, but I've still poured more hours into configuring this than I should have, given these results.

 

Any advice from anyone would be appreciated as I'm about to pull the plug on the whole thing and go back to the drawing board.

Edited by mitch98
On 3/24/2023 at 7:57 PM, mitch98 said:

Any advice from anyone would be appreciated as I'm about to pull the plug on the whole thing and go back to the drawing board.

Sorry, I can't suggest more than what's already described in the OP. #1, enabling Hyper-V enlightenments, should fix the issue you are having if done correctly. But like I said, if you are switching back and forth between XML view and regular view with saves in between, you are most likely losing some of the configuration (unRaid just overrides it).

 

I guess there is one more thing: Settings -> VM Manager -> change "Default VM storage path" to point directly at the cache/pool instead of /mnt/user/*. The /mnt/user path is a FUSE mount with, I believe, a custom driver that unites all filesystems into a single virtual one, similar to 'mergerfs' on mainline Linux distros. The point is, it can be very expensive in terms of CPU time, so you can probably take some burden off the CPU by going directly to the SSD mount (which does not use FUSE).

 

For example, one of my SSD 'pools' (it's really just a single SSD) is named "appstorage", so I changed "Default VM storage path" from "/mnt/user/domains/" to "/mnt/appstorage/domains/". If you had already created VMs with the old settings, you will need to either recreate them, or copy the Primary vDisk Location image to the new path and change the setting in the VM itself.
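
A quick way to see the difference is to compare the mount types (the pool name here is my example; filesystem types will vary):

    df -T /mnt/user/domains /mnt/appstorage/domains
    # /mnt/user/* reports type fuse.shfs (unRaid's FUSE layer);
    # the direct pool mount reports its native filesystem (btrfs/xfs)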

 

I agree. I was disappointed in unRaid coming from much more enterprise-oriented systems, but unlike you I had already paid when I discovered these issues. It is working for me now, but it is very inflexible compared to even Proxmox. Unraid does one thing really well, and that's storage; everything else is an afterthought. Can't complain too much about that, though.

-------

 

ECC RAM is not a hard requirement for ZFS. You can run without it; the fear is that bit flips caused by natural radiation (seriously, solar flares etc.) can corrupt the filesystem, and ZFS keeps critical data in RAM while it's running (unlike other filesystems). One bit flip can lead to data corruption of the whole array. So ECC RAM is strongly recommended, but not strictly required.

 

 

Edited by m00nman
On 3/23/2023 at 12:15 PM, DebrodeD said:

I obviously don't have enough cores/ram to just be assigning them away to a VM permanently, so would your method allow me to have my cake and eat it too?

 

I'll be honest, I have never tried to hibernate VMs before, as all my VMs are meant to be running 24/7. I know that when I forgot to turn off the sleep timer on a Windows VM, it would go to sleep but never wake up afterwards, so I had to force stop and restart it. Hibernate might be different. Unfortunately I can't try it, because the option is not available for me (I turned off sleep functionality in Windows, which might be why).

On 3/25/2023 at 12:57 PM, mitch98 said:

Running a Windows VM? No chance in my (albeit limited) experience on Unraid. 

 

Since you’re new to unraid, have you looked at Spaceinvaderone’s video guides?

There are tweaks you can do on the Windows side to get it to work better as an unraid VM. I had similar CPU spiking issues until I tweaked the MSI interrupt settings inside Windows.
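
For anyone searching: the usual way to do this (my summary, not necessarily the exact method covered in the videos) is the community "MSI Utility" tool, or setting the MSISupported value for the passed-through device directly in the VM's registry and rebooting it ("<your_device>" is a placeholder for the device's instance path):

    HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<your_device>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties
        "MSISupported" = dword:00000001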

The hyper-v changes in this thread also helped of course.

I’m not actually sure if the MSI interrupts were covered in this video series, could also have been in:

 

 

  • 1 month later...

Update on my Windows 11 VM:

 

It looks like I've been able to squeeze moderate performance out of a fresh Windows 11 VM now by using all of the tips outlined in this post as well as a few others that I will mention below.

 

I suspect the issues I've experienced are due to a number of factors, one of which is the Windows Core Isolation setting in Windows Security.

 

I have had a Windows 10 VM running for about a month now without any major issues and, while performance is not great, it's serviceable for my needs. 

 

The only difference I found between the VMs is that the Windows Core Isolation setting is turned OFF on the Windows 10 VM (which is working) and was turned ON in the Windows 11 VM that was experiencing issues.

 

I also believe that Bitlocker drive encryption may be a contributing factor as I also tested migrating an existing Windows 11 Hyper-V VM (with Bitlocker enabled) to Unraid which worked but was met with the 100% CPU utilization problem again despite any tweaks I made. I could've tried turning off Bitlocker on the VM to see if it helped but I decided to leave that rabbit hole for another day.

 

TL;DR:

 

I think there are a lot of "gotchas" with regard to Windows VM performance in Unraid, and it's not exactly straightforward. I guess some of these could be mitigated by including the XML changes mentioned in this post in the Windows 11 VM template.

  • 3 weeks later...

I tried this on 6.12.0-rc5 and it works great in some regards.
The idle consumption went from 100% to 20% on my Windows VM.
However, I am not able to open the VM page anymore.
I can still see the VMs in the Dashboard and stop/start them, but I can't change a VM anymore: when I click edit, no XML selection is loaded.
Is there any other way to change the XML of my VM once it is in this broken state?
This may also be a bug in 6.12.0-rc5.

Edited by DasMarx
host => vm
  • 2 weeks later...

I'd just like to add my experience fixing my Windows 11 VM once it got into a broken state and would not boot, because I was trying to fix the dreaded mouse lag that was slowing my Windows VM to a crawl with 100% CPU usage.

I tried all the other fixes I could find on the web, then I attempted to play with the MSI interrupts and my Windows VM became unbootable. (Be careful attempting to fix things with this; yours could break too.)

 

FYI - my Windows install was on a separate NVMe drive outside of Unraid.

 

I had to completely reset and wipe my Windows install. (I had previously broken it due to my PC cutting off in the middle of a Windows update.)

 

So I had to create a new Windows VM with the VirtIO drivers and Windows install ISO listed on the template, with USB selected, and I had to play with the install to get Windows to give me the blue screen and allow me to select the "reset this PC" option. (I did this within the Unraid OS; I did not reset my Windows drive outside of Unraid.)

I had to do all of this via the VNC option; attempting to boot with my GPU passed through, I was getting a solid black screen. Only after the PC completely reset the NVMe drive that had Windows on it and I got to the Windows 11 setup screen was I able to reset the VM, boot with the GPU passed through, and continue the install process.

 

Now my Windows VM is back to normal: no more lag or crazy 100% CPU spikes while trying to game. Sound works, and I didn't even have to touch the MSI interrupts. 😁

  • 4 weeks later...
On 4/30/2023 at 8:24 PM, mitch98 said:

I also believe that Bitlocker drive encryption may be a contributing factor as I also tested migrating an existing Windows 11 Hyper-V VM (with Bitlocker enabled) to Unraid which worked but was met with the 100% CPU utilization problem again despite any tweaks I made. I could've tried turning off Bitlocker on the VM to see if it helped but I decided to leave that rabbit hole for another day.

 

You may be onto something here. I have not experienced this myself, but I believe new ISOs for Windows 10 and 11 enable (but maybe don't activate?) BitLocker by default upon installation. I believe a TPM has to be present for BitLocker to be enabled during installation. I would think that if the host CPU supports AES-NI (or the AMD equivalent) the impact would be minimal, but then again some other emulation options may be required in the XML to mitigate certain I/O overhead - I don't know, to be honest. Either way, BitLocker can be disabled from the BitLocker menu in Windows (just click Start and search "BitLocker"). If you see a yellow triangle with an exclamation mark inside, you can turn BitLocker on, save the key, then turn it off - that will completely disable it. Also, I believe Rufus has an option to disable automatic BitLocker activation when creating a bootable install flash drive.
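
If you prefer the command line, the built-in manage-bde tool can do the same from an elevated prompt inside the VM (assuming C: is the encrypted volume):

    REM show whether the volume is encrypted
    manage-bde -status C:
    REM start decryption; BitLocker is fully off once decryption finishes
    manage-bde -off C: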

  • 1 month later...
On 7/29/2023 at 4:39 PM, showstopper said:

Do we think this would work for Windows Server? I tried changing this for one of my VMs, but I now struggle to log in via remote desktop (only VNC seems to work).

I run this on Windows Server 2022 and it works awesome.

Just changed what the OP said :)

I use RDP via Apache-Guacamole.

On 1/22/2023 at 4:35 PM, m00nman said:

I managed to overcome. I'd like to share the fixes with everyone, as they don't seem to be common knowledge.

 

Hi m00n, thank you very much for these tips. I implemented all of them, but it hasn't improved anything from what I can see. There's still something struggling in my system.

 

Backstory: I ran the all-in-one UnRaid server / Windows 10 VM gaming machine for over 3 years following SpaceinvaderOne's guides. Specs were a Ryzen 1600, then a 3600 (in 2020), with an MSI MEG Unify X570, a Gigabyte 1080 G1 (with downloaded VBIOS), and 32GB RAM, with Windows 10 installed on a WD Black NVMe that I passed through (meaning I could boot directly into Windows at any time if I wanted). This worked fantastically, and I hardly noticed a difference between the server/VM boot and booting directly into Windows (bare metal, or whatever it's called). Had a few dockers like Plex running 24/7, and gamed heavily in the VM. I pinned all but 2 cores to the VM, but didn't isolate any cores (as I felt the 3600 didn't have enough cores to be isolating that many).

 

Fast-forward to 2023: I have replaced my 3600 with a 5800X3D, replaced the 1080 with a 6700XT, added a secondary 1TB NVMe that is passed through with the sole purpose of holding my Steam library, and moved to Windows 11 (meaning switching from CSM to Secure Boot, etc., and changing the UnRaid VM template from Win10 to Win11: i440fx to the latest Q35, TPM, etc.). I've kept the MEG Unify X570 and updated its BIOS. I made a new VM template, passed through everything, pinned all but 2 cores/threads to the VM, and loaded it up. It booted into Windows 11 fine... but right away I knew something major was up. My 5800X3D was running 20°C hotter (usually around 34-37°C when idle... but in the VM it is 50°C+), randomly spiking, and loading up just one core. I then jumped into the main game I usually play (War Thunder), in which I would typically get 60fps in the menu and 200+ FPS in an actual game @ 1440p (when booting directly into Windows with the new parts), but I was getting around 10FPS in the menu and around 60-120FPS in game, with random stutters here and there. I've messed with a few things like isolating the cores and even turning off all dockers, but it has made no difference. Since the upgrade, I have basically had to boot directly into Windows if I want to game at all.

 

With this guide: I have made all the changes here, and the CPU certainly isn't spiking as much, but it is still running hotter, and my FPS is still much, much lower than when booting directly into Windows. I'm not sure if the culprit is the CPU or the GPU. I'm actually suspecting the 6700XT, because I have other issues with it that I have to deal with: when I boot into UnRaid, I get a warning about not being able to change the power state of the GPU, and it seems to have the reset bug - I basically cannot close the VM; it just goes to a black screen with the VM staying on, and I have to force close the VM to get it to shut off. I have the correct VBIOS, made sure ReBAR is off in the BIOS, and have the latest (Pro) drivers installed. But I've also compared Cinebench R23 scores for the 5800X3D and there seems to be a pretty big difference in both single-core and multi-core. Obviously the VM is getting 2 fewer cores and threads (so 6 cores, 12 threads), but it seems to be a bigger issue than just that. I've included screenshots of the benchmarks (both 5800X3D entries in those lists are my scores - the higher being the stock direct-Windows test, and the other being the 6/12 VM test).

 

Any ideas of where I can go from here?

[screenshots: Cinebench R23 multi-core and single-core results - CR23_BB_MC.jpg, CR23_BB_SC.jpg]

On 8/1/2023 at 6:12 AM, Lebowski89 said:

 

Hi m00n, thank you very much for these tips. I implemented all of them, but it hasn't improved anything from what I can see. There's still something struggling in my system.

 

 

 

It's hard to say what the issue is. In your case, I would only do #1 from this guide. You don't need KSM if you don't have any other Windows VMs, and you probably want to actually pin the cores to get maximum gaming performance if you don't care about docker/other VM performance.

 

As for the bad performance, I would try to put your 1080 back in and see if it resolves these issues for you. If it does (and I'm guessing it will), it is probably the AMD card. I don't really have any experience passing through AMD GPUs, but I read it's a pain in the butt.

Edited by m00nman
5 hours ago, m00nman said:

As for the bad performance, I would try to put your 1080 back in and see if it resolves these issues for you. If it does (and I'm guessing it will), it is probably the AMD card. I don't really have any experience passing through AMD GPUs, but I read it's a pain in the butt.

 

I suspect it does have something to do with it. The big issue I've noticed is a reset bug, but you can't call it a reset bug... because they say this generation of cards doesn't suffer from it, lol. Basically, if I shut down or reset the VM, it goes to a black screen, refuses to shut off, then gives me the ol' code 127 error message. And when I load UnRaid, it whines about not being able to change the power state of the GPU. Basically similar to the issues experienced in these topics:

 

https://forum.level1techs.com/t/6700xt-reset-bug/181814

https://forum.level1techs.com/t/sapphire-rx-6700xt-nitro-unable-to-reset-after-shutting-down-vm/180360

 

I will have to make a separate topic on it with my diagnostics, etc., but I see a bunch of other topics about this that have gone unanswered, where the people with the issues never sorted them out and just tried different GPUs instead. I really don't want to give Nvidia any more money, tbh.

Edited by Lebowski89
7 hours ago, Lebowski89 said:

 

I suspect it does have something to do with it.

I would check out the multiple suggestions on the Proxmox wiki page: https://pve.proxmox.com/wiki/PCI(e)_Passthrough

 

Did you switch to UEFI (OVMF) for the VM as well? I believe you can run "mbr2gpt /convert /allowFullOS" in the VM before switching, if you don't want to reinstall.
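
For anyone following along, the usual sequence from an elevated prompt inside Windows is to validate first; /allowFullOS lets mbr2gpt run from the booted OS rather than WinPE:

    REM check that the system disk can be converted before touching it
    mbr2gpt /validate /allowFullOS
    REM convert from MBR to GPT, then switch the VM firmware to OVMF
    mbr2gpt /convert /allowFullOS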

 

I also had a weird issue with a Proxmox rig and an Nvidia card stuttering. I put the card into a different PCIe slot (connected via the chipset, not directly to the CPU; I believe it's actually an x8 slot... but it's just for video playback) and the issue went away.

 

Also, apparently, passthrough works better with some brands of cards than others:

https://forum.proxmox.com/threads/vfio_bar_restore-reset-recovery-restoring-bars.107318/#post-462777

 

I hear you about Nvidia being greedy. I will probably not buy an Nvidia card again either, but at this point I have a 3080 and a 3070, so it will probably be a while till I upgrade again. I used to have an RX 580 and loved that card and the AMD Adrenalin software; I thought it was much better than Nvidia's solution. The screen tear and input lag reduction option (can't remember the actual name) also worked much better compared to Nvidia's (I don't have a {g-,free-}sync monitor or VRR TV).

Edited by m00nman
8 hours ago, m00nman said:

Did you switch to UEFI (OVMF) for the VM as well? I believe you can run "mbr2gpt /convert /allowFullOS" in the VM before switching, if you don't want to reinstall.

 

I also had a weird issue with a Proxmox rig and an Nvidia card stuttering. I put the card into a different PCIe slot (connected via the chipset, not directly to the CPU; I believe it's actually an x8 slot... but it's just for video playback) and the issue went away.

 

Thanks for the link, I will study it.

 

I didn't even think of that. Tbh, I had been running a Windows 10 VM on a passed-through WD Black NVMe for 3 years. Moving to Windows 11 was a case of upgrading within the VM and then swapping the Windows 10 template for a Windows 11 template (switching to TPM, Q35, etc. in the process). I used DDU to remove the Nvidia drivers and plopped the AMD stuff on top. I've noticed a few oddities, like the OS being reported to applications and benchmarking apps as Windows 10, despite obviously being Windows 11. I probably should do a Windows reinstall before doing anything too drastic, especially now that I'm passing through a second NVMe that holds my Steam library separate from the OS drive.

 

All of my PCIe slots (MSI X570 MEG UNIFY) have something going on with them: GPU in the first slot, an LSI SAS9201-16e (connected to a 16-bay DAS) in the second, and a Mellanox ConnectX-3 10GB SFP+ NIC in the third. Speaking of which, I should probably pass through the motherboard's unused 2.5GbE port to the VM. I could juggle them around to different slots, but I would hope to be able to have the GPU for a gaming VM in the first slot without issues.

 

(I agree about Adrenalin, really enjoying it. I actually bought a G-Sync monitor in 2018 (Acer Predator XB-1 27" 1440p 144Hz) that has an earlier module that only works with Nvidia cards. I've given that monitor to my partner along with the 1080, and had to go out and buy an adaptive sync monitor that does both G-Sync and FreeSync, so I'm not tied down to any company. I realised my Acer Predator was G-Sync only AFTER buying the 6700XT, which was not very fun, so I had to spend more money than I bargained for. I probably could have just bought a 4070, but I am very happy with the new monitor.)

 

Edit: My 6700XT is a Gigabyte Eagle. I've seen people having issues with other Gigabyte 6700XT (AORUS) cards. I jumped on this model because it was at a great price; I didn't think there would be compatibility differences or passthrough issues between the different cards.

Edited by Lebowski89

Thanks so much for all these settings, they've helped a ton. My Windows 11 VM was running well but still stuttering randomly and having some CPU spikes. The final setting that completely removed the issue for me was this:

 

Change:

<cpu mode='host-passthrough' check='none' migratable='on'>

To:

<cpu mode='host-passthrough' check='none' migratable='off'>

 

As referenced below, this appears to enable invtsc, which speeds up Windows a ton!!! There might be a better way to accomplish this, but this setting gets my VM close to bare metal.
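
If you'd rather require the flag explicitly (not something I've tested; libvirt accepts a feature element inside the <cpu> block), it would look something like the sketch below. With host-passthrough and migratable='off', invtsc should be passed through automatically whenever the host CPU supports invariant TSC:

    <cpu mode='host-passthrough' check='none' migratable='off'>
      <topology sockets='1' dies='1' cores='MAX_CPU_NUMBER' threads='1'/>
      <!-- optional: refuse to start the VM if the host lacks invariant TSC -->
      <feature policy='require' name='invtsc'/>
    </cpu>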

 

For reference:

CPU: 5950x

Mobo: ASRock X570 Taichi

 

 

Edited by abc123
On 8/9/2023 at 1:40 PM, abc123 said:

Thanks so much for all these settings, they've helped a ton. My Windows 11 VM was running well but still stuttering randomly and having some CPU spikes. The final setting that completely removed the issue for me was this:

Thanks, this will probably help a few people on here who have GPUs passed through via PCIe. I didn't really notice any difference on my server Windows VMs; however, since unRaid doesn't support migrating a VM to a different host (like, for example, Proxmox does), it makes sense to disable migration by default anyway. I added your suggestion to the OP.

 

@Lebowski89 have a look at the post above, it may be the fix you are looking for.

Edited by m00nman
