CPU Isolation Question - Gaming VM



Hi All,

 

I'm looking to upgrade my unRAID server hardware with an Intel i5-8600K and a GTX 1070, and I would like to set up a gaming VM with the 1070 passed through. I've been reading about CPU isolation, but I'm having a hard time figuring out the best way to do it. The i5-8600K has 6 cores, so I'm thinking of isolating 4 of them for the gaming VM and leaving the other 2 for unRAID/Docker. I am currently using the following Docker containers:

 

Plex - one local stream constantly on (Direct Play). Sometimes 1-2 remote streams, which would be transcoding. Remote streams are really hit or miss (I share with friends), but I've never had more than 2 at the same time.

Sonarr

Radarr

Nzbget

Unifi Controller

Letsencrypt/Duck DNS (reverse proxy)

Ombi

Plexpy

 

Does anyone have experience with a gaming VM on a 6-core processor, and do you have a good recommendation for CPU isolation?

Link to comment

Similar situation: an i7-8700K and a GTX 1080. I was able to assign 4 cores to my VM, and I still don't know how that's affecting my Plex transcodes, as no one has really streamed remotely yet. As far as the VM goes, I'm getting stuttering during CPU-intensive games. I don't want to pin my cores, though, because if the VM is off I'd still like Plex to be able to access all the cores.

Link to comment

To chime in here, I have been using 2 gaming VMs in unRAID since the unRAID 6 release. Initially it was all perfectly smooth sailing, with rarely ever a hiccup. Over the past, I don't know, 6-12 months I have been getting sound stuttering, lock-ups, and audio/video desync issues left and right, with old AMD 6450 cards as well as new Nvidia 1060s. No amount of tinkering has fixed any of it yet, nor has the most recent unRAID update.

 

I personally am giving up and will be transitioning away from gaming/video passthrough on my unRAID box until the issues are resolved.

 

Just my $0.02. 

Link to comment

I have a 4-core Xeon and was sharing cores between Plex and the VM.

 

It worked pretty well until I was syncing some Plex content with a tablet, which triggered a lot of transcoding and seriously affected VM performance.

 

So I specified cores for the Plex docker and omitted one core. Plex has three cores and the VM has three cores, but they only overlap on two. This means that however busy Plex gets, the VM will still have one core entirely to itself, and vice versa. Works pretty well.
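
 

If it helps to see that concretely, this kind of overlap can be expressed with Docker's --cpuset-cpus flag (a sketch, not the poster's actual config; the core numbers and the plexinc/pms-docker image are my assumptions, and on unRAID you would normally set this through the container's CPU pinning options or Extra Parameters field):

    # Sketch: Plex restricted to cores 0-2 on a 4-core box.
    # The VM is pinned to cores 1-3 in its own config, so core 0 stays
    # Plex-only and core 3 stays VM-only, while cores 1-2 are shared.
    docker run -d --name plex --cpuset-cpus="0,1,2" plexinc/pms-docker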

Link to comment
  • 4 months later...

Hi,

 

Did anyone get anywhere with this? Any examples?

 

My situation: I want as much power as possible for a gaming VM (Windows 10). I use VR a lot, so it's quite intensive, and I also have Plex transcoding to think of.

 

I have an 8700K CPU.

 

Any advice/tips/dos/don'ts would be appreciated!

Link to comment
On 6/13/2018 at 2:09 AM, mbc0 said:

Did anyone get anywhere with this? Any examples?

 

My situation: I want as much power as possible for a gaming VM (Windows 10). I use VR a lot, so it's quite intensive, and I also have Plex transcoding to think of.

 

I have an 8700K CPU.

 

Any advice/tips/dos/don'ts would be appreciated!

 

It works; the basic idea is to manually allocate resources as best as possible for your configuration.

As for tips, here are some things I've seen around in forum posts and tend to follow:

    1. Leave your first CPU core/thread alone; reserve it for the unRAID host.

    2. It's a bad idea to pin the entire CPU to the VM unless you're running a multi-CPU system.

    3. Consider how much multitasking you want to do, i.e. if you want Plex transcodes AND a VM, allocate for each one. That raises a sub-question: how many simultaneous Plex streams? You'll want a thread per stream if you're transcoding (I think; perhaps it's two threads? I would do some digging on the Plex forums for the exact answer. I'm not a big Plex user myself, and I also have a lot of extra horsepower ;) )

    4. For best results with CPU pinning, try not to assign the same core or hyperthread to multiple simultaneously running VMs and/or Docker containers.

 

Your 8700K has 12 threads (six cores with hyperthreading), so I would probably start with something like:

cpu0 - unRAID, i.e. nothing explicitly assigned here.

cpu8-11 - your gaming VM (four virtual cores to start with).

That leaves cpu1-7 for unRAID and other tasks, within which you could do some optional pinning for Plex, say cpu2-4 (again, how many simultaneous transcodes do you need?).

If pinning cpus 2-4 is enough for your Plex needs, you could go back to your gaming VM config and add cpus 5 and 6, which will be more than plenty for VR (see the sketch after this list).

 

    5. Not a requirement, but I like to assign my VMs from the top of the CPU core list and work down; I just find them easier to remember there. I also figure the Linux/unRAID kernel would prefer to use cpu0, so I keep my tasks off of it.
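
 

To make the allocation above concrete, here is roughly what it looks like in the VM's libvirt XML (a sketch; the exact vCPU-to-thread mapping is my assumption, and on unRAID you would normally set this with the core checkboxes in the VM editor rather than by hand):

    <!-- Sketch: four vCPUs of the gaming VM pinned to host threads 8-11 -->
    <vcpu placement='static'>4</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='8'/>
      <vcpupin vcpu='1' cpuset='9'/>
      <vcpupin vcpu='2' cpuset='10'/>
      <vcpupin vcpu='3' cpuset='11'/>
    </cputune>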

 

Link to comment
On 1/16/2018 at 1:56 PM, NotYetRated said:

Over the past, I don't know, 6-12 months I have been getting sound stuttering, lock-ups, and audio/video desync issues left and right.

I had this too. In my case I found that somewhere, at some point, Windows had removed the MSI-IRQ registry entry for my graphics card's audio device. Once I made the registry change and rebooted my system, I was back to clean video and audio. It also took me a month to hunt down, and I only bothered to check after it was suggested by another community member. I was surprised, because previously (Windows 10 build 1603 and Windows 8) this fix was for "demonic" distorted sound.

 

Here's a copy/paste of instructions I made for myself the first time I ran into the issue (different symptoms); hopefully it's useful:

 

For the device you wish to assign, locate its PCI address identifier (this can be found when selecting the device from within the VM creation tool).

From the command line, type the following (replace 1:00.0 with your GPU device's address):

    lspci -v -s 1:00.0

Look for a line that looks like this:

    Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+

If the Enable setting is +, your device claims it is MSI capable and the guest VM using it has MSI enabled. If you cannot find a line that mentions MSI as a capability, your device does not support it. If the Enable setting is -, your device claims it is MSI capable, but the guest VM is NOT using it. The procedure for enabling MSI support from Windows is documented here: http://forums.guru3d.com/showthread.php?t=378044

How to fix it
Checking whether PCI devices are working in MSI mode.

Go to Device Manager. In the menu, click “View → Resources by type”. Expand the “Interrupt request (IRQ)” node of the tree and scroll down to the “(PCI) 0x… (…) device name” nodes. Devices with a positive IRQ number (like “(PCI) 0x00000011 (17) …”) are in line-based interrupt mode. Devices with a negative IRQ number (like “(PCI) 0xFFFFFFFA (-6) …”) are in message-signaled interrupt (MSI) mode.

 

Switching a device to MSI mode.

 

First, locate the device's registry key. Open the device's properties dialog, switch to the “Details” tab, and select “Device Instance Path” in the “Property” combo box. Write down the “Value” (for example “PCI\VEN_1002&DEV_4397&SUBSYS_1609103C&REV_00\3&11583659&0&B0”). This is the relative registry path under the key “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\”.

Go to that device's registry key (“HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_1002&DEV_4397&SUBSYS_1609103C&REV_00\3&11583659&0&B0”) and locate the subkey “Device Parameters\Interrupt Management”. For devices working in MSI mode there will be a subkey “Device Parameters\Interrupt Management\MessageSignaledInterruptProperties”, and in that subkey a DWORD value “MSISupported” equal to “0x00000001”. To switch a device from legacy mode to MSI mode, just add that subkey and value.
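
 

If you'd rather script the change than click through regedit, the same edit can be captured in a .reg file (a sketch reusing the example instance path from above; substitute your own device's path before importing):

    Windows Registry Editor Version 5.00

    ; Sketch: enable MSI mode for the example device instance path above.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\VEN_1002&DEV_4397&SUBSYS_1609103C&REV_00\3&11583659&0&B0\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
    "MSISupported"=dword:00000001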

 

Before adding the key and value (or changing “MSISupported” to “0x00000001” if the subkey and value already exist), perform safety steps such as a backup (at minimum, create a system restore point).

 

Tweak one device at a time, then reboot and check (1) that it is displayed in Device Manager as a correctly working device and (2) that its IRQ became negative. If either check fails, either remove the “MessageSignaledInterruptProperties” subkey (if you added it) or change “MSISupported” to “0x00000000”, and reboot.

 

In theory, if the device driver (or the platform/chipset) is unable to operate in MSI mode, it should ignore the subkey and value.

End of copy/paste.

 

With some luck, maybe it's the same issue.

Link to comment
  • 4 months later...

So, if I want to keep CPU 0 and its hyperthread, CPU 1, for Unraid, it would look like this?

 

Am I understanding the help tip correctly? At first I thought I would just pin CPU 0/1 and that would reserve it for Unraid.

 

**Update**

Yes, this is how CPU pinning for Unraid should look if you want to pin Unraid to cores 0/1 (Threadripper 1950X). I was able to verify by launching a terminal after a reboot and running htop from the command line.

 

**Update**

@Jcloud I was wrong in my previous update to this post.

 

Now I'm totally confused. With the settings shown in the screenshot above, I thought CPUs 0/1 would be reserved for Unraid only. Come to find out, CPUs 2-31 weren't being used at all when I ran a test with Handbrake. Instead, Handbrake was confined to CPUs 0/1 along with Unraid, and everything else was reading 0%.

 

Now that I've enabled CPU pinning on CPUs 0/1 with everything else untoggled, Handbrake is confined to CPU 0 only. Very strange.

[Screenshots: CPU pinning settings]

Link to comment
12 hours ago, Zer0Nin3r said:

Now I'm totally confused. With the settings shown in the screenshot above, I thought CPUs 0/1 would be reserved for Unraid only. Come to find out, CPUs 2-31 weren't being used at all when I ran a test with Handbrake. Instead, Handbrake was confined to CPUs 0/1 along with Unraid, and everything else was reading 0%.

 

Now that I've enabled CPU pinning on CPUs 0/1 with everything else untoggled, Handbrake is confined to CPU 0 only. Very strange.

No, you were right originally. Your screenshot showed that unRaid only had use of cores 0 and 1. What you're missing is that Docker is part of unRaid, so by default your Handbrake container also only has access to CPUs 0 and 1. You can pin Handbrake to one of the other isolated cores by editing the container, but you cannot pin it to multiple isolated cores and have it execute correctly across them.

 

i.e. isolating cores is generally only done when you want a VM to have exclusive access to those cores.
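
 

For anyone wondering how the isolation itself gets applied: it happens at boot. On unRAID this has classically meant adding isolcpus to the append line in /boot/syslinux/syslinux.cfg (a sketch; the core list 2-5 is just an example, and newer unRAID versions expose the same thing through a CPU isolation setting in the web UI instead):

    label unRAID OS
      menu default
      kernel /bzimage
      append isolcpus=2,3,4,5 initrd=/bzroot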

Link to comment
4 hours ago, Squid said:

What you're missing is that Docker is part of unRaid, so by default your Handbrake container also only has access to CPUs 0 and 1. You can pin Handbrake to one of the other isolated cores by editing the container, but you cannot pin it to multiple isolated cores and have it execute correctly across them.

  

i.e. isolating cores is generally only done when you want a VM to have exclusive access to those cores.

Okay, I think I have a better understanding now. Thank you. I found the thread on in-depth CPU pinning and will have to go over it in finer detail at some point.

 

So, CPU isolation is good when you want to run Docker applications in parallel with VMs while keeping the shared system resources tidy. In the case of Handbrake this is counterintuitive, because you'll be limited to a single core: CPU isolation won't give Handbrake access to multiple pinned cores. CPU isolation works great with applications that are not CPU/multi-thread intensive.

Link to comment
4 hours ago, Zer0Nin3r said:

Okay, I think I have a better understanding now. Thank you. I found the thread on in-depth CPU pinning and will have to go over it in finer detail at some point.

 

So, CPU isolation is good when you want to run Docker applications in parallel with VMs while keeping the shared system resources tidy. In the case of Handbrake this is counterintuitive, because you'll be limited to a single core: CPU isolation won't give Handbrake access to multiple pinned cores. CPU isolation works great with applications that are not CPU/multi-thread intensive.

I'm not quite sure you've got it; maybe you're just wording it wrong.

 

CPU isolation means that the host OS will not utilize the isolated cores. Generally you do that so that VMs have exclusive use (pin the VMs to those isolated cores).

 

You *can* pin a docker application to an isolated core, and it will work. But due to how Linux actually works under the hood, pinning a docker container to multiple isolated cores means the container will only execute on the lowest-numbered core. Generally you would pin a docker container to one or more cores that are not isolated.

 

The whole point of pinning is that a VM (or container) doesn't slow down if something else happens to require all of the system's resources.

Link to comment
