Performance Improvements in VMs by adjusting CPU pinning and assignment


When GamingVM is OFF: VM1 and VM2 are free to use any of the 8 physical cores for their work.

When GamingVM is ON: VM1 and VM2 are only allowed to use physical cores 1-4; GamingVM is only allowed to use physical cores 5-8.

 

OK, yes, I understand now. So: assign 8 cores, but be able to scale those back when another VM starts. That's a very interesting idea and I didn't know it was possible. It would certainly be very useful. I had a similar idea a while back about an isolcpus capability post-boot, to be able to isolate and release CPUs to the host without a reboot. Having spoken to Limetech, this is something they have been investigating, but it isn't in Linux's capability set, so it would only be possible by manually manipulating some things in user space, and it isn't something they are currently working on.

 

So yes, any info on how to implement what you are talking about here would be most welcome.
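For context, libvirt can already repin a running guest's vCPUs without a restart via `virsh vcpupin <domain> <vcpu> <cpulist> --live`. A minimal sketch (not anyone's actual script; the domain name and pin map are made-up examples) that builds the commands a "GamingVM started" hook would run:

```python
# Sketch: build `virsh vcpupin ... --live` commands that repin a running
# VM's vCPUs on the fly. Domain name and pin map below are hypothetical.

def vcpupin_commands(domain, pin_map):
    """pin_map: {vcpu_index: "host cpulist"} -> list of virsh commands.

    `--live` applies the new pinning to the running guest, which is what
    lets us shrink VM1/VM2 onto fewer physical cores when GamingVM starts.
    """
    return [
        f"virsh vcpupin {domain} {vcpu} {cpuset} --live"
        for vcpu, cpuset in sorted(pin_map.items())
    ]

# Example: confine VM1's 4 vCPUs to physical cores 0-3.
for cmd in vcpupin_commands("VM1", {0: "0", 1: "1", 2: "2", 3: "3"}):
    print(cmd)
```

Each command would then be run with `subprocess.run(cmd.split())` on the host.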

 

I finally found some time to implement this for myself using Python:

 

https://gist.github.com/patrickjahns/cfa90a39883206e18fdaccfd9d2809f0

 

It can either be used via command line, or imported into other scripts.

 

The class is provided with different profiles (defined as JSON) and allows for switching between them.

When switching profiles, the previous configuration is saved to a JSON file and can easily be restored.

 

Examples

1) python vcpu.py  --vcpumap vcpumap.json --profile default --ignored_domains ['pandora']

2) python vcpu.py  --vcpumap vcpumap.json --restore --ignored_domains ['pandora']

 

In example 1) the profile "default" is applied to all VMs except the VM "pandora".

In example 2) the previous configuration is restored.

 

vcpumap.json

{
  "profilename": {
    "default": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
    "vm1": {"all": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]},
    "vm2": {
      "0": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],
      "1": [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
    }
  }
}

 

The JSON allows for creating profiles; each profile has a "default" configuration for all vCPUs. It can also contain per-VM entries (vm1, vm2). With these you define either a pinning for all vCPUs ("all"), or per vCPU (see vm2).
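To make the lookup order concrete, here is my reading of that layout as a small sketch (not code from the linked gist): each list is a bitmask over host CPUs, and a per-vCPU entry wins over the VM's "all" entry, which wins over the profile's "default":

```python
# Sketch of how such a profile could be interpreted. Each list is a
# bitmask over host CPUs; 1 means the vCPU may run on that host CPU.

def mask_to_cpuset(mask):
    """[1, 1, 0, 0] -> "0,1" (host CPUs the vCPU is allowed on)."""
    return ",".join(str(i) for i, bit in enumerate(mask) if bit)

def resolve_pinning(profile, vm, vcpu):
    """Per-vCPU entry wins, then the VM's "all" entry, then "default"."""
    vm_cfg = profile.get(vm, {})
    mask = vm_cfg.get(str(vcpu), vm_cfg.get("all", profile["default"]))
    return mask_to_cpuset(mask)

# Shortened 4-CPU example for illustration.
profile = {
    "default": [1, 1, 0, 0],
    "vm1": {"all": [0, 0, 1, 1]},
    "vm2": {"0": [1, 0, 0, 0]},
}
print(resolve_pinning(profile, "vm1", 0))  # 2,3
print(resolve_pinning(profile, "vm2", 0))  # 0
print(resolve_pinning(profile, "vm3", 0))  # 0,1 (falls back to default)
```

The resulting cpuset strings are exactly what `virsh vcpupin` or libvirt's pinVcpu API expects.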

 

This should be quite flexible and help anyone else doing this.

 

My setup:

Several VMs + 1 "GamestreamingVM" (uses Nvidia GameStream)

 

I use a Flask application providing a URL (http://ip/switchprofile/profile) to switch between defined profiles.

Whenever my Shield (or Moonlight) connects, an AutoHotkey script on the gaming VM detects that "nvstreamer.exe" is running and calls http://ip/switchprofile/gaming, which loads the gaming profile. The script also recognizes disconnects ("nvstreamer.exe" is not running) and calls http://ip/restoreprofile to restore the previous configuration.
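The post uses Flask; for anyone who wants to avoid the dependency, a stand-in sketch with only the Python standard library (the route names mirror the URLs above, and the "shell out to vcpu.py" step is a placeholder, not the poster's actual code):

```python
# Dependency-free sketch of the profile-switching web endpoint, using
# the stdlib instead of Flask. The vcpu.py call is a placeholder.
from http.server import BaseHTTPRequestHandler, HTTPServer

def route(path):
    """Map a request path to an action; returns None for unknown paths."""
    if path.startswith("/switchprofile/"):
        return ("switch", path.rsplit("/", 1)[1])
    if path == "/restoreprofile":
        return ("restore", None)
    return None

class PinHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        action = route(self.path)
        if action is None:
            self.send_response(404)
            self.end_headers()
            return
        # Here you would invoke vcpu.py with the chosen profile,
        # e.g. subprocess.run([...]), before replying.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(repr(action).encode())

if __name__ == "__main__":
    # Uncomment to serve on the LAN (as the AutoHotkey script expects):
    # HTTPServer(("0.0.0.0", 8080), PinHandler).serve_forever()
    pass
```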

 

 

 

 

The SSD is mounted outside the array using the Unassigned Devices plugin, as /mnt/disks/Corsair_Force_GS_240GB. Formatted as XFS.

 

XML

<domain type='kvm'>
  <name>Windows 10 - Gaming</name>
  <uuid>009382e0-d3d1-2e08-f785-8c0dab461393</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='12'/>
    <vcpupin vcpu='1' cpuset='13'/>
    <vcpupin vcpu='2' cpuset='14'/>
    <vcpupin vcpu='3' cpuset='15'/>
    <vcpupin vcpu='4' cpuset='28'/>
    <vcpupin vcpu='5' cpuset='29'/>
    <vcpupin vcpu='6' cpuset='30'/>
    <vcpupin vcpu='7' cpuset='31'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <loader type='pflash'>/usr/share/qemu/ovmf-x64/OVMF-pure-efi.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/Corsair_Force_GS_240GB/Windows 10 - Gaming/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/PROGRAMS/IMAGES/Windows_10_x64.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/PROGRAMS/IMAGES/virtio-win-0.1.117.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:a0:11:6c'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Windows 10 - Gaming.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc31c'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0037'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

If you are using the SSD exclusively for the VM, you should pass the disk through and install the OS directly to it rather than use a vdisk.

 

I am also trying to get the best performance I can from a Win10 gaming VM on an SSD, accessed using Unassigned Devices. Currently I am using a vdisk but if the performance gains are significant then I would pass through the entire SSD.

 

Are the gains significant? And what is the best practice for passing through the entire disk?


I do note the start of the thread suggests using OVMF, but I'm fully invested in SeaBIOS VMs...

 

Does CPU pinning with SeaBIOS i440fx-2.3 work at all?

For those with working CPU pinning: when running a CPU benchmark, do only the pinned CPUs ramp up in the Dashboard?

For a dual-socket config, would you recommend pinning the emulator to the first pair of the socket holding the pinned cores, or keeping the emulator pinned to the first few cores of socket 0?

Since I've got 16 cores without HT, should I dedicate a core pair (or even a core pair per socket) to the emulator pin?

 

=======

 

Longer read:

 

Life was pretty good. I was using CPU pinning without isolcpus and without emulator pin, but noticed that when I ran a Cinebench CPU test, all cores lit up, and that always irked me. Either CPU pinning wasn't really working, OR the emulator was using way more processing time than expected. Trying to sort it all out now, but not having the best of luck. The Windows 10 VM is down almost 40% in the Cinebench CPU test (I suspect this may be due to the Hyper-V sockets/cores/threads setting, which I've put back to 1/4/1, but the Cinebench test still shows the system thinks it's 1/2/2). The Windows 7 and Server VMs are all within 10%, which may be a good result considering emulator pin; under multiple stresses I'd hope to see better overall performance.

 

Another note about the emulator: in that Windows 10 VM with 4 cores, while running a CPU benchmark, my processor shows 25% utilization. My system at idle is at about 4%. Maxing out 4 cores should add another 12.5%, so it appears the emulator is using ~7.5% of CPU time. The emulator working this hard seems to correlate with the behavior of all cores lighting up when running a Cinebench CPU benchmark.

 

--dimes

Lenovo D30

2 x e5-2670 v1 @ 2.6Ghz

96 GB RAM

Supermicro AOC-SASLP-MV8

Intel® SSD DC S3500 Series 800 GB Cache

11.5TB Protected Storage

2 x gigE, on MB

NEC uPD720200 Passed USB 3, on MB

AMD Radeon HD 8570 Passed GPU

Nvidia NVS 295 Passed GPU

 

 

 

 


Are the gains significant? And what is the best practice for passing through the entire disk?

 

Yes, I put all my games on a passed-through disk (D:) but have the OS drive (C:) on a vdisk.

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/ata-ST31000528AS_6VP41EPS'/>
  <target dev='hdd' bus='virtio'/>
</disk>

 

You can adjust the above code for your needs. ata-ST31000528AS_6VP41EPS is my disk's ID; you will need to put in yours here.
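You can list your own disk IDs with `ls /dev/disk/by-id/`. As an illustration only (a hypothetical helper, not part of any plugin), substituting your ID into that snippet can be scripted:

```python
# Hypothetical helper: render the <disk> element above for your own
# drive. Find the disk_id value with `ls /dev/disk/by-id/` on the host.

def disk_xml(disk_id, target_dev="hdd"):
    """Build a libvirt <disk> element passing a whole drive through."""
    return (
        "<disk type='block' device='disk'>\n"
        "  <driver name='qemu' type='raw' cache='writeback'/>\n"
        f"  <source dev='/dev/disk/by-id/{disk_id}'/>\n"
        f"  <target dev='{target_dev}' bus='virtio'/>\n"
        "</disk>"
    )

print(disk_xml("ata-ST31000528AS_6VP41EPS"))
```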


 

I isolated some CPUs to be used by the VM from Linux with the following in the syslinux configuration on the flash drive:

 

append isolcpus=2,3,6,7 initrd=/bzroot

 

This tells Linux that the physical CPUs 2,3,6 and 7 are not to be managed or used by Linux.

 

Apologies for the newbie question, but do I just add this line to the bottom of the syslinux.cfg file, below the:

 

label Memtest86+
  kernel /memtest

 

Thanks


 

Apologies for the newbie question, but do I just add this line to the bottom of the syslinux.cfg file, below the "label Memtest86+" section?

Each labeled section of syslinux.cfg is a menu item on your boot menu, so adding it there at the end would just be adding it to the memory test!

 

You want to edit the append line for the menu section that you plan to run.
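For illustration, the edited boot entry might look like this (the exact label text and existing append contents vary by unRAID version, so treat this as a sketch and only add the isolcpus parameter to the line that is already there):

```
label unRAID OS
  menu default
  kernel /bzimage
  append isolcpus=2,3,6,7 initrd=/bzroot
```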


 

You want to edit the append line for the menu section that you plan to run.

 

Got it - I see the logical place to put it now. Really appreciate everyone being so patient with me over the last couple of weeks.


This is what unRAID auto-generates:

 

 

<vcpu placement='static'>10</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='11'/>
    <vcpupin vcpu='1' cpuset='12'/>
    <vcpupin vcpu='2' cpuset='13'/>
    <vcpupin vcpu='3' cpuset='14'/>
    <vcpupin vcpu='4' cpuset='15'/>
    <vcpupin vcpu='5' cpuset='27'/>
    <vcpupin vcpu='6' cpuset='28'/>
    <vcpupin vcpu='7' cpuset='29'/>
    <vcpupin vcpu='8' cpuset='30'/>
    <vcpupin vcpu='9' cpuset='31'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='5' threads='2'/>
  </cpu>

 

but I think it needs to be changed. Before I do so, I would like confirmation, as I have been having terrible luck with stuff breaking recently.

 

I'd like to add

 

<emulatorpin cpuset='0,16'/>

 

and change  topology to

 

<topology sockets='1' cores='10' threads='1'/>

 

Is that correct?

Can I ask a noob question??? ;-)

 

I have 1 CPU, 4 cores (2 threads per core), 8 threads total.

What is the best way to assign them to 2 VMs? (Windows 10 / Mac OSX, no passthrough at the moment)

1- Can I assign 2 cores (4 threads) to one VM and 2 cores (4 threads) to the other VM?

2- Must I reserve 1 core for unRAID? Must I add "<emulatorpin cpuset='0,4'/>" for the first core?

3- Is it possible to assign the same cores to 2 VMs? Will they share processing power between the VMs? Is it recommended?

4- Why is it not recommended to assign more than 8 GB of RAM?

 

Thank you

Gus

 

*** FOUND AN INTERESTING THREAD ABOUT THIS:

http://lime-technology.com/forum/index.php?topic=51939.0

 

Gus

 


I wanted to post my experiences with VMs and adjusting the CPU pinning. My issue had more to do with audio delay when streaming any kind of video, whether on YouTube, Twitch, Plex, etc.

 

When assigning CPU cores to my VM there was notably improved performance, but the lag and delay with video streaming caused the audio and video to fall so badly out of sync that I literally made my Bluetooth soundbar my PC speakers. And even that had an occasional hiccup, but it was tolerable.

 

I discovered that this lag was really caused by the VM using the cache drive. Performance plummets when the cache drive is also being used for other things, like file transfers or other VMs running.

 

I decided to continue using my physical machine's drive: I passed that drive through and built a VM around it. All the audio issues were gone. Combined with CPU pinning, I can't tell the difference between my VM and my original physical machine (before it had Unraid).

 

Just thought I'd add my experience with CPU pinning and passing through the entire dedicated OS drive instead of using the cache drive.

 

 


Were you using /mnt/cache/share or /mnt/user/share mappings for your VM image settings?


Were you using /mnt/cache/share or /mnt/user/share mappings for your VM image settings?

 

hey! Which one should we be using??


Were you using /mnt/cache/share or /mnt/user/share mappings for your VM image settings?

 

hey! Which one should we be using??

 

If your VM images are on the cache, then use /mnt/cache/sharename.


Is there any guide or wiki page on this? It's making my brain melt...

 

I have an 8C/16T CPU on the way, and want to ensure I can run 2 gaming VMs with passthrough GPUs and 2C/4T each. Of the remaining 4C/8T, I'd like to ensure Plex and other Dockers have some resources (I don't often transcode, thankfully), but perhaps also a 3rd VM on occasion. Do I just pin cores for the gaming VMs and the emulator pin, and leave the other 3 real cores "floating" for anyone to use when needed?


My VM is on my "unassigned device".

My CPUs show up as follows. How would I assign my VM if I want 4-6 CPUs for it?

 

 

cpu 0 / 12
cpu 1 / 13
cpu 2 / 14
cpu 3 / 15
cpu 4 / 16
cpu 5 / 17
cpu 6 / 18
cpu 7 / 19
cpu 8 / 20
cpu 9 / 21
cpu 10 / 22
cpu 11 / 23


I have my CPUs pinned as shown in the screenshot:

 

Green = UNRAID and two virtual 2012R2 servers on the cache drive no pass-through

Yellow = Gaming VM1

Orange = Gaming VM2

 

This gives both of my gaming VMs 4 physical CPUs, and I don't currently have any issues playing games, except maybe Star Citizen, but that is still in alpha. I also use VM1 as my primary work computer and it works great at that as well, though normally my work stuff is not CPU-heavy.

 

I gave unRAID the first two CPUs and their pairs, am over-utilizing them for the two 2012 R2 servers, and have not had any issues yet.

[screenshot: CPU pinning assignments]


It really looks like this -

 

There are only 6 cores; each core has two threads, and each core can handle the requests from both of its threads with very minimal impact.

 

Both gaming VMs can be playing games simultaneously and both work very well; however, I am using a Xeon E5-1650 v3 CPU @ 3.5 GHz, so that might have something to do with the type of CPU being used.

 


Hi, does CPU pinning also apply to an AMD FX-8350?

Sorry if this has been answered before, but I did not find anything about it.

Rgds

 

sent from Tapatalk

 

 


Looking at the number of threads you have, you should have no problem giving both threads per core to a VM. I do mine the way I do because of the limitation on how many threads I have, and it works well for my environment. For example, these rows from your list are thread pairs sharing a core:

cpu 8 / 20
cpu 9 / 21
cpu 10 / 22

 

I believe the FX-8350 is an AMD CPU, and I don't believe AMD has HT; however, you may still benefit from pinning the first core to unRAID and isolating the other cores for the VM for better performance.
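You can check whether logical CPUs are HT/sibling pairs on the host itself: Linux exposes each CPU's siblings in /sys/devices/system/cpu/cpuN/topology/thread_siblings_list as a string like "0,4" (or a range like "0-1"). A small sketch with a parser for that format (the sysfs read at the bottom only works on a Linux host):

```python
# Parse Linux's thread_siblings_list format to find which logical CPUs
# share a physical core. On a CPU with no SMT/HT, each list has one entry.

def parse_siblings(text):
    """'0,4' or '0-1' -> sorted list of logical CPU ids."""
    ids = []
    for part in text.strip().split(","):
        if "-" in part:
            lo, hi = part.split("-")
            ids.extend(range(int(lo), int(hi) + 1))
        else:
            ids.append(int(part))
    return sorted(ids)

print(parse_siblings("0,4"))  # [0, 4]
print(parse_siblings("8-9"))  # [8, 9]

# On a Linux host you would read the real value, e.g.:
# open("/sys/devices/system/cpu/cpu0/topology/thread_siblings_list").read()
```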


OK, I will give it a try and report back. Thanks for the support.

 

sent from Tapatalk

 

 


I have a problem with my CPU settings, I think. Sometimes on my Windows VM my mouse and keyboard inputs stutter, and sometimes the VM itself does too.

 

I want to set up my Windows VM as a gaming VM and my OSX VM for work. Could someone help me? Should I buy a newer CPU/mobo with more cores?

 

That's my CPU pairing:

cpu 0 <===> cpu 4
cpu 1 <===> cpu 5
cpu 2 <===> cpu 6
cpu 3 <===> cpu 7

 

And these are my VM settings:

 

Windows

<vcpu placement='static'>3</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='4'/>
  <vcpupin vcpu='2' cpuset='5'/>
</cputune>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='3' threads='1'/>
</cpu>

 

OSX

<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='6'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='2' threads='2'/>
</cpu>

I have a problem with my CPU settings, I think. Sometimes on my Windows VM my mouse and keyboard inputs stutter... Should I buy a newer CPU/mobo with more cores?

 

Can you post what hardware you are using, please? Looking at how many vCPUs you have, I am guessing it is a quad core with HT. Just my recommendation: I would use a minimum of a 6-core with HT, at 3.0 GHz or faster.

 

It's a trade-off between fewer but faster cores and many but slower cores.

 

You will want to try giving unRAID CPUs 0 and 4, and hide all the other cores from unRAID for VM assignment using isolcpus=1,2,3,5,6,7.

 

You could try different ways to assign your vCPUs to your VMs:

 

1. Give the gaming VM CPUs 1,5,2,6 and the work VM CPUs 3,7

2. Give the gaming VM CPUs 1,2,3 and the work VM CPUs 5,6,7

 

You might also want to edit the VM templates and add <emulatorpin cpuset='0,4'/>.
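In the libvirt XML, <emulatorpin> goes inside the <cputune> block, alongside the vcpupin lines. A sketch (the cpuset values here are just the cores reserved for unRAID in this example; use your own):

```
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <emulatorpin cpuset='0,4'/>
</cputune>
```

This keeps QEMU's own emulator threads off the cores the guest's vCPUs are pinned to.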

 

 

 


Thank you very much.

I just added my hardware to my signature.

I will try your recommendation and give feedback here.

I hope I can get it running without lag; if not, I'll need to use a second PC as a work PC.

