isolate CPU



I'm trying to isolate CPUs for individual VMs; however, when I edit the XML, the modification doesn't save. I altered the boot config to isolate them from the OS:

 

append isolcpus=2-11,14-23 initrd=/bzroot,/bzroot-gui
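
As a sanity check, assuming the standard util-linux tools are available on the Unraid console, the isolation can be verified after a reboot:

cat /proc/cmdline   # should include isolcpus=2-11,14-23
taskset -pc 1       # PID 1's affinity list should exclude the isolated CPUs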

 

Then I edit the XML from

 

  <cputune>

    <vcpupin vcpu='0' cpuset='8'/>

    <vcpupin vcpu='1' cpuset='9'/>

    <vcpupin vcpu='2' cpuset='20'/>

    <vcpupin vcpu='3' cpuset='21'/>

  </cputune>

 

to

 

  <cputune>

    <vcpupin vcpu='0' cpuset='8'/>

    <vcpupin vcpu='1' cpuset='9'/>

    <vcpupin vcpu='2' cpuset='20'/>

    <vcpupin vcpu='3' cpuset='21'/>

    <emulatorpin cpuset='8,20'/>

  </cputune>

 

But when I click on "view XML" with the VM started, the result is the following:

 

  <cputune>

    <vcpupin vcpu='0' cpuset='8'/>

    <vcpupin vcpu='1' cpuset='9'/>

    <vcpupin vcpu='2' cpuset='20'/>

    <vcpupin vcpu='3' cpuset='21'/>

  </cputune>
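
A quick way to compare what was written against what libvirt actually stored, assuming virsh is available (it ships with Unraid; the VM name here is just an example):

virsh dumpxml "Player 2" | grep -A 6 cputune   # what libvirt actually kept
virsh edit "Player 2"                          # edit the persistent definition directly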

 

Unraid version 6.3.0-rc5

 

 


Remember, when using emulatorpin, to pin threaded pairs (if you are not already doing this); a quick check for the pairing is below.
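
One way to see which host threads form a pair, assuming the usual sysfs topology files (any modern Linux kernel exposes these):

cat /sys/devices/system/cpu/cpu8/topology/thread_siblings_list   # e.g. prints 8,20 on this layout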

 

Also, if using multiple VMs at once, I find it better to pin one group of CPU cores to all the VMs for their emulation functions.

By this I mean: if you would normally pin one core (2 threads) to one VM and another core to another VM for the emulatorpin function, it is better to pin both of those cores as a group for the emulatorpin of both VMs, so they share those 2 cores.

For example:

 

VM 1 might be

<emulatorpin cpuset='8,20'/>

 

and VM 2

<emulatorpin cpuset='9,21'/>

 

it is better for both VM 1 and VM 2 to have

 

VM 1 and VM 2:

<emulatorpin cpuset='8,9,20,21'/>

 

I did something similar when I made the video on running 3 gaming VMs at once: http://lime-technology.com/forum/index.php?topic=53169.0


First of all, let me thank you for the answers. Now let me see if I got this right. I have 5 gaming VMs, with 2 cores and 4 threads assigned to each of them, as listed below:

 

cpu 0 / 12 Unraid

cpu 1 / 13 Unraid

cpu 2 / 14 vm1 emu*

cpu 3 / 15

cpu 4 / 16 vm2 emu*

cpu 5 / 17

cpu 6 / 18 vm3 emu*

cpu 7 / 19

cpu 8 / 20 vm4 emu*

cpu 9 / 21

cpu 10 / 22 vm5 emu*

cpu 11 / 23

 

I followed the same approach as in your video, with optimizing performance in mind. As such, should I set the emulatorpin to 2,4,6,8,10 in every VM?

 

 <emulatorpin cpuset='2,4,6,8,10'/> 

 

Do you have any other tips for the disks, or is there some detail that I might have overlooked? Below is an example of one VM:

 

<domain type='kvm'>
  <name>Player 2</name>
  <uuid>c513c1e9-31eb-5467-881b-c140595ad758</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='20'/>
    <vcpupin vcpu='3' cpuset='21'/>
    <emulatorpin cpuset='2,4,6,8,10'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/sde'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:8b:3a:17'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x84' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x10c4'/>
        <product id='0x8105'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x001b'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1c4f'/>
        <product id='0x0002'/>
      </source>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 


I guess you have a 12-core CPU.

When using emulatorpin, it is best not to pin cores/threads that are being used by the VM itself.

The point is to allow the VM to use all of its cores for the guest OS (Windows, in your case) and not have any overhead handling the emulation calls for the VM.

I think running 5 gaming VMs off 12 cores will be 'very challenging', as I don't think there are enough resources for that, but I may be wrong.

If you are going to use emulatorpin, then pin it to the cores you have marked for Unraid:

<emulatorpin cpuset='0,1,12,13'/>

 

When running the 5 VMs, stop all Docker containers to keep things as light as possible (a sketch of one way to do this follows).
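
A possible one-liner for that, assuming the standard Docker CLI (stops every running container):

docker stop $(docker ps -q)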

 

If using append isolcpus, then it's right to isolate only the cores that are being used for the VMs.

Don't isolate the cores used for the emulation calls, so having it like you did below should be fine:

append isolcpus=2-11,14-23 initrd=/bzroot,/bzroot-gui

But you may be fine not isolating the CPUs. Try both and see what works best for you.

I keep multiple entries in my syslinux config file so I can easily switch on reboots. Just add another label and it will show up as a selectable option at boot.

For example, mine looks like this:

label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=8086:15a1 initrd=/bzroot
label unRAID isolated 12 cores
  kernel /bzimage
  append isolcpus=2-13,16-27 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
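
(These labels live in the syslinux config on the flash drive; on Unraid that is normally /boot/syslinux/syslinux.cfg.)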

 

Also, after you have set the VMs up, you don't need to emulate a DVD/CD drive, so scrap these lines:

 

<disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

 

You would be best passing the disk through by device ID rather than using <source dev='/dev/sde'/>.

 

One of mine looks like this

 

<disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-ST32000542AS_5XW1HXX5-part2'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

 

Here the ID of the disk is ata-ST32000542AS_5XW1HXX5, and the -part2 on the end passes through the second partition of the disk. Without the -part2 on the end, the whole disk (all partitions) would be passed through.
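
To see which IDs are available, you can list the udev-created symlinks (each entry points at its /dev node):

ls -l /dev/disk/by-id/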

   


Quick add on scrapping the optical drives: I tend to keep exactly one optical drive around after I'm fully installed, with no <source> line, implying an empty drive. This way I can use virsh or virt-manager to hot-mount images if necessary. (You can't hot-attach or hot-detach drives, though.) A sketch of both is below.
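
A minimal sketch of the idea, reusing the "Player 2" VM from above and a hypothetical ISO path. The empty drive definition:

<disk type='file' device='cdrom'>
  <driver name='qemu' type='raw'/>
  <target dev='hda' bus='ide'/>
  <readonly/>
</disk>

and then mounting/ejecting an image while the VM runs:

virsh change-media "Player 2" hda /mnt/user/isos/example.iso --insert
virsh change-media "Player 2" hda --eject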


I have my optical drive powered by a SATA power connector but connected to a SATA-to-USB adapter that plugs into an internal USB port; this way I can attach it to and detach it from VMs as I need.


I think you two are talking about two different things: a physical drive vs. a virtual drive, CDs vs. ISO image files.

Once more, thank you for the answers. I removed the vdisks from the VMs.

I used the device by-id and it worked perfectly, but then a new doubt arose: if I assign -part1 and -part2 to two different VMs, will they boot Windows normally, or can these disks only be secondary?

My PC as it is right now:

32 GB RAM

12c/24t (2x Xeon E5-2630 v2)

3 TB HDD data disk

40 GB HDD cache disk

No parity due to the 6-disk license limitation.

 

VMs:

P1 - 3c, 6 GB RAM, 120 GB SSD full passthrough + AMD Radeon RX 480

P2 - 3c, 6 GB RAM, 120 GB SSD full passthrough + GeForce GTX 550 Ti

P3 - 2c, 6 GB RAM, 500 GB HDD full passthrough + AMD Radeon HD 6750

P4 - 2c, 4 GB RAM, 500 GB HDD full passthrough + AMD Radeon HD 5450

 

I use various Radeon cards because when I shut down the VM that has the Nvidia card, it won't start again unless I do a full reboot of the system.
