bb12489 Posted October 24, 2017 Share Posted October 24, 2017 Hey guys, I'm finally getting started with setting up a Gamestream VM to use with my Nvidia Shield TV. I think I've got the CPU pinning set correctly, but I'm hoping someone could give it a second look. My system is running dual Xeon L5640s (6 cores / 12 threads each), so I have 24 threads to work with. My thought was to isolate the last 3 cores (bolded below), which would give me 6 threads for the VM. Is my thinking correct? The only thing that looks off to me in the XML is the cputune section. Shouldn't this be showing 9,21,10,22,11,23? My thread pairing is as follows...

cpu 0 <===> cpu 12
cpu 1 <===> cpu 13
cpu 2 <===> cpu 14
cpu 3 <===> cpu 15
cpu 4 <===> cpu 16
cpu 5 <===> cpu 17
cpu 6 <===> cpu 18
cpu 7 <===> cpu 19
cpu 8 <===> cpu 20
cpu 9 <===> cpu 21
cpu 10 <===> cpu 22
cpu 11 <===> cpu 23

Quote

<domain type='kvm' id='6'>
  <name>Windows 10 - Gamestream</name>
  <uuid>0123ac52-2486-8760-f9d3-aafc8578e469</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='9'/>
    <vcpupin vcpu='1' cpuset='10'/>
    <vcpupin vcpu='2' cpuset='11'/>
    <vcpupin vcpu='3' cpuset='21'/>
    <vcpupin vcpu='4' cpuset='22'/>
    <vcpupin vcpu='5' cpuset='23'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/0123ac52-2486-8760-f9d3-aafc8578e469_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='3' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10 - Gamestream/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/Windows/10/16299.15.170928-1534.RS3_RELEASE_CLIENTPRO_OEMRET_X64FRE_EN-US.ISO'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio-win-0.1.126-2.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:87:f2:ca'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-6-Windows 10 - Gamestr/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

Quote Link to comment
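(On the cputune question above: the order of the vcpupin lines never changes the mapping. Each line binds one guest vCPU to one host thread, and isolation only cares about the resulting set of host threads. A quick standard-library sketch over a trimmed copy of the cputune block, just to make that concrete:)

```python
import xml.etree.ElementTree as ET

# Trimmed cputune block from the domain XML above
CPUTUNE = """
<cputune>
  <vcpupin vcpu='0' cpuset='9'/>
  <vcpupin vcpu='1' cpuset='10'/>
  <vcpupin vcpu='2' cpuset='11'/>
  <vcpupin vcpu='3' cpuset='21'/>
  <vcpupin vcpu='4' cpuset='22'/>
  <vcpupin vcpu='5' cpuset='23'/>
</cputune>
"""

root = ET.fromstring(CPUTUNE)
# guest vCPU -> host thread
pins = {int(p.get('vcpu')): int(p.get('cpuset')) for p in root.findall('vcpupin')}
print(pins)  # {0: 9, 1: 10, 2: 11, 3: 21, 4: 22, 5: 23}

# The set of host threads is identical whether the lines are written
# 9,10,11,21,22,23 or interleaved as 9,21,10,22,11,23.
assert set(pins.values()) == {9, 10, 11, 21, 22, 23}
```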
saarg Posted October 24, 2017 Share Posted October 24, 2017 It doesn't matter which order the cores are in the XML. If you also want the cores dedicated to the vm you need to set isolcpus in your syslinux.cfg to match the selected cores in the vm template. Quote Link to comment
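(For reference, the isolcpus change saarg mentions goes on the append line in syslinux.cfg. A sketch assuming the 9-11/21-23 selection from the post above; the surrounding label lines vary per install:)

```
label unRAID OS
  menu default
  kernel /bzimage
  append isolcpus=9-11,21-23 initrd=/bzroot
```

A reboot is needed before the isolation takes effect.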
bb12489 Posted October 27, 2017 Share Posted October 27, 2017 On 10/24/2017 at 3:23 AM, saarg said: It doesn't matter which order the cores are in the XML. If you also want the cores dedicated to the vm you need to set isolcpus in your syslinux.cfg to match the selected cores in the vm template. I did add the cores I wanted isolated in my syslinux.cfg. It's just that the XML looked odd to me. Quote Link to comment
allanp81 Posted November 7, 2017 Share Posted November 7, 2017 I've noticed something strange on my Windows 7 VM. I generally see perfect latency, green across the board, but if I copy large files from my Unraid server to the VM, I get massive latency. This happens whether I'm using bridged networking or even passing through a dedicated NIC. Quote Link to comment
dlandon Posted November 7, 2017 Author Share Posted November 7, 2017 29 minutes ago, allanp81 said: I've noticed something strange on my Windows 7 VM. I generally see perfect latency, green across the board, but if I copy large files from my Unraid server to the VM, I get massive latency. This happens whether I'm using bridged networking or even passing through a dedicated NIC. Sounds like a disk caching issue. Install the Tips and Tweaks plugin and adjust the disk caching. See if that doesn't help. Quote Link to comment
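(For anyone wondering what "adjust the disk caching" maps to under the hood: the knobs involved are the standard Linux dirty-page sysctls, which is what the plugin's disk-cache settings appear to adjust. The values below are purely illustrative, not a recommendation:)

```
# Shrink the write-back cache so large copies flush to disk sooner
# instead of stalling everything in one big burst (illustrative values)
sysctl vm.dirty_background_ratio=1
sysctl vm.dirty_ratio=2
```

Smaller ratios trade peak burst speed for steadier latency during big transfers.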
snazz Posted November 30, 2017 Share Posted November 30, 2017 Does the "assigning cores in pairs due to hyperthreading" recommendation apply to AMD Ryzen CPUs which don't have "Hyperthreading", per se? I have a Windows 10 VM assigned 2 "Hyperthreaded" / SMT cores, and about a dozen docker containers spread out and pinned (in pairs) to various other cores, and I'm seeing a high context switching value (12000 - 15000) as measured by the Glances docker. In general performance seems to be decent in the VM and the docker apps (Plex, Ombi, Sab, Sick, Couch, PlexPy...), but I'm new to AMD CPUs and I'm not sure if this is the optimal approach for my setup. Any ideas on how to minimize context switching, or should I not worry about it? Thx! Quote Link to comment
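(On measuring context switching: the number Glances reports comes from the cumulative ctxt counter in /proc/stat, so you can sample it yourself and see whether a pinning change actually moves it. A small sketch; the one-second interval is arbitrary:)

```python
import time

def ctxt_from_stat(text):
    """Extract the cumulative context-switch count from /proc/stat contents."""
    for line in text.splitlines():
        if line.startswith('ctxt '):
            return int(line.split()[1])
    raise ValueError('no ctxt line found')

# Two samples one second apart give context switches per second
with open('/proc/stat') as f:
    before = ctxt_from_stat(f.read())
time.sleep(1)
with open('/proc/stat') as f:
    after = ctxt_from_stat(f.read())
print(f'{after - before} context switches/sec')
```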
elbro_dark Posted December 19, 2017 Share Posted December 19, 2017 (edited) Hi, how can I get more performance out of my machine?

My system:
CPU: AMD FX-8350 @ 4GHz
Motherboard: ASRock 970 Extreme4
RAM: 32GB DDR3
Graphics cards: GT520 (host), GTX790 (VM1), GTX770 (VM2)

Both virtual machines are installed on their own SSD + 1 HDD (for games). Both have 8GB RAM and 4 "CPUs". Both virtual machines will be used for gaming.

1st Virtual Machine:

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>fdc2c928-d8dd-1564-6aa7-63f858a08548</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/fdc2c928-d8dd-1564-6aa7-63f858a08548_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk2.img'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/de_windows_10_pro_10240_x64_dvd.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.126-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:06:5d:3b'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046a'/>
        <product id='0x0023'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1e7d'/>
        <product id='0x2dc2'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>

2nd Virtual Machine:

<domain type='kvm'>
  <name>Windows 10-770</name>
  <uuid>e39c2f3c-1bba-0d69-1a0f-1dbbcba80125</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='6'/>
    <vcpupin vcpu='1' cpuset='7'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/e39c2f3c-1bba-0d69-1a0f-1dbbcba80125_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10-770/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disk4/Windows 10-770/vdisk2.img'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/de_windows_10_pro_10240_x64_dvd.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.126-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:eb:f3:da'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x14' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046a'/>
        <product id='0x010d'/>
      </source>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x18f8'/>
        <product id='0x0f97'/>
      </source>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>

Unraid shows me this as CPU thread pairings:
Pair 1: cpu 0 / cpu 1
Pair 2: cpu 2 / cpu 3
Pair 3: cpu 4 / cpu 5
Pair 4: cpu 6 / cpu 7

But I don't get which cores I should pin to which VM...? Edited December 19, 2017 by elbro_dark Quote Link to comment
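(One thing worth noticing in the two templates above: both cputune blocks pin host CPUs 4 and 5, so the two VMs will fight over those threads whenever they run together. A quick way to check any pair of templates for this kind of overlap, using trimmed copies of the vcpupin lines:)

```python
import re

def pinned_cpus(xml_text):
    """Collect the host cpuset numbers from every vcpupin line."""
    return {int(m) for m in re.findall(r"cpuset='(\d+)'", xml_text)}

# Trimmed vcpupin lines from the two templates above
vm1 = "<vcpupin vcpu='0' cpuset='2'/><vcpupin vcpu='1' cpuset='3'/>" \
      "<vcpupin vcpu='2' cpuset='4'/><vcpupin vcpu='3' cpuset='5'/>"
vm2 = "<vcpupin vcpu='0' cpuset='6'/><vcpupin vcpu='1' cpuset='7'/>" \
      "<vcpupin vcpu='2' cpuset='4'/><vcpupin vcpu='3' cpuset='5'/>"

overlap = pinned_cpus(vm1) & pinned_cpus(vm2)
print(overlap)  # {4, 5} -> these host threads are double-assigned
```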
steve1977 Posted December 31, 2017 Share Posted December 31, 2017 Great to see this sticky. I would love some advice on setting up the right CPU and RAM assignment.

My system:
CPU: Intel i7 7800X: 6C/12T @ 3.5GHz
Motherboard: Asus Prime X299-A
RAM: 32GB DDR3
Graphics cards: GTX1050 (host), GTX1050 (VM1)

I'd like to run:
1) Unraid with dockers (sabnzbd, radarr, sonarr), no transcoding/Plex
2) VM1: Win10 for gaming
3) VM2: MacOS for Photos
4) VM3: LE Kodi
5) Optional: VM4: Win10 for either gaming or general use (TBD)

Currently, I have it set up as follows:
Unraid: no assignment (0/6 not used, 12GB)
VM1: 4/10, 5/11, 8GB
VM2: 2/8, 3/9, 8GB
VM3: 1/7, 4GB
No VM4 setup yet

I am wondering whether VM1 would perform better if I assigned an additional core to it. Also curious whether VM3 could be set up more efficiently, and whether it would help to assign additional RAM to any of the VMs. If I were to set up VM4 as an additional gaming VM (or all-purpose VM), how would I need to change my config? Thanks in advance and happy new year! Quote Link to comment
steve1977 Posted February 18, 2018 Share Posted February 18, 2018 Any thoughts? Quote Link to comment
dlandon Posted February 18, 2018 Author Share Posted February 18, 2018 4 hours ago, steve1977 said: Any thoughts? There is no one perfect way to do this. If it works for you, then you are good to go. Quote Link to comment
steve1977 Posted February 18, 2018 Share Posted February 18, 2018 Thanks for your reply. What about leaving one core for Unraid and triple-assigning the remaining five cores in parallel to the three VMs? I will rarely use all three VMs in parallel at full load. Any experience in double- or triple-assigning the same cores to several VMs? Quote Link to comment
dlandon Posted February 18, 2018 Author Share Posted February 18, 2018 3 hours ago, steve1977 said: Thanks for your reply. What about leaving one core for Unraid and triple-assigning the remaining five cores in parallel to the three VMs? I will rarely use all three VMs in parallel at full load. Any experience in double- or triple-assigning the same cores to several VMs? Do you have any dockers? If you do, they will need some processing power. You can multiple assign CPUs to several VMs. I do that with two Windows desktop VMs. Quote Link to comment
steve1977 Posted February 18, 2018 Share Posted February 18, 2018 Thanks. Yes, running a few of them. I thought dockers are quite light on CPU as long as I don't do any transcoding (i.e., I am not using Plex or others). So, 1C/2T may be plentiful? Quote Link to comment
L0rdRaiden Posted March 31, 2018 Share Posted March 31, 2018 (edited) I have installed Win10 and Windows Server with the virtIO drivers. I have checked with HWiNFO64 and CPU-Z, and despite the power plan being set to Balanced, the CPU frequency is always at maximum. Is there a way to configure KVM so the CPU frequency scales with demand, as it does when you install the OS on bare metal? I have a Ryzen 2400G Edited March 31, 2018 by L0rdRaiden Quote Link to comment
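(One angle on the frequency question: the guest can't drive the host's P-states, so scaling is decided by the host's cpufreq governor, and the knob to look at is on the Unraid side. A sketch assuming the stock cpufreq sysfs interface; which governors are available depends on the driver in use:)

```
# Check the current governor on the host (run as root)
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

# Switch all cores to on-demand scaling (illustrative)
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo ondemand > "$g"
done
```

If the host governor is set to performance, CPU-Z in the guest will report the maximum clock regardless of the Windows power plan.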
smashingtool Posted July 20, 2018 Share Posted July 20, 2018 How should CPU pinning be handled on a ryzen chip? I assume the CCX makes it slightly more complicated. I guess the real question is, what is the layout of the CPU/Threads for 1-16? Quote Link to comment
DZMM Posted August 5, 2018 Share Posted August 5, 2018 On 6/2/2016 at 6:04 PM, dlandon said: Good information that confirms what I felt was the best approach. I have suggested that LT change the VM manager to emulatorpin every VM to the first cpu pair and not allow assigning the first pair to any VM. @dlandon I know this is a bit old, but I'm trying to troubleshoot some VM freezes I get. I have 3x W10 VMs that are always on (1 used all day (me) and x2 used by my kids a couple of hours a day concurrently) and a pfsense VM. I have 14 cores and I used to pin x2 VMs to 0,14 and x2 VMs to 1,15 - my thinking was to share the load. After reading your post, I've been thinking: - should the emulator pin for the W10 VMs at least be on the same cores i.e. the windows emulator is same 'process', so using different pinning will cause resource problems/conflicts/issues etc? - should all VMs be pinned to the first core regardless of how many are running concurrently? Thanks Quote Link to comment
Magicmissle Posted August 12, 2018 Share Posted August 12, 2018 (edited) Anyone done this with E5-2696V4 Cpus? I have 2 cpus on a ASUS Z10PE-D16. I want to pin core 0 and 1 from cpu one, and core 0 (44?) and 1 (45?) from cpu two. Ideally this should give 4 cores to a VM running at 3.7Ghz on turbo? Edited August 12, 2018 by Magicmissle making more sense Quote Link to comment
saarg Posted August 13, 2018 Share Posted August 13, 2018 Unraid likes the first core for itself, so I would suggest you do not pin it to a VM. Otherwise it shouldn't matter which cores you use for the VM. The core pairs are listed in your screenshot: the first number is the core and the one after the slash is its hyperthreaded sibling. So 0 and 44 is the first pair. Quote Link to comment
DZMM Posted August 13, 2018 Share Posted August 13, 2018 On 8/5/2018 at 3:49 PM, DZMM said: @dlandon I know this is a bit old, but I'm trying to troubleshoot some VM freezes I get. I have 3x W10 VMs that are always on (1 used all day (me) and x2 used by my kids a couple of hours a day concurrently) and a pfsense VM. I have 14 cores and I used to pin x2 VMs to 0,14 and x2 VMs to 1,15 - my thinking was to share the load. After reading your post, I've been thinking: - should the emulator pin for the W10 VMs at least be on the same cores i.e. the windows emulator is same 'process', so using different pinning will cause resource problems/conflicts/issues etc? - should all VMs be pinned to the first core regardless of how many are running concurrently? Thanks Anyone got a view on this? Quote Link to comment
1812 Posted August 13, 2018 Share Posted August 13, 2018 25 minutes ago, DZMM said: Anyone got a view on this? stacking vm's on cpu threads can introduce latency and audio issues. It's fine to share an emulator pin/thread/core among a few vm's as long as it doesn't get overloaded and max out managing them. if you want the "emulator pin to be on the same cores" then don't specify it and it will be there automatically. linux systems have a preference for core 0, so best to avoid. I tend to run an emulator pin on the hyper threaded pair without issue. Quote Link to comment
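(For concreteness, the emulator pin 1812 describes is a single extra line inside the cputune block of the VM's XML. A sketch assuming a 0/14 first pair; the vcpupin cpusets here are illustrative:)

```
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='16'/>
  <emulatorpin cpuset='14'/>
</cputune>
```

With no emulatorpin line, libvirt lets the emulator threads float across the same cpuset as the vCPUs.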
DZMM Posted August 13, 2018 Share Posted August 13, 2018 (edited) 19 minutes ago, 1812 said: stacking vm's on cpu threads can introduce latency and audio issues. It's fine to share an emulator pin/thread/core among a few vm's as long as it doesn't get overloaded and max out managing them. if you want the "emulator pin to be on the same cores" then don't specify it and it will be there automatically. Thanks. My VMs are all on different cores, my query was just how many VMs on a emulator pin pair is 'too much'. I think I'm going to go back to spreading the cores my 4 VMs are pinned to, with 2 VMs to each pair 19 minutes ago, 1812 said: linux systems have a preference for core 0, so best to avoid. I tend to run an emulator pin on the hyper threaded pair without issue. Ok, going to avoid core 0. I think pinning to 0 might explain stutters I've been getting when unRAID/dockers are busy. Do you mean you run the emulator pin just on a hyper threaded core? Edited August 13, 2018 by DZMM Quote Link to comment
1812 Posted August 13, 2018 Share Posted August 13, 2018 3 hours ago, DZMM said: Do you mean you run the emulator pin just on a hyper threaded core? yeah so, if your first pair are 0,14, I would end up probably using 14 (also because most of my other threads are otherwise occupied.) But everyone is going to have a different way they prefer. 3 hours ago, DZMM said: My VMs are all on different cores, my query was just how many VMs on a emulator pin pair is 'too much'. it depends on many factors: is the thread isolated from unraid? how active and what type of vm is it? not a straightforward question. in the end you'll have to probably do trial and error unless you just want to use 1 thread per isolated pin which will be the "best" to limit IOwait and some stuttering, but also waste 1 thread in the process for anything else. it's all a trade off. Quote Link to comment
DZMM Posted August 13, 2018 Share Posted August 13, 2018 9 minutes ago, 1812 said: it's all a trade off. Thanks. I'm going to try running two VMs on 14 and two on 15 to see what happens Vs all on 1,15 - that's if I can tell the difference. I'm not keen on isolating cores - I don't have anything that's mission critical, or so important that it warrants taking potential resources away from other tasks. Quote Link to comment
1812 Posted August 13, 2018 Share Posted August 13, 2018 1 hour ago, DZMM said: Thanks. I'm going to try running two VMs on 14 and two on 15 to see what happens Vs all on 1,15 - that's if I can tell the difference. I'm not keen on isolating cores - I don't have anything that's mission critical, or so important that it warrants taking potential resources away from other tasks. but by not isolating cores, you may cause stuttering or other issues in your vm's. just something to keep an eye on. Quote Link to comment
DZMM Posted August 13, 2018 Share Posted August 13, 2018 1 hour ago, 1812 said: but by not isolating cores, you may cause stuttering or other issues in your vm's. just something to keep an eye on. I know, but I love that my machine is a multi-tasking beast and I don't want resources unavailable when they could be available when they are not being used. My VMs do ok - occasionally I'll get stutters like today when I was doing a lot of stuff and the kids were on as well and the VMs would hang now and then for a second, but that's it. I've isolated the cores of my dockers, so only vital dockers (plex, tvh, home assistant etc) have access to all of them. It's bearable, but I'm curious to see if the stutters reduce by tweaking my pinning. Quote Link to comment