**VIDEO GUIDE** How to Install macOS Mojave or High Sierra as a VM


SpaceInvaderOne


I use two GT 710s and a GT 730 (710 variant) with almost no issues: https://lime-technology.com/forum/index.php?topic=54786.msg523314#msg523314

 

I use hostdev passthrough on them all: no Nvidia drivers, no boot args.

 

But that is not your problem. (Sharing is caring, though.)

 

What I use (from a running VM, so it has a few extra lines auto-populated):

 

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </hostdev>

 

Try removing "xvga=yes" from your XML. I don't see it in mine, for this card or my GTX 760.

 

Look in the boot log for the GT 710: search by device ID and by its slot assignment, and see if it shows any errors there, like a BIOS bug or similar.
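For example, from the unRaid console (the slot shown is just an illustration; substitute your card's actual address from lspci, 10de is Nvidia's vendor ID):

lspci -nn | grep -i vga          # note the card's slot (e.g. 07:00.0) and its [vendor:device] ID
grep -i '07:00' /var/log/syslog  # search the boot log by slot
grep -i '10de:' /var/log/syslog  # or by vendor ID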

 

Also, try it in a different slot; sometimes that helps.

 

Thanks for the pointers.

 

I made a bunch of changes: removed xvga from the XML, upgraded to Sierra, changed the SMBIOS setting in Clover to match an older Mac, etc.

 

And IT WORKS!!! Finally. It recognized the card, and it worked without any boot arguments or web drivers. Phew, I spent so many hours on this.

 

Now I need to get audio working ;-)

 

EDIT: Got HDMI audio working thanks to another one of your posts: http://lime-technology.com/forum/index.php?topic=51915.msg524900;topicseen#msg524900 :-) Thanks so much!

 

Hi aptalca,

 

Try using this HDMI audio kext:

 

https://www.dropbox.com/s/1f39m1bew9uhyio/HDMIAudio-1.1.dmg?dl=0

 

Mount the DMG, open Terminal, then cd into the mounted image:

cd /Volumes/HDMIAudio

Then run the script to install:

./install.sh

 

I found this worked for a few cards that I have tried before.
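For reference, the whole sequence from Terminal in one go (the download path is an assumption; hdiutil simply replaces the double-click mount):

hdiutil attach ~/Downloads/HDMIAudio-1.1.dmg   # mount the dmg from wherever you saved it
cd /Volumes/HDMIAudio                          # change into the mounted image
./install.sh                                   # run the installer script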

I can confirm that the script worked for me as well, on a Zotac GT 710 1GB.

 

Thanks so much


I'm still having problems with my GPU; maybe someone knows something about it.

My OS X VM boots correctly when only one display is connected (at the moment I'm using DisplayPort).

But when I have more than one display connected, I don't get any output and can't do anything; I have to stop the VM through the web page and start it again with one display connected.

It also boots correctly with no display connected, and if I plug in a monitor after a minute or so I get a picture.

I also noticed that OS X registers only two CPUs, even though I set 4 CPUs in the XML. Is this normal?
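(A likely explanation, based on the VNC template XML posted later in this thread: the <topology> element. With sockets='1' cores='2' threads='2', OS X reports a dual-core CPU with four threads. A sketch of what to change if you want it to show four cores instead, assuming 4 vcpus:)

  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>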

 


Hi guys,

 

Thank you all so much for the excellent tutorials, which have helped me set up an almost perfect OS X Sierra VM on my unRaid box! I have one issue which is hopefully an easy fix, but as a Mac OS beginner I simply can't find the answer. I have the super-fast Mac issue where the clock runs at about 10x the usual speed. I have tried removing the qemu flag from the Clover .plist file, but it doesn't seem to make any difference. Any advice or direction would be most appreciated. This is all after running through gridrunner's excellent video tutorial, except that I already had a Sierra-based VMDK VM available; I basically ran through the second half of the video, including the conversion to raw etc. Thanks, guys.



 

Hi sarf,

 

I had the same problem as you and asked gridrunner about it in his thread for the new Sierra installation method  (https://lime-technology.com/forum/index.php?topic=55615.0)

I solved it by using the older Clover version from a previous video (he links it in the thread).

I kept the qemu flag and simply replaced the files in the EFI partition.


Hi gridrunner,

 

I messaged you on YouTube and you asked me to paste my VM XML and devices. I am not sure how to pass through a GPU; the XML I am using is your VNC template one, and I am not sure which lines to get rid of and which to add. I tried using the alternate XML from your zip and got the GPU working, but I had a lot of tearing in videos and there is no audio over HDMI. Pasting my current XML below:

 

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>OSX Sierra</name>
  <uuid>a43a6297-9dfe-7c01-8706-cc24e23d4691</uuid>
  <description>Mac OSX Sierra</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/a43a6297-9dfe-7c01-8706-cc24e23d4691_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/VMSSD/domains/OSXSierra/osxsierra.img'/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:51:66:48'/>
      <source bridge='br0'/>
      <model type='e1000-82545em'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='none'/>
  </devices>
  <seclabel type='none' model='none'/>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-kbd'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-mouse'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-applesmc,osk=amanchoosesaslaveobeys'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='Penryn,vendor=GenuineIntel'/>
  </qemu:commandline>
</domain>

 

Devices:

IOMMU group 34
08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Oland [Radeon HD 8570 / R7 240/340 OEM] [1002:6611]
08:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
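
If you want to try passing that card through, here is a sketch of the two hostdev entries to add inside <devices>, following the same pattern as the GT 710 example earlier in this thread (libvirt will assign the guest-side addresses itself; you would also drop the <graphics> and <video> VNC elements):

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
      </source>
    </hostdev>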

 


Hey guys, trying to get a Mac VM installed on my unRaid machine and having some issues. I've tried to install via the video method, which has gotten me as far as the Mac loading screen. Partway through, it goes to a circle with a cross through it: http://prntscr.com/dzku30 I'm not trying to forward anything to the VM, just a basic VNC-based OS X.

 

I left my XML below.

<domain type='kvm' id='66' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>osx</name>
  <uuid>f6280639-c278-06d5-dc34-139749b38493</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='3'/>
    <vcpupin vcpu='1' cpuset='4'/>
    <vcpupin vcpu='2' cpuset='5'/>
    <vcpupin vcpu='3' cpuset='9'/>
    <vcpupin vcpu='4' cpuset='10'/>
    <vcpupin vcpu='5' cpuset='11'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/f6280639-c278-06d5-dc34-139749b38493_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='3' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Mac OSX Sierra/sierra.img'/>
      <backingStore/>
      <target dev='hdc' bus='sata'/>
      <boot order='1'/>
      <alias name='sata0-0-2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:51:66:48'/>
      <source bridge='virbr0'/>
      <target dev='vnet1'/>
      <model type='e1000-82545em'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='none'>
      <alias name='balloon0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-kbd'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='usb-mouse'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-applesmc,osk=OSKKEYHERE'/>
    <qemu:arg value='-smbios'/>
    <qemu:arg value='type=2'/>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='Penryn,vendor=GenuineIntel'/>
  </qemu:commandline>
</domain>

 

Update: I think I may have found my issue. The first problem was using a VirtIO disk instead of SATA; the second was the ethernet bridge.

Update 2: Well, I finally got booted and into the VM. I went to TeamViewer's website, and both the VM and emhttp locked up until the VNC session was cleared.

Update 3: And we're in! I believe the lockup was related to the low amount of video memory, and the VM does seem a bit slow still. Any insight on these two things is appreciated.
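
(On the video memory point: one thing to try is bumping the QXL sizes in the <video> element of the XML above; the doubled values below are illustrative, not tested here:)

    <video>
      <model type='qxl' ram='131072' vram='131072' vgamem='32768' heads='1' primary='yes'/>
    </video>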



 

Good that you got it running.

 

Want it faster? Where is the VM image stored? I see it's on a cache drive, but is that an SSD or a spinning disk? A spinning disk is much more sluggish. Even with an SSD cache, some believe it is better for disk I/O to have the VM on an SSD mounted via Unassigned Devices. That's how I do it, though for other reasons.

 

Also, CPU pinning... What I've found is that the common advice of using HT cores with OS X VMs doesn't give the best CPU performance. For example, I ran the following Cinebench test this morning on 12 of my 24 cores (across 2 processors).

 

Non paired threads spanning 2 processors

    <vcpupin vcpu='0' cpuset='12'/>
    <vcpupin vcpu='1' cpuset='13'/>
    <vcpupin vcpu='2' cpuset='14'/>
    <vcpupin vcpu='3' cpuset='15'/>
    <vcpupin vcpu='4' cpuset='16'/>
    <vcpupin vcpu='5' cpuset='17'/>
    <vcpupin vcpu='6' cpuset='18'/>
    <vcpupin vcpu='7' cpuset='19'/>
    <vcpupin vcpu='8' cpuset='20'/>
    <vcpupin vcpu='9' cpuset='21'/>
    <vcpupin vcpu='10' cpuset='22'/>
    <vcpupin vcpu='11' cpuset='23'/>

 

Cinebench scores

1081

1104

1121

 

 

Paired HT threads on a single processor

    <vcpupin vcpu='0' cpuset='6'/>
    <vcpupin vcpu='1' cpuset='7'/>
    <vcpupin vcpu='2' cpuset='8'/>
    <vcpupin vcpu='3' cpuset='9'/>
    <vcpupin vcpu='4' cpuset='10'/>
    <vcpupin vcpu='5' cpuset='11'/>
    <vcpupin vcpu='6' cpuset='18'/>
    <vcpupin vcpu='7' cpuset='19'/>
    <vcpupin vcpu='8' cpuset='20'/>
    <vcpupin vcpu='9' cpuset='21'/>
    <vcpupin vcpu='10' cpuset='22'/>
    <vcpupin vcpu='11' cpuset='23'/>

Cinebench scores

759

757

749

 

 

Topology: in previous tests I've done with OS X, I didn't see any improvement or degradation from specifying the topology to the VM.

 

 

Quite a bit of difference. The unRaid dashboard shows CPU usage at 100% on those cores during both tests, but the benchmark score is about 45% higher when not using HT cores for VM assignments. I'm not saying my way is right for your setup, but you could at least experiment and run your own tests.

 

ALSO, isolate your VM cores from unRaid if you haven't already, and then set the emulator pin to one of the cores you left for unRaid. This makes sure that only the VM uses the cores assigned to it.
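
(A minimal sketch of what that looks like in the XML, assuming core 0 was left to unRaid; the vcpupin lines are abbreviated from the example above:)

  <cputune>
    <vcpupin vcpu='0' cpuset='12'/>
    <vcpupin vcpu='1' cpuset='13'/>
    <emulatorpin cpuset='0'/>
  </cputune>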

 

Splashtop/TeamViewer may still use the CPU to render even if you have a video card; I don't remember, someone else will have to chime in on that. But OS X screen sharing will use the GPU, sadly with no sound... :(

 

 

There may be some other gains from adjusting Clover settings. I am about to start a new topic on that, to compare notes with others and not clutter this one up so much.



 

Thanks for the tips; I'm definitely going to do some playing around with it later tonight. I use my VMs for building and developing C++ applications, so most of my build environments are on a spinning 2TB array disk, with 2x250GB and 2x256GB cache drives in BTRFS RAID. My main OS (Win10), which I'm typing from now, has GPU passthrough and is stored directly on the RAIDed cache pool. But I've never isolated any cores for the VMs or used an unassigned drive, as my belief was that the VM would perform better with paired threads and a cache drive, or on the RAIDed cache pool itself. The only thing I have done is leave cores 0/6 free for unRaid.

 

1. Is it possible my other VMs could benefit from core isolation as well?

 

Also, you said that most recommend running the VM on an isolated drive which is in an unmounted state.

 

2. Wouldn't the VM benefit more from having an SSD array drive with RAIDed SSD cache, IF that SSD array drive was only being used for VMs? Or from what I'm currently doing on my main machine, storing it on the RAIDed cache pool?


I just realized, after running for a solid week, that the clock drifts and the time gets badly out of sync! After a day of running, it can be more than an hour off. I've tried the different clock-tag settings in the XML that folks are using, but they don't seem to make any difference.

 

Searching the web, I see some folks refer to this issue, but I can't find any fix.

 

Any thoughts or ideas? Thanks, Gus
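
(Not a fix for the drift itself, but as a stopgap you can force an NTP resync from inside the guest; Sierra ships an sntp client, assuming a stock install:)

sudo sntp -sS time.apple.com   # step/slew the guest clock against Apple's NTP server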



 

Maybe my machines are just weird with OS X, I don't know. But what I do know is that the scores are pretty solid. I use mine as a transcoding cluster for edited/produced video, and I can confirm that a VM using non-HT-paired cores transcodes faster than one with HT cores.

 

I haven't done the test with Win10 (I have a VM but don't use it often), but my guess is that Windows, being optimized for virtual environments, will perform better with HT pairs, as common usage dictates.

 

1. All VMs can benefit from not having to share cores with unRaid services and dockers. If you've used isolcpus in your syslinux.cfg and isolated everything but 0,6, then 0 and 6 are the only cores unRaid gets; the rest are clear for whatever VMs you run. See the sketch below.
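
(For reference, a sketch of the relevant syslinux.cfg entry; the core list assumes a 12-thread box keeping 0 and 6 for unRaid, so match it to your own numbering:)

label unRAID OS
  kernel /bzimage
  append isolcpus=1-5,7-11 initrd=/bzroot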

 

Drives aren't unmounted; they are mounted using the Unassigned Devices plugin. Essentially, if you put the disk image there, it doesn't have to contend with any other activity/file transfers/etc. that the cache drive may end up doing.

 

2. An SSD cache will be faster than a spinning cache, which is way faster than the spinning array. If the SSD were in the array, it would still be limited by the speed of the parity drive's reads/writes. With a spinning parity drive, that's 30-50ish MB/s writes, and probably less than normal SSD reads. Using turbo write you can hit parity-protected write speeds of 80-125 MB/s. I don't know of anyone who has a wholly-SSD array (someone around here does, I bet), but they'd be the ones to ask for specific numbers on SSD array performance.

 

Many people just leave the VMs on the cache drive, and it works fine. You also get the benefit of having a redundant copy if one of the paired drives fails (depending on the RAID configuration of the cache pool); otherwise, you have to back it up yourself or use one of the backup programs/scripts (which I don't use; I just move them manually). But since I have a few heavy Plex/docker app users in the house, it's better for me to move mine off the cache drive and onto an unassigned disk where it gets the full bandwidth available. (If this is rambling, sorry, I'm on quite a bit of medication for seasonal allergies.)

 


 

Alright, perfect! Since I'm not using very intensive docker apps, and file transfer to and from the server is rare, my drive setup SHOULD be okay in theory. It could probably be a bit better, but I'm sure I'll learn more as time goes on. I've just installed the Unassigned Devices plugin to check it out a bit more. I've also just learned how to isolate the CPUs for VMs from unRaid ;) so that should help. Going to restart the server shortly and see how it goes.


 


 

I find that problems with cores in VMs often come from not isolating the cores from unRaid. I prefer to leave them all available, but as 1812 says, if they're isolated from the host system then unRaid isn't going to use them, so this can be an advantage sometimes.

But I feel only people who have a lot of cores can afford to isolate cores from unRaid, since they have plenty. For someone who only has 4 cores, do you really want only one available for unRaid?

I personally think it is best to leave all cores un-isolated. Remember, if you are worried about dockers using your VM cores, then pin your dockers to non-VM cores (see the sketch just below).
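
(One way to do that from the command line; the container name is just an example, and on unRaid you can set the same thing from the Docker tab:)

docker update --cpuset-cpus="0,6" plex   # restrict an existing container to cores 0 and 6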

Pin your VMs to cores avoiding the first core in your system, as unRaid prefers lower-numbered cores.

Set emulatorpin to a core your VM isn't using; if you have enough cores, pin it to a core nothing else is using. I would rather use 4 cores for a VM with 3 pinned to the VM and one for emulatorpin, than give all 4 cores to the VM and put emulatorpin on a core shared with unRaid, dockers, etc.

Also, never split cores: always pin cores whole, i.e. in hyperthreaded pairs if your CPU has hyperthreading. Splitting cores will just give bad performance.

 

I find running my VMs from the SSD cache is best for me. When I am running a VM, there is rarely any write activity on the cache drive from cached shares; maybe the girlfriend is streaming some show on Emby, but that will come from the array, not the cache.


@gridrunner

 

FYI: Clover 3974 (regular and patched version) fails to install on 10.12.3.

 

Anyone else having this issue?

 

What error are you getting when installing it? As an alternative to the installer, you can just open the EFI partition with EFI Mounter and manually put the Clover files into it.

My original video on installing Sierra has those files in the description (but not 3974); I can't link you 3974 to paste in manually, as I'm not at home :(
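
(If you'd rather mount the EFI partition from Terminal instead of EFI Mounter, something like this works; disk0s1 is the usual location, but check diskutil's output first:)

diskutil list                  # find the EFI partition, typically disk0s1
sudo diskutil mount disk0s1    # mounts it at /Volumes/EFI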


 

 

Also, never split cores: always pin cores whole, i.e. in hyperthreaded pairs if your CPU has hyperthreading. Splitting cores will just give bad performance.

 

 

The Cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores vs. non-paired vcpus in OS X? Maybe that will shed some light on whether my machines are just wonky, or whether OS X handles paired CPU cores differently from Windows.


 

 


 

Yes, that is to be expected if the situation is like this:

My CPU pairing is like this:

cpu 0 <===> cpu 14
cpu 1 <===> cpu 15
cpu 2 <===> cpu 16
cpu 3 <===> cpu 17
cpu 4 <===> cpu 18
cpu 5 <===> cpu 19
cpu 6 <===> cpu 20
cpu 7 <===> cpu 21
cpu 8 <===> cpu 22
cpu 9 <===> cpu 23
cpu 10 <===> cpu 24
cpu 11 <===> cpu 25
cpu 12 <===> cpu 26
cpu 13 <===> cpu 27

 

So if I pinned 8 vcpus to 0,1,2,3,4,5,6,7, they would be spread over 8 separate physical cores and would get good performance, as those cores would not be doing anything else.

But if I then set up another VM with 8 vcpus on 14,15,16,17,18,19,20,21, it would be sharing the same 8 physical cores, and when both machines run at once, performance would be bad.

 

So across those 8 cores, the 2 VMs should be:

vm1: 0,1,2,3,14,15,16,17
vm2: 4,5,6,7,18,19,20,21

That way there is no overlap. Hope that makes sense ;)
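
(In XML terms, that no-overlap layout would look something like this; the vcpu-to-cpuset ordering is illustrative:)

vm1:
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='14'/>
    <vcpupin vcpu='5' cpuset='15'/>
    <vcpupin vcpu='6' cpuset='16'/>
    <vcpupin vcpu='7' cpuset='17'/>

vm2:
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <vcpupin vcpu='4' cpuset='18'/>
    <vcpupin vcpu='5' cpuset='19'/>
    <vcpupin vcpu='6' cpuset='20'/>
    <vcpupin vcpu='7' cpuset='21'/>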

 

 


 

 


 

Yes and no.

 

If you have a bunch of cores, you can run them non-HT and get good performance. If you don't, you get reduced performance.

 

Just for kicks, I'm going to set up a couple of 4-core VMs tonight on each other's HT pairs and run some simultaneous tests. I do expect degraded benchmarks, but I'm curious whether it's better to give 1 VM its own HT pairs, or to let it share physical cores with another VM that may not be busy at the same moment, which would lessen the performance hit vs. using HT pairs. (Sorry if that's confusing, still on a bunch of medicine... results in a few hours though.)


 

 


 

I think the worst problem with sharing CPU hyperthreads is the latency it can cause in the VM, with a lot of choppy sound etc. (I wish I had some cold medication, as I feel really bad; that's why I'm still up. This cold caught me by surprise. The girlfriend says it's because I popped outside without a coat, but I refuse to believe that!)


And the results are in... :-X

 

Details:

 

HP DL380 G6 dual processor server

Xeon x5670, 6 core HT processor, 12 logical cores per processor

2nd Processor used in testing isolated from unRaid/dockers/etc

vm’s on separate unassigned SSD drives

Cinebench R15 (higher scores are better)

no emulator pin set

8gb ram on each vm

clean, non-updated OS X Sierra (matching disk images)

no GPU, vm’s accessed via apple screen share

unless stated otherwise, cpu cores were run at 100% for benchmark, run 3x back to back

no topology defined (I don't think os x cares, based on some previous tests I ran which showed almost no difference)

 

Thread pairings

Proc 1

cpu 0 <===> cpu 12

cpu 1 <===> cpu 13

cpu 2 <===> cpu 14

cpu 3 <===> cpu 15

cpu 4 <===> cpu 16

cpu 5 <===> cpu 17

Proc 2

cpu 6 <===> cpu 18

cpu 7 <===> cpu 19

cpu 8 <===> cpu 20

cpu 9 <===> cpu 21

cpu 10 <===> cpu 22

cpu 11 <===> cpu 23

 

 

-----------------------------

section 1

all testing in this section on the following cores: vm1 6-11, vm2 18-23

 

vm1

6 cores non ht paired vcpus, vm 2 off (vm1 baseline score for non ht pairs)

521

527

526

 

 

vm1 while vm2 idle on ht cores

517

522

520

 

 

vm1 100% with vm2  20-50% use on ht cores

464

441

463

 

 

vm 2 while vm 1 idle on ht cores

497

514

515

 

 

 

vm1 & vm2 max 100% usage

(simultaneous benchmark from here on down)

 

vm1 314 vm2 309

vm1 315 vm2 311

vm1 317 vm2 315

 

-------------------------

section 2

all core configurations are listed with each test

 

 

vm1 on 6-8, 18-20  (vm1 baseline score for HT paired cores)

vm2 on  9-11, 21-23

HT core Pairs for each vm

 

vm1 347 vm2 346

vm1 345 vm2 345

vm1 349 vm2 346

 

 

 

vm 1 on 6, 18, 7, 19, 8, 20

vm 2 on 9, 21, 10, 22, 11, 23

mixed vcpu ordering of ht paired cores

 

vm1 346 vm2 345

vm1 346 vm2 344

vm1 348 vm2 346

 

 

 

vm 1 & 2 sharing 2 cores (8,20) utilizing only 10 total cores, but 6 on each vm

vm1 on 6-8, 18-20

vm2 on 9,10,8, 21, 22, 20 (funny pairing to keep vcpu0 off the shared cores)

 

vm1 286 vm2 286

vm1 286 vm2 283

vm1 288 vm2 285

 

 

 

and just for fun

vm1 & vm2 on same cores (6-11) 100% utilization

 

vm 1 254 vm2 255

vm1 254 vm 2 257

vm1 250 vm 2 255

 

--------------------------

 

 

What does it mean?

 

Using HT-paired cores scores a max of 349, vs. a non-HT max of 522 with vm2 idle, meaning non-HT cores have roughly 50% more power in this setting. Even with 20-50% usage on vm2, vm1 only drops to 441, which is still about 27% more power than HT-paired cores (which actually makes sense). And even if vm2 is off, the max score of vm1 does not change when using HT pairs, so there is no possible performance gain regardless of what any other VM is doing or not doing, unlike with non-HT-paired assignments.

 

Now, if both VMs are using 100% of their CPU resources, HT pairs are the clear winner by about 10%, compared to non-HT core assignments, which at best hit 317 (vs. 349).

 

 

No video/audio testing was done here because I only have 1 video card in this particular server. For many people, none of this may apply. But for someone like me who needs CPU power, and not graphics/audio on all VMs, this has some merit and interest. I may drop in a card from one of my other servers and test for stability in the future.

 

 

This actually explains quite a bit for me about why I thought my machines were wonky; it's more how I'm using them than them operating differently.

 


New test for audio/video stability and core pinning in OS X (for those who care).

 

For this one, I used one of my other servers with dual e5520 processors. Thread pairings as follows:

 

Proc 1

cpu 0 <===> cpu 8

cpu 1 <===> cpu 9

cpu 2 <===> cpu 10

cpu 3 <===> cpu 11

 

Proc2

cpu 4 <===> cpu 12

cpu 5 <===> cpu 13

cpu 6 <===> cpu 14

cpu 7 <===> cpu 15

 

Proc 2 isolated from unRaid.

 

vm1 with a gt730 gpu, assigned cores 4-7

vm2 no gpu, apple screen share, assigned cores 12-15

emulator pin 0-3 for both vm's

vm disk images located on same ssd

 

This CPU assignment was selected because it is commonly said to cause audio/video issues when it places two VMs onto the same physical cores (HT pairs).

 

Result

When vm2 was running Cinebench with its cores at 100%, vm1 had zero audio/video issues streaming YouTube, Netflix, or Blu-ray movies in VLC from a network server. No matter what I did, I could detect no slowdown and no lag.

 

 

 

 

 

Just for fun, I put both VMs on the same threads/cores:

 

vm1 with a gt730 gpu, assigned cores 4-7

vm2 no gpu, apple screen share, assigned cores 4-7

 

 

YouTube videos and local Blu-rays streamed with few video quality issues, but audio was at times dirty and/or non-functional (to be expected), even when vm2 was only idling.

 

Interestingly, when Cinebench pushed the shared cores to 100% during Blu-ray playback, the video quality did not suffer at all; but as before, the audio was less than desirable when it actually worked.

 

 

 

So in my case, it is still fine (if not better, performance-wise) to put OS X VMs on non-HT-paired cores, since it does not cause audio/video issues, while retaining the benefit of higher performance when the other VM isn't 100% active.

 

Why is it like this?

 

I don't know whether this is OS X-specific in not causing the audio/video issues said to occur in Windows 10 with this core assignment, whether this particular GPU is less susceptible to audio noise under HT pinning, whether it is because I am using older enterprise equipment, or because only a single VM has a GPU (though people have reported GPU-less dockers causing problems on the HT cores of VMs).

 

OK, no more tests for a while...

