bluepr0


Posts posted by bluepr0

  1. 1,7 (core 2): use the second core to pin the emulatorpin cpuset

     ...leaving 0,6,1,7 for unRAID. I would use 1 core for the emulatorpin that's not being used by any other process.

     

     Hi GridRunner. For the benefit of myself and others not familiar with CPU pinning, can you explain this part?

     

     I thought you could basically just put unRAID on cores 0,6 (physical core 1) and then dedicate any remaining cores to virtual machines. What is the function of having a dedicated core and hyperthread for emulatorpin?

     You might want to read this excellent post, which explains it very well: http://lime-technology.com/forum/index.php?topic=49051.0
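
     For reference, here is a minimal sketch of what that pinning could look like in the libvirt XML, assuming a 6-core/12-thread CPU where core N is paired with hyperthread N+6 (unRAID keeps 0,6, the emulator gets 1,7, and the VM gets cores 2-5 with hyperthreads 8-11). The exact core numbers are only an illustration:

     <vcpu placement='static'>8</vcpu>
     <cputune>
       <!-- pin each guest vCPU to a host core/hyperthread reserved for the VM -->
       <vcpupin vcpu='0' cpuset='2'/>
       <vcpupin vcpu='1' cpuset='8'/>
       <vcpupin vcpu='2' cpuset='3'/>
       <vcpupin vcpu='3' cpuset='9'/>
       <vcpupin vcpu='4' cpuset='4'/>
       <vcpupin vcpu='5' cpuset='10'/>
       <vcpupin vcpu='6' cpuset='5'/>
       <vcpupin vcpu='7' cpuset='11'/>
       <!-- run the QEMU emulator threads on the second physical core (host threads 1,7),
            keeping them off both unRAID's core (0,6) and the VM's cores -->
       <emulatorpin cpuset='1,7'/>
     </cputune>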

  2. Sorry to interrupt you guys, but I've got a question in case you've tested it.

     

     @dlandon: how much CPU does the emulator (the emulatorpin cores) actually use? Would it be fine if I shared the same cores for the Plex transcoder and for the emulatorpin of my VMs?

     

     Yes. But only use emulatorpin on latency-sensitive VMs. Normally the CPUs assigned to the VM are also used for emulator tasks. It's only necessary when a VM suffers from latency issues (pausing, stuttering, etc.) while serving media or gaming.

     

     Don't get carried away with assigning CPUs. You don't normally need to do any special CPU assignment or isolation. I also notice a lot of people assigning more than 4 CPUs to VMs. I see no reason that any VM needs that kind of horsepower, and I've read that assigning more than 4 can also cause problems.

     

     If you are seeing latency issues, don't assume that assigning more CPUs will solve the problem. The issue is more about how the CPUs are assigned.

     

    Thanks a lot for the reply!
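
     To make the advice above concrete, here is a minimal sketch of a modest assignment for a typical VM: four vCPUs pinned to two cores and their hyperthreads, with no emulatorpin and no isolation unless latency actually shows up. The core numbers (2,8 and 3,9) are only an assumed example:

     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='2'/>
       <vcpupin vcpu='1' cpuset='8'/>
       <vcpupin vcpu='2' cpuset='3'/>
       <vcpupin vcpu='3' cpuset='9'/>
       <!-- only add something like <emulatorpin cpuset='1,7'/> if this VM actually
            pauses or stutters; those cores could then also be shared with the
            Plex transcoder, per the reply above -->
     </cputune>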

  3. Dude! Your pile of running hard drives is making my brain itch. Is the circuit board of the top 4TB red REALLY sitting directly on top of the drive below it? If you are gonna stack running hard drives, at least put some corrugated cardboard between them and give them some supplemental airflow.

     

    I am digging the motherboard box as a way to let the card slot hang over so the video card will stay seated properly. I've done that exact trick a bunch of times.

     

     Haha, no no... the circuit board is on the bottom of the disk, so it's not touching it.

  4. Hello!

     

     Well, I've got some news, mostly good. I was able to install El Capitan without problems by following the guide, and I also received the Radeon HD 5770 today; it works out of the box on OS X. To get HDMI audio I had to install the HDMIAudio.dmg that pete provided.

     

     Hooooowever, there are some things that are not working. For example, iMessage doesn't work, and it seems "some parts" of iCloud sync don't either. I've found solutions for this, but all of them use the Clover boot loader. I tried to follow the guide on page 4 of this thread to install it, but I can't make it work:

     

     Here are the options I chose:

     [screenshot attachment]

     

     Here's the first warning, saying that it's not made for this OS even though I downloaded the latest version:

     [screenshot attachment]

     

     And here's the install error:

     [screenshot attachment]

     

     Still, I continued following the guide in case the error was not a problem. Here you can see what my Clover EFI folder looks like. As the guide says, I downloaded Clover Configurator and followed the steps to save a new config.plist:

     [screenshot attachment]

     

     Then I removed the Chameleon boot loader from the XML (removed only this line):

    <kernel>/mnt/cache/vm_images/enoch_rev2795_boot</kernel>

     

     So my XML looked like this:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>OSX-El-Capitan-10.11-VNC</name>
      <uuid>0ba39646-7ba1-4d41-9602-e2968b2fe36d</uuid>
      <metadata>
        <type>None</type>
      </metadata>
      <memory unit='KiB'>12582912</memory>
      <currentMemory unit='KiB'>12582912</currentMemory>
      <vcpu placement='static'>18</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='2'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='4'/>
        <vcpupin vcpu='4' cpuset='5'/>
        <vcpupin vcpu='5' cpuset='6'/>
        <vcpupin vcpu='6' cpuset='7'/>
        <vcpupin vcpu='7' cpuset='9'/>
        <vcpupin vcpu='8' cpuset='10'/>
        <vcpupin vcpu='9' cpuset='11'/>
        <vcpupin vcpu='10' cpuset='12'/>
        <vcpupin vcpu='11' cpuset='13'/>
        <vcpupin vcpu='12' cpuset='14'/>
        <vcpupin vcpu='13' cpuset='15'/>
        <vcpupin vcpu='14' cpuset='16'/>
        <vcpupin vcpu='15' cpuset='17'/>
        <vcpupin vcpu='16' cpuset='18'/>
        <vcpupin vcpu='17' cpuset='19'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-2.3'>hvm</type>
        <boot dev='hd'/>
        <bootmenu enable='yes'/>
      </os>
      <features>
        <acpi/>
      </features>
      <cpu mode='custom' match='exact'>
        <model fallback='allow'>core2duo</model>
      </cpu>
      <clock offset='utc'/>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>destroy</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/cache/vdisks/Elcapitan/Elcapitan.img'/>
          <target dev='hda' bus='sata'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='dmi-to-pci-bridge'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
        </controller>
        <controller type='pci' index='2' model='pci-bridge'>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:00:20:30'/>
          <source bridge='br0'/>
          <model type='e1000-82545em'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
        </interface>
        <memballoon model='none'/>
      </devices>
      <seclabel type='none' model='none'/>
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=03:00.0,bus=pcie.0,multifunction=on,x-vga=on'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=03:00.1,bus=pcie.0'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-kbd'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='usb-mouse'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='isa-applesmc,osk=redacted'/>
        <qemu:arg value='-smbios'/>
        <qemu:arg value='type=2'/>
      </qemu:commandline>
    </domain>
    
    

     

     BUT, when booting the VM the boot loader is not loaded; it gets stuck at the BIOS screen saying "Booting from Hard Disk" (but it never finds it).

     

     Does anyone have any ideas about what I'm doing wrong here? I'm using unRAID 6.1.9.

     

     Thanks a lot for this guide. I've been testing the VM and I think I will be perfectly able to work on it. Note that I haven't optimised it with isolcpus, etc. (see the sketch below). HD performance is not that great, but I guess with unRAID 6.2 and the new QEMU version, plus maybe passing through an entire PCIe NVMe SSD, the performance could be ridiculously fast!
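
     For anyone who does want to try the isolcpus optimisation mentioned above, a minimal sketch of the usual unRAID approach is to add isolcpus to the append line in /boot/syslinux/syslinux.cfg, so the host scheduler leaves those cores to the VM. The core list 2-5,8-11 is only an assumed example and must match your own vcpupin assignments:

     label unRAID OS
       menu default
       kernel /bzimage
       # reserve the VM's cores so host processes stay off them (example core list)
       append isolcpus=2-5,8-11 initrd=/bzroot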

  5. I honestly thought that passing through an entire SSD would give almost bare-metal performance (as it does with GPUs)... So if performance is not that good, it might not be worth having a separate SSD; I could just use the cache pool.

     I only tested it once, but yes, performance will be "near" bare metal (like a GPU -> 95-99%).

     

     Sorry to ask you again, just to clarify... you tested an SSD passthrough once and got 95-99% of bare-metal performance? Because if that's what you meant, that's absolutely perfect for me! (Was that using QEMU/KVM caching/improvements?)

     

     Also, when you mentioned 50-70% of bare-metal performance, were you referring to a vdisk running on the cache pool? Does that percentage include the caching and other improvements QEMU/KVM offer?

     

    Thanks!
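
     For reference, a minimal sketch of what passing a whole SSD into a VM could look like in the libvirt XML: the drive is handed to the guest as a raw block device via its /dev/disk/by-id path. The path below is a placeholder, and the cache/io settings are common choices rather than anything confirmed in this thread:

     <disk type='block' device='disk'>
       <!-- give the guest the whole physical SSD as a raw block device -->
       <driver name='qemu' type='raw' cache='none' io='native'/>
       <!-- placeholder: use the by-id entry of your own SSD -->
       <source dev='/dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL'/>
       <target dev='hdc' bus='sata'/>
     </disk>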