Posts posted by yesdog

  1. 1) From what I can tell, the Hyper-V extensions don't make any real difference. I've seen benchmarks that proved it (I'll try to find the link later), plus by personal 'look and feel' I've noticed nothing. Also, my findings:

    *hyperv relaxed - mostly exists to suppress some noisy logs from the Windows 'watchdog timer' service.

    *vapic - virtual APIC controller. This flag just tells Windows to manage its own interrupts instead of using the emulated hardware APIC. It might matter for some virtual PCI devices, but shouldn't actually affect passthrough devices.

    *spinlocks - I think this tells Windows to favor spinlocks over kernel locks when it can. This probably matters most when you have several Windows guests and the hypervisor is context-switching too often; it shouldn't matter with only a few guests.

    *hypervclock - high-performance virtual timer. It's good to have a high-performance timer, but HPET works fine. I think this exists as an alternative to HPET that may be more secure or scale better.

     

    A good thing to keep in mind: the Hyper-V extensions let a hypervisor run more guests, more securely, without a performance penalty. So I think for the most part they don't apply to a handful of gaming guests.
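    For contrast, this is roughly what those same enlightenments look like when turned *on* in libvirt domain XML (a sketch for a non-passthrough Windows guest; the `retries` value of 8191 is just a commonly used retry count, not something from my configs - libvirt requires the attribute when spinlocks is on):

      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
        </hyperv>
      </features>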

     

    2) My theory is it's because of the GRID SDK. NVIDIA does actually offer commercial support for proprietary virtualization solutions. This can be seen first hand in the 'floating point' cloud computing units available through services like AWS, which basically virtualize GPU access for virtual machines. From what I can tell there were ways to subdivide cards and do other fun stuff, and raw CUDA devices were exposed to the virtual machine. I'm guessing the Hyper-V extensions tip off the driver that additional hypervisor functionality is coming its way to tell it what to do; it never gets it, and shuts down. Plus the virtual devices exposed to the guest might not even be real GPUs and may be some generic CUDA device. Some day I might try to wander through the GRID SDK and see what's up.

     

    3) Only Windows. There are generally specialized 'virtual' kernels for Linux that are designed to run only as guests and offer some of the same performance advantages listed above. I think Hyper-V is just also the Microsoft-preferred flavor.

     

    This is what I finally settled on for my NVIDIA guests:

     

      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='off'/>
          <vapic state='off'/>
          <spinlocks state='off'/>
        </hyperv>
        <kvm>
          <hidden state='on'/>
        </kvm>
      </features>

      <clock offset='localtime'>
        <timer name='hypervclock' present='no'/>
        <timer name='rtc' present='no'/>
        <timer name='pit' present='no'/>
        <timer name='hpet' present='yes'/>
      </clock>

     

    EDIT: make sure HPET is enabled in the BIOS (use the x64 HPET setting for x64 guests).
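    On a Linux host you can quickly confirm the firmware is actually exposing HPET by listing the kernel's available clocksources (this is the standard sysfs path on mainline kernels; 'hpet' only appears if the BIOS/UEFI enabled it):

    ```shell
    # List the clocksources the host kernel detected.
    # 'hpet' should appear in this list when HPET is enabled in firmware.
    cat /sys/devices/system/clocksource/clocksource0/available_clocksource
    ```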
