Posts posted by duketwo

  1. Can we please have the clock settings in the UI? e.g.

      <clock offset='localtime'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='yes'/> <!-- 'yes' reduces idle CPU usage a lot on Windows VMs; the current default is 'no' -->
      </clock>

    or at least the functionality to keep manual changes to the XML from being lost when the UI is used again?
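
    For reference, this is roughly how such a change has to be applied by hand at the moment; "Windows 10" is just a placeholder for the actual VM name:

    # edit the domain XML directly on the unRAID host;
    # anything added here is lost again as soon as the VM is saved from the UI
    virsh edit "Windows 10"
    # then replace the generated <clock> element with the one above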

  2. Quote

     kernel:[  585.812028] Uhhuh. NMI received for unknown reason 31 on CPU 0.

    Message from syslogd@debian at Apr 26 16:31:11 ...
     kernel:[  585.812028] Do you have a strange power saving mode enabled?

    Message from syslogd@debian at Apr 26 16:31:11 ...
     kernel:[  585.812028] Dazed and confused, but trying to continue

    Message from syslogd@debian at Apr 26 16:31:41 ...
     kernel:[  615.812030] Uhhuh. NMI received for unknown reason 21 on CPU 0.

    Message from syslogd@debian at Apr 26 16:31:41 ...
     kernel:[  615.812030] Do you have a strange power saving mode enabled?

    Message from syslogd@debian at Apr 26 16:31:41 ...
     kernel:[  615.812030] Dazed and confused, but trying to continue

     

    Also affected:

    ASRock X99 Extreme4 // 32GB ECC // GTX 750 passthrough (other VM)

     

    The affected VM is a Debian Jessie VM (kernel 3.16.0-4-amd64). The problems didn't show up until I disabled IPv6 inside the VM.

     

    sysctl.conf

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    net.ipv6.conf.eth0.disable_ipv6 = 1

    Not sure if it's related; I'll try to revert the sysctl settings and see if the errors go away.
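
    Something along these lines should be enough for a quick test, assuming the four lines above are the only IPv6-related entries in sysctl.conf:

    # re-enable IPv6 at runtime without a reboot
    sysctl -w net.ipv6.conf.all.disable_ipv6=0
    sysctl -w net.ipv6.conf.default.disable_ipv6=0
    sysctl -w net.ipv6.conf.lo.disable_ipv6=0
    sysctl -w net.ipv6.conf.eth0.disable_ipv6=0
    # and comment the entries out of /etc/sysctl.conf so the revert survives a reboot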

     

     

    Any ideas where to start debugging errors like this?

  3. Update:

     

    I tried to pci-stub only the RAID controller, which didn't work. Then I stubbed all devices of IOMMU group 23, and now I can see the RAID 5 array of the Intel RAID controller within the VM.

    I also tried disabling the "PCIe ACS Override" afterwards, which had no effect at all; it's still working.

     

    My syslinux.cfg now looks like this (irrelevant parts removed):

    
    label unRAID OS
      menu default
      kernel /bzimage
      append pci-stub.ids=8086:2822,8086:8d47,8086:8d22 initrd=/bzroot
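
    A quick way to check after a reboot that all three devices were actually claimed by the stub driver (the IDs match the append line above):

    lspci -nnk -d 8086:2822   # each should report "Kernel driver in use: pci-stub"
    lspci -nnk -d 8086:8d47
    lspci -nnk -d 8086:8d22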

     

    The partition is encrypted with TrueCrypt/VeraCrypt; I can mount and use it without any issues. The RAID functionality also seems intact: I accidentally didn't connect one drive properly, and the RAID controller rebuilt the array after I reconnected the missing drive.

     

    The problem I already had with ESXi persists: the Intel Storage Manager tool tells me the drive is incompatible, even though everything seems to work. I think the RAID OpROM isn't being loaded within the VM. Is there any way to load the RAID controller OpROM within the VM?

     

    I tried changing the CSM setting in the BIOS that controls loading of the RAID OpROM (BIOS/UEFI/Never), but that doesn't seem to have any effect at all.

     

    The log of the VM shows:

     

    2017-03-01T00:24:32.517154Z qemu-system-x86_64: vfio: Cannot reset device 0000:00:1f.2, no available reset mechanism.

     

    Thanks!

     

  4. I want to pass the Intel onboard RAID controller through to a Windows 10 VM.

     

    My hardware specs are: 

    Intel Xeon E5-2630 v4 (ES)
    64GB ECC
    ASRock X99 Extreme4
    GTX 750 (N750-2GD5/OCV1)
    VT-d enabled

     

    I have enabled "PCIe ACS Override". After enabling the override and rebooting, the relevant IOMMU group looks like this:

    IOMMU group 23
    	[8086:8d47] 00:1f.0 ISA bridge: Intel Corporation C610/X99 series chipset LPC Controller (rev 05)
    	[8086:2822] 00:1f.2 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 05)
    	[8086:8d22] 00:1f.3 SMBus: Intel Corporation C610/X99 series chipset SMBus Controller (rev 05)

    I have tried to pass the device through with:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
       </source>
     </hostdev>

    When trying to launch the VM, I get the following error:

    internal error: qemu unexpectedly closed the monitor: 2017-02-28T22:30:08.326161Z qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev4,bus=pci.0,addr=0x8: vfio: error, group 23 is not viable, please ensure all devices within the iommu_group are bound to their vfio bus driver.
    2017-02-28T22:30:08.326194Z qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev4,bus=pci.0,addr=0x8: vfio: failed to get group 23
    2017-02-28T22:30:08.326212Z qemu-system-x86_64: -device vfio-pci,host=00:1f.2,id=hostdev4,bus=pci.0,addr=0x8: Device initialization failed
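
    For reference, the other devices sharing the group can also be listed directly via sysfs (path derived from the 00:1f.2 address above):

    ls /sys/bus/pci/devices/0000:00:1f.2/iommu_group/devices/
    # expected output, matching the listing above: 0000:00:1f.0  0000:00:1f.2  0000:00:1f.3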

    I'm moving from ESXi, where I managed to pass through the RAID controller. With unRAID I managed to pass through the GPU, but I'm struggling with the RAID controller.

    Do I need to detach the RAID controller from unRAID first, since I can still see the attached drives there?

    Any hints are appreciated.

  5. I managed to pass through a single Nvidia GPU without manually supplying the GPU ROM.

     

    I did that by disabling the video OpROM via CSM ( https://www.manualslib.com/manual/805051/Asrock-X99-Extreme4.html?page=92 ).

     

    That way the ROM isn't loaded until the VM starts up. The only downside is that you can't access the BIOS anymore until you reset the BIOS settings.

     

    My hardware:

    Intel Xeon E5-2630 v4 (ES)

    64GB ECC

    ASRock X99 Extreme4

    GTX 750 (N750-2GD5/OCV1)
