SpaceInvaderOne

Community Developer

Posts posted by SpaceInvaderOne

  1. Hi folks,

     

      I imagine that the usb 3 ports are on bus 02 (sysdevices page attached below), but whenever I plug a device into any of the usb3 ports (on the motherboard i/o OR the case front usb ports) the device just doesn't show up under lsusb.

     

     

    When checking with lsusb, are you plugging in a USB 2.0 device or a USB 3.0 device? Maybe try a USB 3.0 device if you have one (it's worth a try).

     

    If you are sure that device 02:00.0 USB controller: VIA Technologies, Inc. Device 3483 (rev 01) isn't the controller your Unraid key is on (please double check), then you could just pass it through and see in Windows whether it is the controller for the front USB 3 ports.
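One way to double-check which controller the flash drive hangs off: note the drive's bus number in lsusb, then follow that bus's sysfs link back to a PCI address. A minimal sketch, using a made-up sysfs path so it can be tried anywhere (on the server, get the real path with readlink -f /sys/bus/usb/devices/usbN, where N is the bus number lsusb shows for the flash drive):

```shell
# Example sysfs path for a USB bus (hypothetical; on the host, get the
# real one with: readlink -f /sys/bus/usb/devices/usbN).
path="/sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.0/usb2"

# The last PCI address in the path is the controller that bus hangs off.
echo "$path" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' | tail -n1
# → 0000:02:00.0
```

If that address comes back as 0000:02:00.0, the flash drive is on the VIA controller and you must not pass it through.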

     

    Add this to the XML:

     

        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=02:00.0,bus=root.1,addr=00.2'/>

     

     

    So the end of your XML file would look like this:

    <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=06:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=01.0'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=06:00.1,bus=root.1,addr=00.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=02:00.0,bus=root.1,addr=00.2'/>
      </qemu:commandline>
    </domain>

  2. To get started, mine is a Win 10 VM.

    i7 6700, ASRock Z170M Extreme4, EVGA GTX 960

    SeaBIOS, i440fx 2.3, 8 CPU cores, 24 GB RAM

     

    test               score     graphics    physics    combined
    Fire Strike 1.1     6804       7873       10619        2661
    Sky Diver 1.0      20769      26255       10140       20881
    Cloud Gate 1.1     22187      47923        7705

    Just set up a Win 10 VM with the same specs but with the OVMF BIOS.

    i7 6700, ASRock Z170M Extreme4, EVGA GTX 960

    OVMF, i440fx 2.3, 8 CPU cores, 24 GB RAM

     

    test               score     graphics    physics    combined
    Fire Strike 1.1     6975       8072       10770        2738     (all better than my SeaBIOS VM)
    Sky Diver 1.0      20629      26176       10061       20353     (all slightly lower than my SeaBIOS VM)
    Cloud Gate 1.1     22476      49975        7682                 (higher graphics, slightly lower physics)

     

     

    I ran each test 3 times and got similar results.

    Overall it would seem, for me anyway, that I get better 3D performance using an OVMF VM.

  3. Sorry, can you also post your PCI devices and IOMMU groups from Tools > System Devices.

    Please post it using the insert code button (#) on the toolbar; it just makes it easier to read than a file attachment. So your XML would look like this :)

    <domain type='kvm' id='2' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>Windows Rig</name>
      <uuid>9e4bbedc-f281-a0d5-4129-c2589968ed39</uuid>
      <metadata>
        <vmtemplate name="Custom" icon="windows.png" os="windows"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
        <locked/>
      </memoryBacking>
      <vcpu placement='static'>6</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='2'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='5'/>
        <vcpupin vcpu='4' cpuset='6'/>
        <vcpupin vcpu='5' cpuset='7'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='6' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/Domains/Windows Rig/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/ISOs/en_windows_10_multiple_editions_version_1511_x64_dvd_7223712.iso'/>
          <backingStore/>
          <target dev='hda' bus='ide'/>
          <readonly/>
          <boot order='2'/>
          <alias name='ide0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/ISOs/virtio-win-0.1.112.iso'/>
          <backingStore/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <alias name='ide0-0-1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'>
          <alias name='pci.0'/>
        </controller>
        <controller type='ide' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:41:26:73'/>
          <source bridge='br0'/>
          <target dev='vnet0'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/0'/>
          <target port='0'/>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/0'>
          <source path='/dev/pts/0'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Windows Rig.org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <memballoon model='virtio'>
          <alias name='balloon0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </memballoon>
      </devices>
      <qemu:commandline>
        <qemu:arg value='-device'/>
        <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=00:03.0,bus=root.1,addr=01.0'/>
        <qemu:arg value='-device'/>
        <qemu:arg value='vfio-pci,host=00:1b.0,bus=root.1,addr=02.0'/>
      </qemu:commandline>
    </domain>
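If it's easier to gather the IOMMU groups from the command line instead of the GUI, they can be read straight out of sysfs. A minimal sketch (the loop below runs against a mock directory so it can be tried anywhere; on the Unraid host, set root to /sys/kernel/iommu_groups instead):

```shell
# Mock of the sysfs layout so the loop can be tried anywhere; on a real
# host set root=/sys/kernel/iommu_groups instead.
root=$(mktemp -d)
mkdir -p "$root/13/devices/0000:02:00.0"

# Each <group>/devices/<pci-address> entry maps a device to its group.
for d in "$root"/*/devices/*; do
  group=${d%/devices/*}   # strip the trailing /devices/<addr>
  group=${group##*/}      # keep just the group number
  echo "IOMMU group $group: ${d##*/}"
done
# → IOMMU group 13: 0000:02:00.0
```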

  4. To get started, mine is a Win 10 VM.

    i7 6700, ASRock Z170M Extreme4, EVGA GTX 960

    SeaBIOS, i440fx 2.3, 8 CPU cores, 24 GB RAM

     

    test               score     graphics    physics    combined
    Fire Strike 1.1     6804       7873       10619        2661
    Sky Diver 1.0      20769      26255       10140       20881
    Cloud Gate 1.1     22187      47923        7705

  5. I had a similar issue with my Onkyo amp. I can't remember quite what I did, but it was to do with the HDMI settings within the amp.

    Also, you must have the amp set to the correct HDMI channel before you start the VM. If you don't, the graphics card doesn't see that it's plugged into an HDMI port, and the display on the card defaults to the DVI port.

    Once the VM has started you can switch HDMI channels on the amp with no problems, but you must start the VM with the amp on the correct HDMI channel.

     

    hope this helps

  6. Nice work hunting down 'gpe6f'!

     

    I suspect you both need to either keep monitoring for another BIOS update, or hope for a workaround in a future Linux kernel.  That's how it usually works.  You're apparently too close to the 'bleeding edge'.

     

    Yes, I think it's a Z170 problem, not just an ASRock problem. Hopefully a BIOS update can address this, although ASRock just say that it is because Skylake support was only put into the Linux kernel from 4.3. However, the problem had been reported at https://bugzilla.kernel.org/show_bug.cgi?id=105491, and that was with kernel 4.3.0-rc4.

     

    I am happy with how my system is now with Unraid, but I didn't expect so many problems going to Skylake. Some of the problems are here: http://lime-technology.com/forum/index.php?topic=46141.msg441036#msg441036

     

    I think maybe we should post some topics for each motherboard chipset where we can post the problems we have had, to help others with the same hardware or others who are thinking of buying the same. We should do the same for GPU types, just to try and get the info in some logical order.

     

  7. I have a lot of VMs on my Unraid server, 21 at last count. Some are just duplicates of the same VM with different XML (i.e. one for GPU passthrough with USB passthrough and one for just VNC). Sometimes I start the wrong one in error.

     

    I would like to be able to pin my favourite or most-used VMs to the top of the VM list and/or onto the dashboard.

     

    This would make using my vms much easier. Thanks

     

  8. I added it to my go file, which will run it as soon as Unraid boots.

    Go to your flash drive; in the config folder you will see a file called go.

    You need to edit this file and add the line:

     

    #disable gpe for bios acpi error

    echo disable > /sys/firmware/acpi/interrupts/gpe6F

     

    The part #disable gpe for bios acpi error is just a comment naming what the line does and isn't actually necessary.
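If you need to work out which GPE is the noisy one in the first place, you can compare the per-GPE interrupt counts. A minimal sketch (using sample "name:count" data so it runs anywhere; on the server, build the list from the files under /sys/firmware/acpi/interrupts instead):

```shell
# Sample "name:count" lines; on a live system you would read the counts
# from /sys/firmware/acpi/interrupts/gpe* instead of hard-coding them.
counts="gpe10:3
gpe6F:1482012
gpe23:0"

# Sort numerically on the count field; the last line is the storming GPE.
echo "$counts" | sort -t: -k2 -n | tail -n1 | cut -d: -f1
# → gpe6F
```

Whichever name this prints is the one to echo disable against, as above.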

  9. Unfortunately this is an issue that affects Nvidia cards.

    When in the primary PCIe slot they don't pass through unless you have the Unraid output on an onboard GPU.

    If you changed your 9500 GT to an AMD GPU in the primary slot, it "should" work together with the GTX 970 in the secondary slot.

    (Also, I don't know if you would ever get a 9500 GT to pass through anyway, as it's quite an old card (2008). I may be wrong though.)

  10. Yes, it is possible to pass through all GPUs and run Unraid headless. Just telnet or SSH into Unraid.

    The exception: if you have integrated onboard graphics, it is not possible to pass that through at this time.

    Also, if you have an Nvidia GPU as your primary card (and no integrated graphics), then you may have problems passing it through.