Posts posted by SpaceInvaderOne
-
Try using the USB driver for that controller (VIA) from the driver CD for your motherboard. Or if you don't have it, go to the motherboard manufacturer's website and download the driver from there.
-
Hi folks,
I imagine that the USB 3 ports are on bus 02 (System Devices page attached below), but whenever I plug a device into any of the USB 3 ports (on the motherboard I/O OR the case front USB ports) the device just doesn't show up under lsusb.
When checking with lsusb, are you plugging in a USB 2.0 device or a USB 3.0 device? Maybe try a USB 3.0 device if you have one (worth a try).
If you are sure that device 02:00.0 USB controller: VIA Technologies, Inc. Device 3483 (rev 01) isn't the controller your unRAID key is on (please double check), then you could just pass it through and see in Windows if it is the USB 3 ports on the front.
Add this to your XML:
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=02:00.0,bus=root.1,addr=00.2'/>
So the end of your XML file would look like this:
<qemu:commandline>
  <qemu:arg value='-device'/>
  <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='vfio-pci,host=06:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=01.0'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='vfio-pci,host=06:00.1,bus=root.1,addr=00.1'/>
  <qemu:arg value='-device'/>
  <qemu:arg value='vfio-pci,host=02:00.0,bus=root.1,addr=00.2'/>
</qemu:commandline>
</domain>
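One thing to double-check (only mentioning it because I can't see the top of your XML): the <qemu:commandline> section is only valid if the opening <domain> tag at the very top of the file declares the qemu namespace, like this:

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
```

If that attribute is missing, libvirt will strip the qemu:commandline block when you save the XML.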
-
It sounds like when you shut down the VM it's not releasing the graphics card, so when you start it again the screen is black. Restarting the server releases the card, hence the VM works again.
Best to post your PCI devices, IOMMU groups, and XML so we can see.
-
To get started, mine is a Win 10 VM:
i7 6700, ASRock Z170M Extreme4, EVGA GTX 960
SeaBIOS, i440fx 2.3, 8 CPU cores, 24 GB RAM

Test             Score   Graphics   Physics   Combined
Fire Strike 1.1  6804    7873       10619     2661
Sky Diver 1.0    20769   26255      10140     20881
Cloud Gate 1.1   22187   47923      7705      n/a
Just set up a Win 10 VM, same specs but with OVMF BIOS:
i7 6700, ASRock Z170M Extreme4, EVGA GTX 960
OVMF, i440fx 2.3, 8 CPU cores, 24 GB RAM

Test             Score   Graphics   Physics   Combined
Fire Strike 1.1  6975    8072       10770     2738    (all better than my SeaBIOS VM)
Sky Diver 1.0    20629   26176      10061     20353   (all slightly lower than my SeaBIOS VM)
Cloud Gate 1.1   22476   49975      7682      n/a     (higher graphics, slightly lower physics)

I ran each test 3 times and got similar results.
Overall, it would seem (for me anyway) that I get better 3D performance using an OVMF VM.
-
Sorry, can you also post your PCI devices and IOMMU groups from Tools > System Devices.
Please post it using the insert-code button (#) on the toolbar; it's just easier to read than a file attachment. Your XML would then look like this:
<domain type='kvm' id='2' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows Rig</name>
  <uuid>9e4bbedc-f281-a0d5-4129-c2589968ed39</uuid>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>6</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='6'/>
    <vcpupin vcpu='5' cpuset='7'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='6' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/Domains/Windows Rig/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/en_windows_10_multiple_editions_version_1511_x64_dvd_7223712.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOs/virtio-win-0.1.112.iso'/>
      <backingStore/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:41:26:73'/>
      <source bridge='br0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/0'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/Windows Rig.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-device'/>
    <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=01:00.1,bus=root.1,addr=00.1'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:03.0,bus=root.1,addr=01.0'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='vfio-pci,host=00:1b.0,bus=root.1,addr=02.0'/>
  </qemu:commandline>
</domain>
-
Yes, the first specs are from the server hosting my VM:
i7 6700, ASRock Z170M Extreme4, EVGA GTX 960
The second is from the VM itself:
SeaBIOS, i440fx 2.3, 8 CPU cores, 24 GB RAM
You can download 3DMark Basic from http://www.techpowerup.com/downloads/2497/futuremark-3dmark-2013-v1-5-915/
-
Please post your XML file.
-
Ah yes, the unRAID 6.2 "soon™" edition.
I want the unRAID 6.2 "now" edition! ...but I was never patient.
-
To get started, mine is a Win 10 VM:
i7 6700, ASRock Z170M Extreme4, EVGA GTX 960
SeaBIOS, i440fx 2.3, 8 CPU cores, 24 GB RAM

Test             Score   Graphics   Physics   Combined
Fire Strike 1.1  6804    7873       10619     2661
Sky Diver 1.0    20769   26255      10140     20881
Cloud Gate 1.1   22187   47923      7705      n/a
-
Hi, thought I'd make a post where we can share some benchmarks of our gaming VMs using 3DMark.
Post the CPU, motherboard, and graphics card in your unRAID box.
Then the type of BIOS (SeaBIOS or OVMF), and the amount of CPU cores and RAM assigned.
-
Disconnect the hard drives you use for your Windows computer and pop in a spare drive if you have one. Then boot it off a USB stick; that way you can do some proper testing.
I have never virtualised unRAID, so I couldn't help you with that.
-
Yes, sleep/hibernation causes problems.
Follow Jon Panozzo's guide;
especially from 15:20 onwards there are a few post-install things to do on a Win 10 VM.
-
I had a similar issue with my Onkyo amp. I can't remember quite what I did, but it was to do with an HDMI setting within the amp.
Also, you must have the amp set to the correct HDMI channel before you start the VM. If you don't, the graphics card doesn't see that it's plugged into HDMI, and the display defaults to the card's DVI port.
Once the VM has started you can switch HDMI channels on the amp with no problems, but you must start the VM with the amp on the correct HDMI channel.
Hope this helps.
-
The best thing you can do is pass through a whole USB controller. Then anything you plug into it will work, plus you will have the benefit of USB hot swap.
Guide is here: http://lime-technology.com/forum/index.php?topic=36768.0
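In the VM's XML, passing through the whole controller is just a PCI hostdev entry. A minimal sketch; 00:14.0 below is an example address (on many Intel boards that's the chipset USB controller), so substitute the address of your own controller from Tools > System Devices:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- host PCI address of the USB controller (example: 00:14.0) -->
    <address domain='0x0000' bus='0x00' slot='0x14' function='0x0'/>
  </source>
</hostdev>
```

It goes inside the <devices> section, and the controller needs to be in its own IOMMU group (or you'll need the ACS override).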
-
Try enabling MSI interrupts for the Nvidia card; I had some problems with my GTX 960 until I did.
Use this program: https://www.dropbox.com/s/gymaipg6vprd508/MSI_util.zip?dl=0
Run it as admin on Windows and tick the Nvidia entries.
Enable it for both the video and audio parts of the card.
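For reference, as I understand it all the utility does is flip one registry value under the GPU's device key. A sketch of the equivalent .reg entry; the device instance path is a placeholder you'd replace with your own card's path from Device Manager:

```
Windows Registry Editor Version 5.00

; <your-device-instance-path> is a placeholder; find the real path in
; Device Manager > your GPU > Details > Device instance path
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<your-device-instance-path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
"MSISupported"=dword:00000001
```

Do the same under the card's audio function, then reboot the VM for it to take effect.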
-
But you had passthrough working for a few days to start with?
-
Probably a long shot, but try changing the position of the card to different slots without the ACS override setting turned on. Maybe the GPU in slot 3 and the SAS cards in 1 and 2, without ACS override. I would try all possible combinations of slots with ACS override on and off.
-
Nice work hunting down 'gpe6f'!
I suspect you both need to either keep monitoring for another BIOS update, or hope for a workaround in a future Linux kernel. That's how it usually works. You're apparently too close to the 'bleeding edge'.
Yes, I think it's a Z170 problem, not just an ASRock problem. Hopefully a BIOS update can address this, although ASRock just says it's because Skylake support was only added to the Linux kernel from 4.3. However, the problem had been reported on https://bugzilla.kernel.org/show_bug.cgi?id=105491 and that was with kernel 4.3.0-rc4.
I am happy with how my system is now with unRAID, but I didn't expect so many problems going to Skylake. Some of the problems are here: http://lime-technology.com/forum/index.php?topic=46141.msg441036#msg441036
I think maybe we should post some topics for each motherboard chipset where we can collect the problems people have had, to help others with the same hardware or those thinking of buying it. We should do the same with GPU types, just to get the info into some logical order.
-
I have a lot of VMs on my unRAID; 21 at last count. Some are just duplicates of the same VM with different XML (i.e. one for GPU passthrough with USB passthrough, and one for just VNC). Sometimes I start the wrong one in error.
I would like to be able to pin my favourite or most-used VMs to the top of the VM list and/or onto the dashboard.
This would make using my VMs much easier. Thanks.
-
I added it to my go file, which will run it as soon as unRAID boots.
Go to your flash drive, then in the config folder you will see a file called go.
You need to edit this file and add these lines:
#disable gpe for bios acpi error
echo disable > /sys/firmware/acpi/interrupts/gpe6F
The part '#disable gpe for bios acpi error' is just a comment naming what the line does and isn't actually necessary.
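For anyone unsure what the finished file should look like: assuming a stock go file (the emhttp line below is what ships by default; keep whatever is already in yours), after the edit it would be:

```sh
#!/bin/bash
# Start the Management Utility (already present in the stock go file)
/usr/local/sbin/emhttp &

#disable gpe for bios acpi error
echo disable > /sys/firmware/acpi/interrupts/gpe6F
```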
-
Unfortunately this is an issue that affects Nvidia cards.
When in the primary PCIe slot they don't pass through unless unRAID's output is on an onboard GPU.
If you changed your 9500 GT for an AMD GPU in the primary slot, it "should" work together with the GTX 970 in the secondary slot.
(Also, I don't know if you would ever get a 9500 GT to pass through anyway, as it's quite an old card (2008). I may be wrong though.)
-
Yes, it is possible to pass through all GPUs and run unRAID headless; just telnet or SSH into unRAID.
The exception: if you have integrated onboard graphics, it is not possible to pass that through at this time.
Also, if you have an Nvidia GPU as your primary card (and no integrated graphics), then you may have problems passing it through.
Posted in "Windows 10 Unusable/BSOD/Freezing/Graphics driver issues" (VM Engine (KVM)):
Problem looks like your GPU is not alone in its own IOMMU group.
Try setting PCIe ACS Override to yes and see if that puts the GPU in its own IOMMU group.
Hope that helps.
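By the way, the ACS Override toggle just edits syslinux.cfg on the flash drive, adding a kernel option to the append line. If you ever need to set it by hand, a sketch assuming a stock config (your menu entries may differ):

```
label unRAID OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream initrd=/bzroot
```

Reboot the server after changing it, then check Tools > System Devices to see if the GPU landed in its own group.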