Dezian

  1. Nope, no locks. Just making sure the PCIe slot was supplying enough power for both cards when doing the dump (it was a motherboard overclock setting).
  2. Hi All- I am new to UNRAID, and I hit several of the Windows 10 / NVIDIA GPU passthrough issues that have been posted here. It seems that most video cards and setups need different tweaks to make them work, so this is my story, which hopefully helps you if you face any of the same issues I did.

The build:
Full Build- HERE
MB- MSI MPG Z390 Gaming Edge AC (I wanted a USB 3.1 header)
VC- EVGA GeForce GTX 1060 6GB 06G-P4-6163-KR

The issues:
1. I could not get the video card to pass through; no monitors worked at all. When I RDP'ed into the PC, I saw the video card driver had error 43.
2. When I fixed number 1, I was left with error 43.

TL;DR fixes:
1. Make sure, when dumping the GPU vBIOS, that your 2nd card is powering on and that UNRAID sees it. This matters; otherwise you will create a bad dump (which is easy to tell because it's very small).
2. When you add your vBIOS to whatever UNRAID share, name it xxx.ROM and NOT xxx.dump. This way the UI in Unraid 6.6.6 will let you choose it, and you can limit the XML editing as much as possible.
3. Turn off Hyper-V the real way. That means editing the XML of your VM and DELETING the whole <hyperv> block below:

    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>

4. Add the KVM hidden flag after the <apic/> tag:

    <kvm>
      <hidden state='on'/>
    </kvm>

5. The secret trick that finally fixed it for me was removing the "timer name" sub-tags of the clock offset. For whatever reason, these were telling the NVIDIA driver that it was on a VM.

Original:

    <clock offset='localtime'>
      <timer name='hypervclock' present='yes'/>
      <timer name='hpet' present='no'/>
    </clock>

What worked for me:

    <clock offset='localtime'/>

6. Try your hardest not to modify the XML any more than absolutely necessary. If you screw it up, the UI will no longer be able to save (it just sits at "updating" forever), and the net result is probably creating a new VM or VM XML shell (this happened to me) if you want to easily add USB devices by checkbox.
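Pulling fixes 3 through 5 together, the relevant slice of a working VM XML ends up looking roughly like the sketch below. Treat it as an illustration, not a complete definition: name, memory, OS, and device sections are omitted, and your own file will differ.

```xml
<domain type='kvm'>
  <!-- ...name, memory, os, and devices sections omitted... -->
  <features>
    <acpi/>
    <apic/>
    <!-- the entire <hyperv> block is deleted -->
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <!-- no <timer> sub-tags; just the bare offset -->
  <clock offset='localtime'/>
</domain>
```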
The path to success: Before I begin, I just want to say that most credit for all of the setup and design goes to SpaceInvader One. He doesn't know me, but without him I would never have gone down the UNRAID path, nor would I have fixed most of the issues I've had. If you read this, hats off to you, sir. I've really enjoyed and benefited from all of your hard work and learned a ton; keep it up!

So after ripping open all my parts, getting UNRAID set up, and playing with Dockers and a Windows VM, I was ready to start working on my setup. I plugged in my monitors and followed Part 2 of SpaceInvader One's Windows 10 build series. I made sure my BIOS was right, passed through the sound portion of the video card, checked my UEFI settings, and powered on... No dice. I sat there staring blankly at two monitors in power save mode. So I tried enabling split IOMMU groups and tried again... and again, two very dark monitors, and this time a new feeling of overwhelming sadness.

I RDP'ed into the VM and found that the Windows environment listed the GPU, but it was showing error 43. Some Google-fu immediately hinted to me that I was in for a relatively mediocre time.

So then I decided to try dumping the BIOS of the GPU. I followed two separate videos here and here. Determined not to open a forum post and have other people fix my problems, I forged ahead. What I found was very surprising: apparently, when I was dumping the GPU BIOS with the two-video-card method, UNRAID wasn't showing the first video card, and the dumps created were very small (26KB); I distinctly remember hearing in the video that the size should be ~125KB. After some painstaking research, I found that the motherboard had a setting for passing power to the PCIe slot. To this day, I still don't understand why it powered the 2nd GPU and not the first (I didn't have to set it for the first card; it just worked), but I chalk it up to bad luck.
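That size check is easy to automate. Here's a small shell sketch of the idea; the function name, the example path, and the ~100KB threshold are my own assumptions (based on my ~26KB bad dumps vs. my ~125KB good one), not anything Unraid ships:

```shell
#!/bin/sh
# check_vbios: warn if a dumped vBIOS file looks too small to be valid.
# Threshold is a rough heuristic: my good GTX 1060 dump was ~125KB,
# while the bad dumps from the underpowered card were only ~26KB.
check_vbios() {
    rom="$1"
    size=$(stat -c%s "$rom") || return 1
    if [ "$size" -lt 100000 ]; then
        echo "WARNING: $rom is only $size bytes - likely a bad dump"
        return 1
    fi
    echo "OK: $rom is $size bytes"
}
```

Usage would be something like `check_vbios /mnt/user/isos/vbios/gtx1060.rom` (path is just an example) before you ever point the VM at the file.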
I dumped the GPU BIOS, updated the link in UNRAID, and instantly my monitors came to life with a successful (albeit very large) Windows login screen. Victory!... or so I thought. After logging in, I was faced with the dreaded Windows driver error 43.

After extensive research on KVM, UNRAID, and NVIDIA, it was clear that the GPU was failing to load the driver because the NVIDIA driver was detecting that it was running in a VM. So I went to work compiling all the knowledge I could find about how other people had fixed it. I tried everything I could find, including some rather unorthodox XML editing, but alas, no success.

I was starting the process of downloading files for my last-ditch effort when I had a thought about how ESXi passes time to VMs, so I read some more about KVM in general and found that the clock offset tag was what I was looking for. I went through my XML and found that the tag was referencing Hyper-V, so I thought I would just delete the tag altogether and... BINGO! The driver started up and the system was great. The only problem was that the clock in Windows kept resetting to UTC on every boot, so I had to go back and add the 'localtime' flag.

The only side effect of my many edits to the VM XML was that I could no longer save the VM in the UNRAID non-XML UI. I found this very annoying for adding USB devices, so in the end I decided to rebuild the VM from scratch with lessons learned.

So that's my story... I hope folks can learn something from my experience to improve their own. While this was a challenge, I have had a ton of fun learning and exploring all the possibilities. Next up for me is motherboard audio passthrough, USB controller passthrough, and (possibly) attempting to pass through my m.2 as a raw device to Windows. Thanks for reading!

EDIT: Fixed some links