
Brydezen's Achievements





  1. That's great. I have tried this on my own VM. It does get more usable, but I'm still unable to play games with it. So I'm shopping for new hardware soon. It was time for an upgrade anyway.
  2. That was my next question, regarding how you installed the Nvidia drivers. Loading the VM up with no graphics driver but MSI enabled works fine, but halfway through it might crash, maybe because the update changes the device instance path. So you just installed them in safe mode? Having the installer crash midway is for sure going to cause some kind of weird problem if it's not allowed to finish fully. I think I might prep a fresh new VM in Q35 5.1 on my 6.8.3 machine and try to migrate that to either 6.9.2 or 6.10-RC2.
  3. So you are saying that you now have a fully functional VM with a GPU passed through, with no hiccups at all, just by enabling MSI interrupts? I also have some other questions about your VM: what BIOS and version did you use? Did you do a fresh reinstall? What Nvidia driver did you install? Do you have Hyper-V enabled on the VM? If yes, then what do you have in there? Any other special XML you have added? Tell as much as you can, so I can try to recreate it on my own machine 🤞🏻 I thought MSI was mostly enabled if you had audio issues on your VM. Looking at lspci -v -s <ID>, I can see my current VM on 6.8.3 does have MSI enabled on the GPU. It just seems odd that it should all come down to that. Maybe someone has created, or could create, a script to manually check if it's enabled on every boot. EDIT: This little snippet in PowerShell can grab the "DISPLAY" class devices, aka the installed GPU, and give you the path. I will see if I can get some sort of script up and running to check if MSISupported is set to 1 or not. gwmi Win32_PnPSignedDriver | ? DeviceClass -eq "DISPLAY" | Select DeviceID EDIT 2: I think I have most of the pieces for creating a noob script. I'm in no way good at PowerShell; this is my first ever attempt at creating one. But it will: check if there is a graphics card and get the device instance path; check if the "MessageSignaledInterruptProperties" key exists in the registry; then check if "MSISupported" exists and what value it has, and based on that value change it. And if it changes it, I will make it do an automatic reboot (maybe), or maybe just a popup saying it's been changed and you should reboot.
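The boot-time check described in EDIT 2 could look roughly like this in PowerShell. This is only a sketch, assuming the registry layout used by the widely shared MSI-mode tweak (the MessageSignaledInterruptProperties key under each device's Device Parameters\Interrupt Management branch); it needs an elevated prompt, and the output messages are just illustrative:

```powershell
# Sketch: check (and if needed set) MSISupported=1 for every display adapter.
# Run elevated. Registry path follows the common MSI-mode tweak layout.
$gpus = Get-WmiObject Win32_PnPSignedDriver |
    Where-Object DeviceClass -eq 'DISPLAY'

foreach ($gpu in $gpus) {
    $msiKey = "HKLM:\SYSTEM\CurrentControlSet\Enum\$($gpu.DeviceID)" +
              "\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties"

    if (-not (Test-Path $msiKey)) {
        Write-Output "$($gpu.DeviceName): no MessageSignaledInterruptProperties key found"
        continue
    }

    $val = (Get-ItemProperty -Path $msiKey -Name MSISupported `
            -ErrorAction SilentlyContinue).MSISupported

    if ($val -eq 1) {
        Write-Output "$($gpu.DeviceName): MSISupported is already 1"
    } else {
        # Value missing or 0: enable MSI mode, then a reboot is needed.
        Set-ItemProperty -Path $msiKey -Name MSISupported -Value 1 -Type DWord
        Write-Output "$($gpu.DeviceName): MSISupported set to 1, reboot to apply"
    }
}
```

A driver reinstall or update can recreate the device node, so running something like this at each boot (e.g. as a scheduled task) is what catches the value being silently reset.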
  4. UPDATE: I pulled the plug after spending over 5 days trying to fix it, so I rolled back to 6.8.3. Before I did, I also tried 6.10-RC2 as a last straw. I read somewhere that the Linux kernel had problems with VFIO passthrough in versions 5.1, 5.2 and 5.3, and Unraid just updated to 5.1 in 6.9.2, so I blame it on the kernel choice. I hope later versions of Unraid can advance beyond those kernels with potential problems. EDIT: Not saying he is right, but it seems odd that so many are having problems with the 5.1 kernel in Unraid 6.9(.X)
  5. I wanted you to maybe try a new Windows 10 VM on 6.9.2, Q35, with all-UEFI booting. I just tried it, and it still seemed to give me error 43. Then I tried pulling out my old main VM vdisk and created a new template for it, and lo and behold, it somewhat worked. It wasn't unusable, but I still wasn't able to play any games. I then tried to upgrade to the latest Nvidia GeForce Game Ready driver, using a "clean install" in the advanced section. And after doing that it went back to totally unusable. I blame Nvidia for the issue now, but it's hard to say for sure. Before, it was running 471.68 (Nvidia driver). Not sure what to do now. Maybe I will try this guide and see if it can fix the VM for good.
  6. If you don't mind, could you try it out? I have seen people talking about lower graphics driver versions, but I haven't tried that yet. My next move is trying to go back to Legacy; right now I'm on UEFI. I want to see if that gets me further. But installing the machine on Q35 and adding those lines to my template for sure got me further. I don't have time until Monday to work on it a bit more, but I'm only going to give it two more days before migrating back to 6.8.3. I can't be spending this much time on something that should just work.
  7. I've got some good news. I have been able to reinstall a new Windows 10 machine with only the GPU passed through and connect to it over RDP, though Windows keeps reporting error 43 for the GPU in Device Manager. I followed this guide to set up the VM itself: I then also unticked the GPU from Tools > System Devices and added the IDs directly to the flash drive boot command using: pci-stub.ids=XXXX:XXXX,XXXX:XXXX I have not overcome the error 43 yet, but it is for sure a step further than I have ever come before. I think I will try to follow this long guide next:
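For anyone following along: the pci-stub.ids values mentioned above go on the append line of the flash drive's syslinux/syslinux.cfg (editable from the flash device's settings page in the web GUI). A sketch of what that stanza ends up looking like, with the vendor:device IDs left as placeholders exactly as above (on recent kernels vfio-pci.ids= serves the same purpose):

```
label Unraid OS
  menu default
  kernel /bzimage
  append pci-stub.ids=XXXX:XXXX,XXXX:XXXX initrd=/bzroot
```

The actual vendor:device pairs are the ones shown next to the GPU and its audio function in Tools > System Devices.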
  8. I just tried a completely new install of Unraid, and at first glance everything seemed great. My main Windows VM actually started up and was somewhat usable, WAAAAAY more than after just updating the normal USB. But still not enough for me to be happy running it daily. So I decided to just try to make a completely new VM. Everything went great over VNC, and the first boot with a graphics card also seemed fine. But when the Nvidia drivers were about 15% into the installation, the VM just froze up. At this point I don't know what else to do. I can't be on 6.8.2 forever; I want to upgrade to 6.9.2 or beyond, but I don't know what to do at this point. I'm beginning to give up on Unraid. If it's this much hassle I might just end up switching to another hypervisor. I feel like I have done close to everything I can, trying to run the VM in a bazillion different configurations: Hypervisor: yes, no. USB: 2.0, 3.0, etc. More RAM, less RAM, and so on. Once in a while it will actually start up into the desktop itself, even with only the GPU and RDP, but crash after like 2-3 minutes.
  9. So you are running Legacy and I'm running UEFI. Same motherboard, and both of us have problems with Nvidia GPUs. Seems weird. Have you done any customization outside of the Unraid web GUI? I have done a few things related to CPUs, but now I can't remember exactly what it was. That's why I was thinking about doing a completely fresh Unraid install and testing. I haven't tried any SeaBIOS VMs; it seems like most people recommend OVMF, so it never really struck me to use SeaBIOS. If a completely fresh Unraid install doesn't work, I'm sadly moving to another hypervisor. It makes me kind of sad if it has to come down to that, but it seems like no one is really interested in helping anymore.
  10. Alright. Can you remember what boot option you were running? UEFI or Legacy?
  11. What motherboard are you using? And are you using UEFI or Legacy boot? My problems also start the second a graphics card is installed. It works just fine with VNC, but a graphics card makes the VM unusable.
  12. I'm using the same motherboard, but I doubt it's the motherboard. It could maybe be the UEFI boot option, though. I will maybe try to do a completely fresh install of Unraid with Legacy and UEFI, and just copy over the disk and cache arrays, so everything else is totally clean. It's a long shot, but I can't keep staying at 6.8.3; every plugin is getting outdated for my version. I just don't have that much time to deal with it. I need it to just work, so it's really frustrating. Please let me know if you find a way to fix your issue.
  13. What motherboard and CPU are you running?
  14. I could not get it to even boot into Legacy mode for some reason; it just kept telling me it's not a bootable device. So I'm giving up for now and going back to 6.8.3. I don't wanna waste more time troubleshooting when nothing seems to help at all. Hopefully the next release will work for me.
  15. I might just redo the installation and try to go with Legacy boot and see if that will fix anything for me.