dylantweedy

Members
  • Posts: 4
  • Joined
  • Last visited

About dylantweedy

  • Birthday 01/03/1992



  1. It's been a while since this post, but yeah, from what I recall your problem sounds the same as mine. I ended up removing the GPUs from that board and using them in a different machine. I've had some success passing them through on Pop OS with KVM and Virtual Machine Manager, and I'm gradually understanding how this whole thing works, but I'm still encountering performance issues on a Windows 10 gaming VM. I've just ordered an RTX 3060 Ti, so I may try moving a 1080 back to the ASRock EP2C602 motherboard and taking another crack at it. My goal is to have each GPU function as an individual gaming VM, then set them mining crypto when not in use (to hopefully recoup the cost of these ludicrous GPU prices!). It's taking me a while to overcome each problem I run into, but when I have more useful information I'll be sure to post it here! I was honestly hoping there would be an update to fix the issue by now.
  2. It works! I seem to be having performance issues, but it's booting and running Windows, so this is a big improvement.
  3. I have tried with SeaBIOS and didn't have much luck there. I'm currently making a new VM with SeaBIOS to see if it's any different. I tried it without the ROM, and it made things worse: the VM wouldn't even show the boot screen! I did post the diagnostics, didn't I?
  4. I've been trying to fix this for about a month with no luck. I've been running a Windows 10 gaming VM for a couple of years now (mostly) without a problem. It runs on an Nvidia GeForce 1080, which is the only graphics card in the system, and I'm also passing through a USB controller. (I vaguely remember having to edit my syslinux config and grab the graphics ROM BIOS to get it all working.)

Then I updated to 6.9.1 and things stopped working. I believe at that point I got a notification: "Legacy PCI Stubbing found, please help clear this warning, vfio-pci.ids or xen-pciback.hide found within syslinux.cfg. For best results on Unraid 6.9+, it is recommended to remove those methods of isolating devices for use within a VM and instead utilize the options within Tools - System Devices". I removed the "vfio-pci.ids=8086:1d2d" and checked it in the System Devices menu — still no luck. As the USB controller seemed to be working fine and I could still boot into recovery mode, I figured it was something to do with the graphics card, so I checked that too. That doesn't seem to help, and after disabling it and rebooting the green dots remain, which makes me wonder whether it was actually disabled.

It was at this point I made a big mistake: I saw that Unraid 6.9.2 was available. I was hoping it would just magically fix things, but it didn't, and it wasn't until I had already hit the update button that I noticed the restore button. I tried restoring to 6.9.1 but was unable to restore again to 6.8, so now I'm stuck!

I've gone back to the latest version, removed PCIe ACS override from syslinux, and disabled it in VM settings to see if it made a difference (it didn't). I even made a brand new VM, which was working perfectly until it installed the graphics card drivers; it actually seemed to work for a couple of hours, but after a restart it broke and hasn't worked since. It will attempt to boot, but fails and loads into recovery mode.

I'm pretty much out of ideas at this point, so if anyone can help me I'd be extremely grateful! I have attached the diagnostics file in case it's of any use. storageserver-diagnostics-20210521-1701.zip
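For anyone reading along: the "legacy stubbing" that the 6.9 warning refers to is a kernel parameter on the append line of syslinux.cfg. A minimal sketch of the old style looks roughly like this (the PCI ID matches the USB controller mentioned above; every other detail of the file will vary per system):

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=8086:1d2d initrd=/bzroot
```

On Unraid 6.9+ the recommended approach is to delete the `vfio-pci.ids=...` portion entirely and instead tick the device under Tools → System Devices, which handles the vfio-pci binding through the flash drive's config rather than a boot parameter.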
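And since the graphics ROM came up a couple of posts back: in a libvirt VM definition, a ROM file attached to a passed-through GPU sits inside the hostdev entry, roughly like this (the PCI address and file path below are placeholders for illustration, not values taken from this system):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- placeholder PCI address; use the GPU's actual address -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- placeholder path; point this at your dumped vBIOS file -->
  <rom file='/mnt/user/isos/vbios/gtx1080.rom'/>
</hostdev>
```

Removing the `<rom>` element is what "trying it without the ROM" amounts to: the card then falls back to its on-board video BIOS.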