SirKronan

Members · 5 posts

  1. Okay, my friend, it appears to be the MOTHERBOARD!! I literally just threw everything (drives, GPUs, SSDs, RAM, KB, mouse) into the X570, turned it on, AND IT ALL WORKED FREAKING PERFECTLY. Each VM restarts quickly and normally. Everything appears to be functional.
You know, I bought this motherboard for the white shroud, to fit a special black-and-white custom look, but when I first fired it up it reminded me suspiciously of an ASRock BIOS. I tried building multi-GPU rigs with 2 or 3 ASRock motherboards - just regular single-OS builds, not VMs, mind you - and it was a NIGHTMARE. I could never get them to address the multiple GPUs plus the integrated GPU correctly. I ended up returning both of them and using Asus Prime motherboards instead. Both the AMD and Intel based Prime motherboards had NO TROUBLE running and handling the multiple GPUs EXACTLY how I set them in the BIOS, and Windows 10 just hummed along happily. This is an Asus TUF X570, and it is just humming happily along doing EXACTLY what I set it to do. ASRock + multi-GPU = NIGHTMARES. Further testing shall be conducted, but it looks like I've figured out my problem.
  2. Correct! And I will attach a diagnostics file as soon as I get the chance, but I'm going to try a different motherboard first, just to rule out the mobo. The only spare I currently have is an X570 board with a 5700G processor in it... kind of hoping it IS the motherboard, as otherwise I may go insane, lol.
  3. I am not at home at the computer now, but I will post a diagnostics file when I can. Let me try to clear up any confusion. There is no 1660. Right now I have a 1650 and a 1060 that I am trying to get to work. These are the cards that have gotten me closest to success, but I'm not there yet.
If two cards are installed (one for each VM), it doesn't matter which card is in which slot: whichever card is in the first slot works, and I can get either VM to start with that card assigned. Both the 1650 and the 1060 work when in slot one. If there is only one card, even if it is in slot two, it works fine; I can start either VM with that card assigned, the 1650 or the 1060. If I start one VM with one of the Nvidia cards and the other VM with the VNC emulated screen, both will start and operate normally.
I have tried every setting I can think of. I'm wondering if it is simply a motherboard limitation when two cards are installed (a quick IOMMU-group check is sketched after this post list). I have an X570 motherboard I am thinking about transferring the GPUs and the SSDs to, just to rule out the motherboard being the issue.
I am at my wits' end with this. I'm glad Unraid offers a free trial, because there's NO WAY IN **** I would ever pay money for something THIS DIFFICULT to get running. Rant aside, if I end up figuring out how to make this work, I will gladly pay Unraid my $$.
I have dumped the VBIOS from each card, and that is what I am using. It doesn't seem to make a difference. I still get the "internal error: qemu unexpectedly closed the monitor" error every time I try to load a VM with the card installed in the second slot, no matter which card it is. With an AMD card in the second slot (a 5500XT), I can get it to boot both VMs successfully, and the 5500XT will even load the AMD drivers, run benchmarks, and operate perfectly - until I restart that VM. Then it's back to the typical black-screen error that AMD cards seem to be famous for.
  4. Here is the tail end of my XML. It is using the VBIOS I dumped myself using Spaceinvader One's script:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
       </source>
       <rom file='/mnt/user/isos/vbios/HP1650Supervbios.rom'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x2'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x05' slot='0x00' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
     </hostdev>
     <hostdev mode='subsystem' type='usb' managed='no'>
       <source>
         <vendor id='0x0461'/>
         <product id='0x4d0f'/>
       </source>
       <address type='usb' bus='0' port='1'/>
     </hostdev>
     <hostdev mode='subsystem' type='usb' managed='no'>
       <source>
         <vendor id='0x046d'/>
         <product id='0xc335'/>
       </source>
       <address type='usb' bus='0' port='2'/>
     </hostdev>
     <memballoon model='none'/>
     </devices>
     </domain>

This is for my second VM, using the 1650. The VM with the 1060 in slot one continues to work properly and reboots without any issue. Any ideas on how to fix mine? No matter what I try, I'm still getting the "internal error: qemu unexpectedly closed the monitor" error message. The 1650 does have 2 USB devices, and I have them selected in this VM as well. They are set to "Bind selected to VFIO at boot" (a quick lspci check of the card's functions and their bound drivers is sketched after this post list). Thanks again for attempting to help me.
  5. I feel like I'm banging my head against the wall with this. I have a GTX 1650 Super, and I have gotten it working properly for passthru on one successfully enabled VM. I was thrilled with that progress, as it took a WHILE, lol! I also have several other video cards I'm trying to run in the second PCIe slot for the second VM. It seems like I can assign and successfully use any devices I want, EXCEPT that second video card.
If I boot up the VM with VNC as the virtual monitor, viewed over the network from the PC I am using to remotely set up the VMs, it will boot into Windows fine. It will use USB devices, networking, install updates, etc. If I use the 1060, it gives me the "internal error: qemu unexpectedly closed the monitor" error message and won't proceed. It does this whether I create the VM with the 1060 as passthru from the start or try to pass the 1060 through after the VM is created. I tried a second GTX 1060 just to verify it's not a problem specific to this GPU.
Now, if I throw an RX 580 or a 5500XT into that second slot, it will actually start the VM. It will even boot into Windows with the basic display adapter. However, as soon as Windows loads a proper driver for the GPU, whether automatically with Windows Update or manually with the latest driver downloaded from AMD's website, the screen goes black. You can reboot the VM, and you will even see the logo and the dots start to spin, but the SECOND it initializes the GPU driver, the screen will either freeze or go black. Note, the display doesn't enter power save. It just stays stuck on either the black screen, or the frozen screen if rebooting. No error messages, and according to the logs, the VM is running correctly.
Could it be my motherboard having issues with that second PCIe slot? It's an NZXT Z490 motherboard (bought to match the black/white themed PC I'm trying to make for my kiddos)...? I'm about to toss in an Asus Prime Z490-A just to see if it works better. I have tried the 1060 both bound and unbound (referring to the "Bind selected to VFIO at boot" option) and I get the same result. Occasionally I'll get a different but similar error message; the QEMU one I posted above is the most common. Any help is GREATLY appreciated. I'm trying to get this PC going before I have to leave (again) for more AF training out of state.
UPDATE: Okay, here's where I'm at now. I could never get the 1060 to work with passthru on either VM - even as a single GPU, even with binding, the VBIOS from TechPowerUp, different versions of things, lots of different settings - nothing. I deleted the header, followed several tutorial videos, and tried different machine types for the VM (i440fx-5.0, 4.x, etc.). That approach WORKS with the 1650 Super, but I can't get it to work with either of the two 1060s I've tried.
So I can now get into Windows and get the AMD drivers to load with a 5500XT in one of the VMs. YAY! But if I restart that VM, it goes to the black-screen/frozen-screen error on the next boot, right at the point in the Windows load where you can tell it's initializing the GPU drivers. Here's the funny part: if I force stop it, restart it, or restart the whole array, it will perpetually do the frozen-screen thing. However, if I change the machine type of the VM (like switching from Q35 5.0 to Q35 4.0), the next boot after making that change goes into Windows quickly and successfully, and the AMD drivers are happy. Reboot? Locked screen again.
Now, if my son just had to run a script to reset the AMD card and then he could boot up again, that would be no big deal. I have a shortcut for the scripts; it's simple. But you have to do that, PLUS change to a different Q35 version of the VM (a manual reset sketch for the AMD card is included after this post list). I feel like I'm almost there! The 1650 Super VM is running just fine. It can restart at will and fire back up every time. Any further ideas? (Cross-posted on Reddit.) Thanks in advance for any help anyone can offer!
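
Sketch referenced in post 3 - checking IOMMU groups. Whether a second GPU can be passed through cleanly often comes down to how the motherboard splits its PCIe slots into IOMMU groups, so a quick check from the Unraid terminal is worth doing before blaming the card. This is a minimal sketch using the standard sysfs layout; group numbers and device addresses will differ per board, and a GPU that shares a group with chipset devices usually has to be passed through together with everything else in that group unless the ACS override setting is used.

     # List every IOMMU group and the devices inside it.
     for g in /sys/kernel/iommu_groups/*; do
         echo "IOMMU group ${g##*/}:"
         for d in "$g"/devices/*; do
             echo -e "\t$(lspci -nns "${d##*/}")"
         done
     done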
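
Sketch referenced in post 4 - verifying the passed-through functions. The XML above passes through four PCI functions of the device at bus 0x05, slot 0x00. A quick way to confirm those source addresses match what the host actually sees, and that every function is bound to vfio-pci before the VM starts, is a single lspci call (the 05:00 address is taken from the XML above; adjust it if the card enumerates at a different bus):

     # Show every function at bus 05, slot 00, with vendor/device IDs
     # and the kernel driver currently bound to each one.
     # Each function should report "Kernel driver in use: vfio-pci".
     lspci -nnk -s 05:00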
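
Sketch referenced in post 5 - the AMD reset bug. The 5500XT black-screen-on-reboot behavior matches the well-known Navi reset bug: the card does not reset cleanly between VM boots, so the second boot finds it in a bad state. The reset script mentioned in post 5 isn't shown here, but one common manual workaround is to drop the card off the PCI bus and rescan before starting the VM again, roughly as below. The 0000:06:00.x addresses are placeholders for the 5500XT's GPU and audio functions - substitute the real ones from lspci. Results vary by card and kernel; the gnif vendor-reset kernel module is generally considered the more reliable fix for Navi-family cards like the 5500XT, if it can be installed on the Unraid build in use.

     # Manual reset attempt for a Navi GPU stuck after a VM reboot.
     # 0000:06:00.0 / 0000:06:00.1 are placeholder addresses - replace
     # them with the 5500XT's actual GPU and HDMI-audio addresses.
     echo 1 > /sys/bus/pci/devices/0000:06:00.0/remove
     echo 1 > /sys/bus/pci/devices/0000:06:00.1/remove
     sleep 2
     # Rescan the bus so both functions re-enumerate in a clean state.
     echo 1 > /sys/bus/pci/rescan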