Checkm4te Posted June 16, 2020

Hey, I have a problem with my Windows 10 VM.

System overview:
- unRAID OS: Version 6.8.3 2020-03-05
- CPU: Intel Xeon E3-1246 v3
- MB: ASRock Z97 Extreme6
- GPU: Gainward Nvidia GTX 980 Ti GS

Situation summary: I shut down my server for a few hardware upgrades, two NVMe SSDs for the cache and a 10GbE card, and used the opportunity to do a BIOS upgrade as well. After that I started unRAID and my array and followed this guide to change my cache drives, which worked fine for me. Then, when I wanted to start my Windows 10 VM, I noticed that the GPU I have installed didn't show up in the VM settings. I searched and found out that I need to enable IOMMU pass-through, Intel VT-d in my case, in the BIOS, since the BIOS settings got reset by the upgrade. I checked everything again and booted the system back up.

My error: now the GPU shows up in my VM settings again and everything looked good so far, but when I start the VM the following error occurs:

internal error: qemu unexpectedly closed the monitor: 2020-06-16T17:56:01.149672Z qemu-system-x86_64: -device vfio-pci,host=0000:01:00.0,id=hostdev0,x-vga=on,bus=pci.0,addr=0x5: vfio 0000:01:00.0: group 1 is not viable
Please ensure all devices within the iommu_group are bound to their vfio bus driver.

So I checked the IOMMU groups, and I have no idea why it would group itself like that [screenshot: IOMMU groups / VM settings]. I did some research and found this, but I didn't really get the solution, so I wanted to ask what I should do:
- enable the PCIe ACS override (and if so, which setting), or
- do the vfio-bind (and how)?
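For anyone checking their own grouping: the IOMMU groups can be listed from the unRAID console with a small loop over sysfs. The paths are standard Linux and nothing here is unRAID-specific; the helper name is just for illustration:

```shell
#!/bin/bash
# list_iommu_groups: print each IOMMU group and the PCI devices in it.
list_iommu_groups() {
  local base="${1:-/sys/kernel/iommu_groups}"
  local dev grp
  for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue
    grp=${dev%/devices/*}   # strip ".../devices/0000:01:00.0"
    grp=${grp##*/}          # keep only the group number
    # lspci -nns adds the [vendor:device] IDs, which is exactly what a
    # vfio-pci.ids= stub expects; fall back to the raw address if lspci fails
    echo "group $grp: $(lspci -nns "${dev##*/}" 2>/dev/null || echo "${dev##*/}")"
  done
}

list_iommu_groups
```

Every device in the same group as the GPU must be bound to vfio-pci (or be a PCIe bridge) for passthrough to work, which is what the "group 1 is not viable" message is complaining about.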
harshl Posted June 16, 2020

Have a look at this video and the other videos Spaceinvader One has done around passthrough. They helped me a lot. If nothing else, you might learn a bit about each option and why you might use one over the other. Good luck!

-Landon
Checkm4te (Author) Posted June 16, 2020 (edited)

Hey, thank you for the information. I must have overlooked that video from him while researching. I started with his first suggestion, adding "vfio-pci.ids=10de:17c8,10de:0fb0,1d6a:d107". After I rebooted my server, I think I now have a much larger problem, since unRAID shows my cache drives as new drives. I tried removing the line I added and rebooting, but it still says both of them are new drives. And, as dumb as I am, I didn't do a backup of my cache files in the past months. So, is there any way to access the files on them? I haven't started the array yet. The pool was btrfs RAID1.

Edit: I can see them in Unassigned Devices. I also still have the two old cache drives I swapped out at the weekend.

Edited July 5, 2020 by Checkm4te: censored serial number
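For reference, the stub from the video ends up on the kernel append line in /boot/syslinux/syslinux.cfg. A sketch of what the stock unRAID boot entry looks like with the IDs from this post added (the label/kernel/initrd lines are the defaults, so only the vfio-pci.ids= part is the change; check against your own file before editing):

```
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=10de:17c8,10de:0fb0,1d6a:d107 initrd=/bzroot
```

One caveat: vfio-pci.ids matches by vendor:device ID, not by slot, so if two identical devices share an ID (say, two of the same NVMe controller), both get bound to vfio-pci, which is the kind of thing that can make drives vanish from unRAID.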
Checkm4te (Author) Posted June 18, 2020

I managed to back up my data from the cache pool thanks to this guide; I just had to use nvme0n1p1 as the device name. Also, what's not mentioned there: you unassign the two cache drives and start the array, so that you can copy your data to one of your disks/shares under /mnt/disks/...
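For anyone following along, the recovery boils down to mounting one pool member read-only and copying the data off. A rough sketch, assuming /dev/nvme0n1p1 is one of your old cache members (adjust to your system; the degraded option is only needed if a RAID1 member is missing):

```shell
#!/bin/bash
DEV=/dev/nvme0n1p1        # member of the old cache pool; adjust to yours
MNT=/tmp/cache_recovery
mkdir -p "$MNT"
if [ -b "$DEV" ]; then
  # ro = read-only so nothing on the pool is touched;
  # degraded lets btrfs mount with a missing RAID1 member
  mount -o ro,degraded "$DEV" "$MNT"
  # copy everything to an array disk, then unmount, e.g.:
  # rsync -a "$MNT"/ /mnt/disk1/cache-backup/
  umount "$MNT"
else
  echo "$DEV not present on this machine"
fi
```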
Checkm4te (Author) Posted July 5, 2020

Over the past weeks I managed to back up my data as mentioned above, but it seems it was just a display error: when I finally started my array yesterday to restore the data, the drives were shown as normal and everything was still there. I still recommend doing a full backup of your data/server anyway.

I then followed this video from Spaceinvader One and set my PCIe ACS override to "Downstream". After a reboot all my IOMMU groups look good and my VM starts again. Thank you harshl!
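To double-check that the override actually took effect after the reboot, you can look at the kernel command line and count the groups. As far as I can tell, pcie_acs_override=downstream is the flag the "Downstream" setting adds, but verify against your own /proc/cmdline:

```shell
# Show the ACS override flag if the kernel was booted with it
grep -o 'pcie_acs_override=[^ ]*' /proc/cmdline || echo "no ACS override on the cmdline"
# Count the IOMMU groups; more, smaller groups is expected once the override is on
ls /sys/kernel/iommu_groups 2>/dev/null | wc -l
```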