Raventhone Posted May 2, 2022 Share Posted May 2, 2022 (edited) Hello, hopefully one of you amazing people can help me with an issue I'm having. I had a Win 10 VM working perfectly fine; I was gaming on it and had no issues with video output to my dual monitors. I moved to another house, and when I hooked up my server again my Windows 10 VM stopped outputting video. I didn't make any changes when I moved. Since it wasn't working, I nuked my VM and attempted it again. I can see the Windows desktop and do everything via VNC within Unraid, but nothing on my monitors. My Linux VM outputs video without any issues. Attached are my logs for the VM from startup. I also attached images of my config for the Windows 10 VM; it's the same setup as my Windows 11 VM except for passing through the TPM. Any ideas or help would be awesome. I don't use my graphics card for anything but the VM. Win 10 VM Log.docx Edited May 5, 2022 by Raventhone Quote Link to comment
ghost82 Posted May 3, 2022 Share Posted May 3, 2022 Please attach the diagnostics file, not office documents. Quote Link to comment
Raventhone Posted May 3, 2022 Author Share Posted May 3, 2022 Sorry, I'm still pretty new to Unraid... how do I get the diagnostics files for the VM? Quote Link to comment
ghost82 Posted May 3, 2022 Share Posted May 3, 2022 2 hours ago, Raventhone said: how do I get the diagnostic files for the VM? Please read this: https://wiki.unraid.net/Manual/Troubleshooting#System_Diagnostics Quote Link to comment
Raventhone Posted May 4, 2022 Author Share Posted May 4, 2022 unraid-6.10.0-rc5.txt unraid-6.10.0-rc5.txt domain.cfg flash.cfg cache.cfg docker.cfg super.dat ident.cfg share.cfg network.cfg go.txt disk.cfg vfio-pci.cfg libvirt.txt docker.txt vfio-pci.txt syslog.txt Windows 10.txt appdata.cfg r----r.cfg s----r.cfg domains.cfg b-----------e.cfg b-----------r.cfg W-----------s.cfg W--------0.cfg M---a.cfg M----s.cfg h------l.cfg isos.cfg G---s.cfg system.cfg b-------------------n.cfg d----e.cfg u-----c.cfg R---------e.cfg shareDisks.txt P----s.cfg T------s.cfg D-------s.cfg M----------s.cfg T--------p.cfg j-----t.cfg t--p.cfg P---------------r.cfg ST4000DM004-2CV104_ZTT2XBGF-20220503-2054 disk1 (sdc).txt _USB_DISK_3.0_0700139C81E07704-0-0-20220503-2054 flash (sda).txt ST4000DM004-2CV104_ZFN1KDFW-20220503-2054 parity (sdb).txt ST4000VN000-1H4168_Z3032VNC-20220503-2054 disk2 (sde).txt ST4000VN008-2DR166_ZDH1RJ2Z-20220503-2054 disk4 (sdd).txt ST4000VN000-1H4168_Z306JWT5-20220503-2054 disk3 (sdf).txt ADATA_SX8100NP_2J4720012214-20220503-2054 cache (nvme0).txt eui.00000000010000004ce00018dd8c9084-20220503-2054 cache (nvme0).txt cmdline.txt unraid-api.txt plugins.txt motherboard.txt iommu_groups.txt ethtool.txt folders.txt lsmod.txt lsscsi.txt loads.txt memory.txt lspci.txt meminfo.txt lsusb.txt urls.txt top.txt ps.txt vars.txt lscpu.txt lsof.txt df.txt ifconfig.txt Windows 10.xml Windows 11.xml Ubuntu.xml Quote Link to comment
ghost82 Posted May 4, 2022 Share Posted May 4, 2022 (edited)

Hi, I can't see any error in the logs. The only thing I would suggest is to change the GPU layout in your Windows 10 VM to multifunction. Change from:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</hostdev>

To:

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x3'/>
</hostdev>

If it still doesn't work: remove the GPU passthrough and re-enable VNC. Boot the VM with VNC and enable Remote Desktop inside the VM. Then remove VNC from the XML and add the GPU passthrough back, with the multifunction fix as above. Run the VM and connect to it directly with Remote Desktop from a second device. Look at the system devices in Windows and see whether the GPU is detected and what errors it reports. While in this state, also try reinstalling the Nvidia drivers inside the VM.

PS: if you need to attach diagnostics again in the future, do not attach single files; attach the zip that includes all of them.

Edited May 4, 2022 by ghost82 Quote Link to comment
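For anyone applying the same fix to a different GPU, the renumbering in the edit above is mechanical: every passed-through function keeps its host function number, all of them move to one shared guest slot, and function 0x0 gets multifunction='on'. As an illustration only (this is not a libvirt API; `make_multifunction` is a hypothetical helper, and the slot value is an assumption taken from the XML above), here is a small Python sketch of that transformation:

```python
# Sketch: rewrite the guest-side <address> of each <hostdev> so a
# multi-function GPU (video/audio/USB/UCSI) lands on one guest slot,
# mirroring the manual XML edit suggested above.
import xml.etree.ElementTree as ET

def make_multifunction(devices_xml: str, guest_slot: str = "0x05") -> str:
    """Give every hostdev the same guest slot, copy the host function
    number to the guest side, and flag function 0x0 as multifunction."""
    root = ET.fromstring(devices_xml)
    for hostdev in root.findall("hostdev"):
        host_fn = hostdev.find("source/address").get("function")
        guest = hostdev.find("address")  # direct child = guest address
        guest.set("slot", guest_slot)
        guest.set("function", host_fn)
        if host_fn == "0x0":
            guest.set("multifunction", "on")
        elif "multifunction" in guest.attrib:
            del guest.attrib["multifunction"]
    return ET.tostring(root, encoding="unicode")

# Two of the four functions from the XML above, wrapped for parsing:
devices = """<devices>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <driver name='vfio'/>
    <source><address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/></source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
  </hostdev>
  <hostdev mode='subsystem' type='pci' managed='yes'>
    <driver name='vfio'/>
    <source><address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/></source>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </hostdev>
</devices>"""

fixed = make_multifunction(devices)
```

Note this only rewrites the guest addresses; the host (source) addresses are left untouched, which is the whole point of the fix: the guest should see the card the same way the host does, as one slot with several functions.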
Raventhone Posted May 5, 2022 Author Share Posted May 5, 2022 Thank you, kind sir... my Windows VM is now working again. I have no idea why it stopped working, but I'm glad it's working again. Quote Link to comment