ThePockets

Members
  • Posts: 13
Everything posted by ThePockets

  1. Update: it did work! I made the mistake of putting "video=efifb:off" on a new line in syslinux.cfg rather than on the same line after "append" (the corrected line is sketched after this post list). Thank you so much for your help!
  2. Thank you for your help! I followed what you said, but adding "video=efifb:off" didn't seem to change anything in the results or in iomem itself. Here's a copy of /proc/iomem; it still lists "efifb" for the address range belonging to device 08:00.0 (a quick way to check this is sketched after this post list). This is after adding the change to syslinux.cfg and rebooting:
  3. Hello friends! I've had a VM with a dedicated GPU passed through to it for use as an HTPC on my TV for 4-5 months now, and it has been working great. A couple of weeks ago I encrypted my array, and ever since then the VM hasn't been outputting video. This warning (in yellow) is repeated in the logs nonstop:

2021-10-07T22:41:27.349751Z qemu-system-x86_64: vfio_region_write(0000:08:00.0:region1+0x106c28, 0x0,1) failed: Device or resource busy

The "Fix Common Problems" plugin also notifies me that /var/log is 100% used (a few commands for finding what is filling it are sketched after this post list). Device 08:00.0 is the GPU I'm passing through; do you know what could be causing this? I'm not completely sure encrypting the array was the cause, since I hadn't tried to use the VM for a couple of weeks before I did the encryption. Also, the VM is completely off the array, using only an unassigned-devices SSD that is also passed through. Here are the relevant specs:

Ryzen 3600
Asus B550M-Plus
2x8GB RAM
Sabrent Rocket 256 GB NVMe SSD
EVGA GTX 1660Ti SC Ultra

Thank you for your help! I have attached my diagnostics below, let me know if I can provide more info. tommy-diagnostics-20211007-1656.zip
  4. @testdasi Thank you for the help! That didn't seem to change anything, but I appreciate you helping me. What changes did you make to the XML file? I'm still trying to learn XML. Thanks again!
  5. Yes, that's correct. I might be doing something wrong, but if I try using SeaBIOS at all it can't find a boot device, so I'm not really able to test. It's the same even if I just use my cache drive instead of the separate SSD. Thanks for the suggestion, I did notice that when looking at the VM logs, and all 4 devices are being passed through. The only error I can find is the "2020-07-09 02:31:15.643+0000: Domain id=1 is tainted: custom-argv" line, which is marked red. But I have no idea what that means haha. Here are my most recent diagnostics: tommy-diagnostics-20200725-1540.zip
  6. @jonp Sorry for the late update, and thank you all for the suggestions! Using OVMF + Q35 did solve the problem of it crashing the entire server. If I keep using VNC for graphics, it looks like I can pass anything else through fine: I passed a USB controller and the onboard audio through and both worked great. If I try to pass through my GPU, though, all I get is a black screen. In the VM logs I get this:

2020-07-09 02:31:15.643+0000: Domain id=1 is tainted: high-privileges
2020-07-09 02:31:15.643+0000: Domain id=1 is tainted: custom-argv <---- THIS LINE IS MARKED RED
2020-07-09 02:31:15.643+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2020-07-09T02:31:17.580603Z qemu-system-x86_64: -device vfio-pci,host=0000:08:00.0,id=hostdev0,bus=pci.2,addr=0x0,romfile=/mnt/cache/domains/vbios/TU116_edited.rom: Failed to mmap 0000:08:00.0 BAR 3. Performance may be slow <----- THIS LINE IS MARKED YELLOW
2020-07-09T02:48:15.583801Z qemu-system-x86_64: terminating on signal 15 from pid 6361 (/usr/sbin/libvirtd)

I dumped the GPU's vBIOS using GPU-Z on another computer, and I tried both an unedited vBIOS and one edited the way SpaceInvader One explains in his NVIDIA GPU passthrough video (the header-trimming idea is sketched after this post list). Neither version seems to work. I have been trying to troubleshoot these problems (which is why I haven't replied in a while), but I haven't found anything that helps. Thanks again for your help!
  7. That didn't seem to help, thank you though! Just in case there's something else I'm doing wrong, here's my XML file:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows 10</name>
  <uuid>15a50093-78bb-c4f0-502c-99ed3151c72e</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>12582912</memory>
  <currentMemory unit='KiB'>12582912</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='7'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='9'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='10'/>
    <vcpupin vcpu='6' cpuset='5'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-5.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/15a50093-78bb-c4f0-502c-99ed3151c72e_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-model' check='none'>
    <topology sockets='1' dies='1' cores='4' threads='2'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows/Windows.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows/virtio-win-0.1.173-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:97:ea:d8'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/vbios/TU116_edited.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x4'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,topoext=on,invtsc=on,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vpindex,hv-synic,hv-stimer,hv-reset,hv-frequencies,host-cache-info=on,l3-cache=off,-amd-stibp'/>
  </qemu:commandline>
</domain>
  8. Hello friends! I've been working on setting up a Windows 10 VM for a gaming HTPC for the past week, and I can't figure out what's going on. I am passing through a Sabrent Rocket PCIe 3.0 NVMe SSD dedicated to the VM, and if I use no other PCIe device it works fine through VNC. However, if I add any other device (GPU, different USB controllers, or the motherboard sound card; I tried each one at a time), the VM does not start, and after a couple of minutes I lose connection to the entire server and am forced to do a hard shutdown. When trying to pass through the GPU, I am also passing the vBIOS I dumped with GPU-Z, as explained by SpaceInvader One. I have searched for solutions for many hours with no luck (a quick way to list IOMMU groups, which seems relevant to adding more passthrough devices, is sketched after this post list). I think it's really weird that on the new Unraid build 6.9.0-beta22 I can pass the NVMe drive through and it works fine, but any other device crashes the whole server. Below are my specs and diagnostics file; any help would be greatly appreciated.

Ryzen 5 3600
Asus TUF Gaming B550M-Plus
2x8GB Team T-Force Dark Z
Silicon Power A60 256GB (cache)
Sabrent Rocket 256 GB (for VM)
EVGA GeForce GTX 1660Ti SC Ultra
Cooler Master MasterWatt 550W 80+ Bronze
10Gtek Broadcom BCM5751 1Gb Ethernet NIC
3x Shucked WD EasyStore 8TB

tommy-diagnostics-20200703-1730.zip
  9. Dang, thanks for looking into that. Do you know if I'm able to send a request to Limetech for it to be fixed in the next release? Also, in the meantime, do you know of a cheap 1Gb NIC that would work with Unraid? Thanks again!
  10. Alright, here are the diagnostics for v6.9-beta22, thank you for helping! tower-diagnostics-20190102-1629.zip
  11. Hello friends! Coming to you for help after a couple of days of troubleshooting. I am building a new NAS/HTPC using the Asus TUF Gaming B550M-Plus motherboard with the most recent BIOS. Everything is detected correctly; however, it is not able to connect to the network even though it is connected directly to the router via Ethernet. Because of this, I can't get a registration key to set up my drives. In my troubleshooting I have tried the stable and beta versions of Unraid (6.8.3 & 6.9.0), I have tried both DHCP and a static IP, and I have tried putting the Unraid flash drive in different USB ports. None of this made any difference. I then tried booting into a Ubuntu disc and found I was not able to connect to the network there either. From what I can tell, I don't think either Ubuntu or Unraid has the driver for the Realtek RTL8125B controller that the motherboard uses (a quick check for whether any driver is bound to the NIC is sketched after this post list). I am able to find a Linux driver on Realtek's website here. Do you know if this is the problem? If so, is there a way to add the driver to Unraid? If this isn't the problem, do you have any ideas on what it could be? Thank you! tommy-diagnostics-20190101-0127.zip
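A minimal sketch of the placement fix from post 1, assuming the stock Unraid boot stanza (the flash drive is mounted at /boot, and boot flags belong on the existing append line rather than on a new line); the append line shown in the comment is generic, not copied from the poster's config:

    # /boot/syslinux/syslinux.cfg -- the flag goes on the existing append line,
    # e.g.:  append video=efifb:off initrd=/bzroot
    # after rebooting, confirm the flag actually reached the kernel:
    grep -o 'video=efifb:off' /proc/cmdline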
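For the /proc/iomem check in post 2, a quick sketch; the PCI address 08:00.0 comes from the posts, and "BOOTFB" is just another name the EFI framebuffer range can appear under on some kernels:

    # any memory range still owned by the EFI framebuffer shows up here
    grep -iE 'efifb|bootfb' /proc/iomem
    # compare against the GPU's memory ranges
    lspci -v -s 08:00.0 | grep -i 'memory at'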
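Regarding the 100%-full /var/log warning in post 3: Unraid keeps /var/log on a small RAM-backed filesystem, so a repeating QEMU warning can fill it quickly. A hedged sketch for finding and emptying the offending file; the libvirt log path below is an assumption based on the VM's name, so check the du output first:

    # how full is the log filesystem, and what is using the space?
    df -h /var/log
    du -ah /var/log | sort -rh | head -n 10
    # empty the largest log in place without deleting it
    # (path is an assumption -- substitute whatever du reports)
    truncate -s 0 "/var/log/libvirt/qemu/Windows 10.log"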
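For the vBIOS edit mentioned in post 6, a rough command-line sketch of the same header-trimming idea SpaceInvader One demonstrates with a hex editor; the file name GPUZ_dump.rom and the byte offsets are purely illustrative, and the real cut point is wherever the 55 AA ROM signature sits just ahead of the "VIDEO" marker in your particular GPU-Z dump:

    # byte offset of the 'VIDEO' marker inside the dump
    grep -abo 'VIDEO' GPUZ_dump.rom | head -n 1
    # inspect the bytes just before that offset to find the 55 AA signature
    # (the 1408/256 values are examples only)
    xxd -s 1408 -l 256 GPUZ_dump.rom
    # strip everything before the signature (1536 is an example value only)
    dd if=GPUZ_dump.rom of=TU116_edited.rom bs=1 skip=1536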
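For the crashes in post 8 when extra PCIe devices are added, one general first check (not something confirmed in the thread, just standard VFIO practice) is whether the added device shares an IOMMU group with hardware the host is still using; a sketch:

    #!/bin/bash
    # print every IOMMU group and the PCI devices it contains
    for group in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            echo "    $(lspci -nns "${dev##*/}")"
        done
    done

If the GPU, USB controller, or audio device lands in the same group as something the host needs (a SATA or NVMe controller, for example), passing it through can take the whole host down with it.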
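For the RTL8125B question in post 11, a quick sketch for confirming whether any driver is bound to the NIC at all; on kernels too old to know the 8125, lspci typically shows no "Kernel driver in use" line for it:

    # is the NIC visible, and is a driver bound to it?
    lspci -nnk | grep -A 3 -i ethernet
    # if a driver is bound, an interface should appear here
    ip link show
    # any kernel messages about the Realtek chip
    dmesg | grep -i -e r8169 -e 8125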