methanoid

Everything posted by methanoid

  1. I removed the RX5700 and put a GT740 in... zero issues with any reboots etc. I was doing it to try to confirm that neither the BIOS nor my config was at fault. I am now pretty sure it's either the FLR patch that is at fault, OR my Powercolor RX5700 is not the same as other RX5700s. I have tried a non-Powercolor BIOS ROM in the config and there's no difference... I guess my unRAID VMs are gonna have to be done using 2 x GT740s.
  2. @snolly Does your RX5700 survive VM reboots? Presumably you are using the ICH777 docker to rebuild the kernel for the FLR reset etc.?
  3. @Skaterscare Go look into the ICH777 Kernel Helper docker... it will build the kernel for you... and LMK if you can pass (and reboot) that RX5700 afterwards - I can't!
  4. @Formal are you on an ICH777-docker-built kernel with the FLR reset for AMD cards? Is it still working? Looking for help!!!!
  5. @Gixy are you using the ICH777 kernel builder with the RX5700 as primary, and CAN reset it? I'm totally stumped and desperate now.
  6. @Skitals so far you are one of the few I've found who have got it working. Would you be able to help the rest of us out somehow? I've tried MANY permutations over the last week:
      - Mainboard: X570 Taichi - what have you got, and which BIOS version? (might be AGESA related)
      - BIOS: not UEFI (but tried that too) - AER, IOMMU, ACS settings
      - Stubbing: tried NOT stubbing the GPUs (as per SIO's latest video) and stubbing them; stubbed 2 USB controllers and the related AMD device, plus the NVMe for passthrough
      - GPUs: primary RX5700 with ROM (tried approx. 20 different ROMs now), secondary Nvidia GT740 (not used, but will be once the RX5700 works)
      - VM settings: Win10, Q35-5.1, OVMF, Hyper-V on (tried off), RX5700 GPU & sound & ROM entered
      - Passed devices:
          AMD Starship/Matisse Reserved SPP | Non-Essential Instrumentation (0a:00.0)
          AMD Matisse USB 3.0 Host Controller | USB controller (0a:00.1)
          AMD Matisse USB 3.0 Host Controller | USB controller (0a:00.3)
          Phison Electronics E12 NVMe Controller | Non-Volatile memory controller (0d:00.0)
      Tried with the sound portion and without, with one or two USB controllers, and with/without the AMD device. It would be nice to know what you stubbed, what you have in your config (ACS patch?), your BIOS settings, and your VM XML. After some googling I even switched PCIe from Gen4 to Gen2, as someone somewhere claimed that fixed it for them. And I raised an issue on Gnif's GitHub too. Stumped. NEED help! PLEASE!
  7. @happythatsme It doesn't sound to me like it's working if you still have the FLR reset bug... Pretty much ANY config I try works for the first boot, but they all fail after a restart of the VM. @grizzle what about yours? I wanna see if anyone has an RX5700 working.
  8. @thisisnotdave or @happythatsme did either of you solve it? I've tried asking on Gnif's GitHub repo as I have the same issue... it works on a fresh boot but not if the VM is rebooted. It has to be the FLR again.
  9. Here's my IOMMU when I DON'T use any ACS patches - at the bottom I see both GPUs (Nvidia in one group, AMD across two), but BOTH audio sections don't show FLR enabled, and at the top are the pair of USB controllers we normally have to pass (allegedly the other one doesn't like being passed), one of which doesn't show FLR enabled either. I wonder if the BIOS is borked? I'm clutching at straws.
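      For anyone else chasing this: whether a function supports FLR is visible in `lspci -vv` output, which prints `FLReset+` or `FLReset-` in the device-capabilities section. A minimal sketch of checking that programmatically - the helper names are mine, and the sample line is illustrative rather than copied from my board:

      ```python
      import re
      import subprocess

      def flr_supported(lspci_vv_output: str) -> bool:
          """True if an `lspci -vv` dump advertises Function Level Reset.

          lspci prints `FLReset+` when the function supports FLR and
          `FLReset-` when it does not.
          """
          return re.search(r"FLReset\+", lspci_vv_output) is not None

      def check_device(bdf: str) -> bool:
          """Query one device (e.g. '0a:00.1') on a live system.

          Run as root, otherwise lspci may hide the capability details.
          """
          out = subprocess.run(["lspci", "-s", bdf, "-vv"],
                               capture_output=True, text=True).stdout
          return flr_supported(out)

      # Illustrative DevCap fragment for a function WITHOUT FLR support:
      sample = "DevCap: MaxPayload 256 bytes ExtTag+ RBE+ FLReset- SlotPowerLimit 0W"
      print(flr_supported(sample))  # False
      ```

      If every function of the card you pass shows `FLReset-`, QEMU has no standard reset to fall back on, which matches the "no available reset mechanism" errors later in this thread.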
  10. Hi, thanks for any help here.
      - I have efifb:off but not vesafb - is that needed?
      - No, no unsafe interrupts.
      - I have tried with and without acs_override... no difference.
      - Not tried Linux (will try later), but I also tried i440FX instead of Q35 (with the existing NVMe, so it wasn't likely to work) - worse.
      Checked my logs, and this is interesting. First VM boot:

          -mon chardev=charmonitor,id=monitor,mode=control \
          -rtc base=localtime \
          -no-hpet \
          -no-shutdown \
          -boot strict=on \
          -device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 \
          -device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 \
          -device pcie-root-port,port=0xa,chassis=3,id=pci.3,bus=pcie.0,addr=0x1.0x2 \
          -device pcie-root-port,port=0xb,chassis=4,id=pci.4,bus=pcie.0,addr=0x1.0x3 \
          -device pcie-root-port,port=0xc,chassis=5,id=pci.5,bus=pcie.0,addr=0x1.0x4 \
          -device pcie-root-port,port=0xd,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x5 \
          -device pcie-root-port,port=0xe,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x6 \
          -device pcie-root-port,port=0xf,chassis=8,id=pci.8,bus=pcie.0,addr=0x1.0x7 \
          -device pcie-root-port,port=0x10,chassis=9,id=pci.9,bus=pcie.0,addr=0x2 \
          -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
          -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
          -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
          -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
          -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
          -netdev tap,fd=33,id=hostnet0 \
          -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:ed:cd:78,bus=pci.1,addr=0x0 \
          -chardev pty,id=charserial0 \
          -device isa-serial,chardev=charserial0,id=serial0 \
          -chardev socket,id=charchannel0,fd=34,server,nowait \
          -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
          -device vfio-pci,host=0000:10:00.0,id=hostdev0,bus=pci.3,addr=0x0,romfile=/boot/GPU_Roms/Powercolor.RX5700.rom \
          -device vfio-pci,host=0000:10:00.1,id=hostdev1,bus=pci.4,addr=0x0 \
          -device vfio-pci,host=0000:0a:00.0,id=hostdev2,bus=pci.5,addr=0x0 \
          -device vfio-pci,host=0000:0a:00.1,id=hostdev3,bus=pci.6,addr=0x0 \
          -device vfio-pci,host=0000:0a:00.3,id=hostdev4,bus=pci.7,addr=0x0 \
          -device vfio-pci,host=0000:0d:00.0,id=hostdev5,bus=pci.8,addr=0x0 \
          -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
          -msg timestamp=on

          2021-04-15 09:07:11.798+0000: Domain id=2 is tainted: high-privileges
          2021-04-15 09:07:11.798+0000: Domain id=2 is tainted: host-cpu
          char device redirected to /dev/pts/0 (label charserial0)
          2021-04-15T09:07:15.788543Z qemu-system-x86_64: vfio: Cannot reset device 0000:10:00.1, no available reset mechanism.
          2021-04-15T09:07:15.932195Z qemu-system-x86_64: vfio: Cannot reset device 0000:10:00.1, no available reset mechanism.
          2021-04-15T09:08:52.266352Z qemu-system-x86_64: terminating on signal 15 from pid 11862 (/usr/sbin/libvirtd)
          2021-04-15 09:08:58.100+0000: shutting down, reason=shutdown

      Second boot (identical QEMU command line, so only the log is shown):

          2021-04-15 09:09:54.742+0000: Domain id=3 is tainted: high-privileges
          2021-04-15 09:09:54.742+0000: Domain id=3 is tainted: host-cpu
          char device redirected to /dev/pts/0 (label charserial0)
          2021-04-15T09:10:04.876682Z qemu-system-x86_64: vfio: Cannot reset device 0000:10:00.1, no available reset mechanism.
          2021-04-15T09:10:05.020320Z qemu-system-x86_64: vfio: Cannot reset device 0000:10:00.1, no available reset mechanism.

      It hangs on the 2nd boot (both boots are shown above). Looks to me like the reset ISN'T working, since that is exactly where it hangs!
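      Since QEMU reports no reset mechanism for 10:00.1, one generic thing people try between VM starts is a sysfs remove/rescan of the stuck function so the kernel re-probes it. To be clear, this is my own sketch of a general PCI workaround, not a fix for the Navi reset bug itself (the sysfs root is parameterised only so the helper can be exercised safely off-host):

      ```python
      from pathlib import Path

      def remove_and_rescan(bdf: str, sysfs_root: str = "/sys") -> None:
          """Hot-remove one PCI function, then rescan the bus.

          A commonly tried workaround when QEMU reports 'no available
          reset mechanism': removing the function and rescanning makes
          the kernel re-probe it. Run as root on the host while no VM
          holds the device; it will not cure the Navi reset bug, but it
          can sometimes clear a stuck function.
          """
          dev = Path(sysfs_root) / "bus/pci/devices" / bdf
          (dev / "remove").write_text("1")                       # detach the function
          (Path(sysfs_root) / "bus/pci/rescan").write_text("1")  # re-enumerate the bus

      # On a real host (as root), e.g.:
      # remove_and_rescan("0000:10:00.1")   # the RX5700's audio function
      ```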
  11. I am running 6.9.2 and built the kernel with Gnif's FLR patch. Backed up the files, copied the new files over the old ones, and rebooted. My Win10 VM boots fine on a fresh boot, but any restart causes me to lose the VM and the display. I have a Powercolor Red Dragon RX5700 as the primary card, stubbed, with efifb=off and amd_iommu=on in the syslinux config, and I pass it through with the GPU ROM included. I have another GPU for a 2nd VM but haven't used it yet. Any ideas where I am going wrong, please?
  12. Not having used unRAID for 18 months... do I still need to edit the syslinux config to add "amd_iommu=on", or not? I have no idea, but I thought unRAID did most of the syslinux edits now?! EDIT: 10 days and no replies - nevermind... found out by trial and error.
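      For the record, and assuming the stock unRAID layout (boot stanza in /boot/syslinux/syslinux.cfg), the flags go on the `append` line, roughly like this - any other options already on your line should stay:

      ```text
      label unRAID OS
        menu default
        kernel /bzimage
        append amd_iommu=on video=efifb:off initrd=/bzroot
      ```

      From what I can tell, recent kernels enable the AMD IOMMU automatically when it's switched on in the BIOS, so `amd_iommu=on` is mostly belt-and-braces; `video=efifb:off` is the spelling of the efifb=off setting discussed above.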
  13. Here are the controllers including the motherboard headers. [revised; BIOS 4.15 seems to have re-activated my dead port] @Endy Some questions PLEASE:
      #1 Can you confirm my theory that one of the USB2 headers is dead by design? I have one that doesn't work, and it seems I am not alone (Asrock forums). EDIT: BIOS 4.15 revived it... but one port remains dead even though the header cable is 100% okay.
      #2 The X570 USB problem isn't fixed by the beta 4.15 BIOS, but Asrock say 4.20 should contain AMD AGESA 1.2.0.2, which contains the fix. I can tell you I have no XHCI menu options at all in BIOS 4.10!
      #3 Can you pass controllers 1 & 2 separately to two different VMs?
  14. Thanks for this (and the PMs). Is the advice on which controllers to pass still valid? My new X570 Taichi has the same controller layout as per AGESA (most X570 boards are the same, I think), but my controllers have:
      #1 - 1xA, 1xC
      #2 - 1xC, 4xA, 2x USB2-A
      #3 - 4xA
      #1 and #2 are in the same group, but can they be passed to different VMs once stubbed/bound? #3 is in the group you say we cannot pass. Ideally I would want the unRAID stick on #1 since it has the fewest ports, but if I have to do as per your post, I'd end up using the 4xA controller for the unRAID stick and having #1 for one VM and #2 for another VM... if all that makes any sense.
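      On the "can #1 and #2 go to different VMs" question: with plain vfio, every device in one IOMMU group has to be attached to the same VM, so splitting two controllers that share a group needs the ACS override patch (or a BIOS/AGESA update that separates the groups). Once a controller is bound to vfio-pci and sits in its own group, each VM's libvirt XML just needs a hostdev entry for that controller's PCI address - a sketch using the 0a:00.x addresses mentioned earlier in this thread (yours will differ):

      ```xml
      <!-- VM 1: USB controller at 0a:00.1 -->
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
        </source>
      </hostdev>

      <!-- VM 2: USB controller at 0a:00.3 -->
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <source>
          <address domain='0x0000' bus='0x0a' slot='0x00' function='0x3'/>
        </source>
      </hostdev>
      ```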
  15. Check the manual for your board. Pretty sure the 2400G has fewer PCIe lanes than some CPUs, because some are allocated to that iGPU.
  16. Hi, my board arrived on BIOS 3.60 with no options at all for ANY of those shown in the 1st post... what BIOS are you running, please?
  17. @paweljan have you tried both of the CPU-connected slots (the x8 slots, NOT the x4 chipset slot)? @Simonwhitlock so it ONLY worked in a PCIe slot connected to the CPU, and definitely NOT in a chipset-connected slot? I don't know what mobo you have, to check the slots. I'm interested as I am considering this card, but would want it in a chipset-connected slot.
  18. I don't claim to understand HOW, but adding "-soundhw sb16" to the QEMU command line emulates a Sound Blaster 16 at IRQ 5, DMA 1, high DMA 5, and port 0x220, and passes the emulated sound to the "native host audio" - I THINK through ALSA or PulseAudio or something like that!! 🙂
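      For anyone wanting to try this in a libvirt-managed VM: raw QEMU flags go in a qemu:commandline block, which needs the QEMU XML namespace declared on the domain element. A sketch (note that -soundhw was deprecated and then removed in QEMU 7.1; on newer builds you would use a `-device sb16` plus an `-audiodev` backend instead):

      ```xml
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
        <!-- ... rest of the VM definition unchanged ... -->
        <qemu:commandline>
          <qemu:arg value='-soundhw'/>
          <qemu:arg value='sb16'/>
        </qemu:commandline>
      </domain>
      ```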
  19. I hear what you are saying about qemu-ppc and qemu-arm - hopefully @limetech will help us out soon, as it would really make unRAID the go-to VM platform as well as a NAS box++. You didn't mention any issues around audio (apart from the lack of hours in the day, I assume) - I saw we've had DVB versions of unRAID before, so maybe audio support might not be as horrible to add as qemu-ppc/arm? Again, it could be enabled by good old @limetech if they are feeling the love for us 🙂 Don't get me wrong, I am VERY happy we have an easier FLR for Navi cards in unRAID - the rest is icing on the cake, and cherries on top of that icing 😂
  20. Thank you @ich777 for including the GPU reset for AMD cards in your "something for everyone" docker. When I return to an unRAID machine I will definitely be using it, unless Limetech do the same and fix AMD card reset for us. Could I ask if you could apply your wizardry to enabling audio support in unRAID, please? 😁 I'd love to try using the -sb16 and -ac97 switches in VMs, but they only emulate a Sound Blaster 16 or whatever and pass the audio back to the host Linux system, and unRAID has no audio support (I think!). Such a change would make VMs for older OSes like Win9x/2000 and DOS possible. Another "limitation" of the KVM/QEMU we have (again, not sure, as I'm not running unRAID right now) is that it does not include qemu-ppc or qemu-arm, which would enable VMs for PPC Macs, PPC Amigas and the ARM Raspberry Pi 2... If your magic wand is able to fix that too, I'd be a very happy user 😁
  21. The zombie thread won't die... I hope I am not confusing matters (and I am not running unRAID right now), but do we not need hardware drivers in unRAID for sound cards - like a basic ALSA/PulseAudio driver for the Realtek ALC codecs (which seem to be on 95% of mobos these days) - so that we can define ac97 or sb16 in the VM XML and have that emulated device output sound via the onboard mobo audio? Or is that already possible?
  22. The QEMU package normally contains flavours, i.e. qemu-arm, qemu-ppc etc. They would enable the virtualisation of additional machines like PPC Macs, the AmigaOne PPC and so on. I've seen videos of both running well on x86 machines with qemu-ppc.
  23. Thanks... nice write-up. About to buy the same board, but it would be useful to know which headers etc. go to which USB controllers. Normally you get two in the same group and one separate. The two in the same group can be stubbed and passed separately, from memory.
  24. So sleeping unRAID does NOT sleep the VMs first? Seems to me that would be the cleanest way to do it...