ghost82

Everything posted by ghost82

  1. Just to add some additional info: if you download the qcow2 file (KVM/Proxmox) from the home assistant website, https://www.home-assistant.io/installation/alternative, you will get a compressed .xz file. Remember to extract the qcow2 image from this archive before using it as a disk in a kvm vm.
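     A minimal sketch of the extraction step (the exact file name depends on the HAOS release you downloaded, so treat it as a placeholder):

         # decompress, keeping the original archive (-k)
         xz -d -k haos_ova-11.5.qcow2.xz
         # the resulting haos_ova-11.5.qcow2 is the disk image to attach to the vm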
  2. A reset of the nvram (a clean ovmf vars file) should fix it, and/or you should be able to add a new boot entry from the ovmf bios. Start the vm and press esc until you get into the ovmf bios window; if you boot into the uefi shell instead, just type 'exit' without quotes and press enter to access the ovmf bios settings. Then follow this to add a new boot option or boot directly to the correct partition: https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries
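     If you want to try the nvram reset, a rough sketch with the vm shut down (the paths below are assumptions based on a typical unraid install, so check your own /etc/libvirt/qemu/nvram directory and adjust):

         # list the per-vm ovmf vars files and note the one belonging to the affected vm
         ls /etc/libvirt/qemu/nvram/
         # back it up to the flash drive, then overwrite it with the clean ovmf template
         cp /etc/libvirt/qemu/nvram/<vm-uuid>_VARS-pure-efi.fd /boot/vars-backup.fd
         cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd /etc/libvirt/qemu/nvram/<vm-uuid>_VARS-pure-efi.fd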
  3. ooooh I see now...you downloaded the aarch64 architecture, which is unsupported; you need to download the x86_64 version!
  4. Yes you are right, the nic settings should show in the vm settings gui. I read about a similar issue and that user was able to make it show up by resetting the unraid network configuration; he deleted the config and created a new one.
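     If you want to try the same reset, a rough sketch (the file names are the ones used on a standard unraid flash drive; back them up first, reboot, and reconfigure the network from the gui afterwards):

         cp /boot/config/network.cfg /boot/config/network.cfg.bak
         rm /boot/config/network.cfg
         # optionally also reset the interface naming rules
         cp /boot/config/network-rules.cfg /boot/config/network-rules.cfg.bak
         rm /boot/config/network-rules.cfg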
  5. Yes, the topology is now correct for i440fx: as you can see, you have the nic on bus 0, slot 3, function 0. I think the issue here was a wrong target address. For windows, unraid defaults to i440fx, but I would think twice before using it (I'm not saying it doesn't work): q35 is more compatible with passed-through pcie devices.
  6. Try to boot from the uefi shell: type FS0: and press enter. Then list directories and files with the dir command (the uefi shell is like a command prompt) and navigate (with the cd command) until you find an .efi file to boot. No idea what the file is in your case; as an example, for microsoft it's in FS0:\EFI\Microsoft\Boot\bootmgfw.efi. To manually boot, just type name.efi and the boot should proceed.
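     As a minimal example session, assuming a windows guest where the bootloader sits in the usual \EFI\Microsoft\Boot path (adjust FS0: and the path to whatever dir actually shows on your disk):

         Shell> FS0:
         FS0:\> dir
         FS0:\> cd EFI\Microsoft\Boot
         FS0:\EFI\Microsoft\Boot\> bootmgfw.efi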
  7. I remember I read that the issue can come from the audio part of the gpu. Try to remove this block in the xml:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
        </hostdev>

     In other words, remove the passthrough of the audio device with source address:

        <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>

     and see if it still crashes. Another test, for shutdown/boot (not for reboots): you can try to manually remove the gpu (video and audio), rescan for it, and then start the vm. Try this and see if it works without crashing:
     1. start the vm, it should output video
     2. shut down the vm
     3. open a terminal in unraid and give the following commands, pressing 'enter' after each line:

        echo "1" | tee -a /sys/bus/pci/devices/0000\:03\:00.0/remove
        echo "1" | tee -a /sys/bus/pci/devices/0000\:03\:00.1/remove
        sleep 1
        echo -n mem > /sys/power/state
        sleep 1
        echo "1" | tee -a /sys/bus/pci/rescan

     4. start the vm
  8. Unfortunately I think not in your case: unraid should have the patch included, but that patch, gnif's vendor-reset patch, does not cover the 6000 series.
  9. Being an old nvidia card, you could get error 43 in the vm and no video output. Nvidia in the past did not allow consumer gpus to be passed through and used in a vm: there are workarounds to add to the xml just to mask the fact that the gpu is running in a vm and make the drivers work. Newer drivers don't have this issue, since nvidia allows gpu passthrough now.
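     For reference, this is the usual way to hide the hypervisor from the nvidia driver in the libvirt xml (a sketch to merge into the existing <features> block of the vm; the vendor_id value is just an arbitrary 12-character string):

         <features>
           <hyperv>
             <vendor_id state='on' value='0123456789ab'/>
           </hyperv>
           <kvm>
             <hidden state='on'/>
           </kvm>
         </features>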
  10. All I can say is that, from what I saw, your configuration is correct, and unfortunately the errors recall the so-called 'amd reset bug', which the 6000 series shouldn't have... but some reports show that some of them could have it; it's not clear if it depends on the gpu, the bios or whatever, so trying an nvidia is the right way to go, just to see if it depends on the gpu itself.
  11. Did you try to mount and map the partition in the 2 vms as an smb share? I load my steam games from an smb drive and they work quite well.
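      As an example of mapping such a share inside a windows vm (server name, share name and drive letter are placeholders for your own setup):

          net use S: \\tower\games /persistent:yes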
  12. I doubt you had unsupported vt-d: vt-d is needed for vfio, so it must have been available. Anyway, just proceed the same way as described in the above posts.
  13. In your last diagnostics you defined an i440fx machine type vm. Unless you changed it to q35, you cannot have bus 1 in the target vm, so either you changed the machine type or the host automatically changed the target address to be on bus 0 (machine type i440fx has only bus 0!).
      i440fx: only bus 0; choose the first available slot; each address must be unique.
      q35: bus 0 for built-in devices, choosing the first available slot; bus > 0 for non-built-in devices (bus X means the device is usually attached to a pcie-root-port or pcie-to-pci-bridge defined in the xml with index X); each address must be unique.
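      As a short illustration of the q35 case (addresses are only examples): a target address on bus 1 means the device plugs into the pcie-root-port controller with index='1' defined in the same xml:

          <controller type='pci' index='1' model='pcie-root-port'/>
          ...
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </hostdev>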
  14. I heard of some possible issues with the 6000 series and some specific brands. When you say it crashes, does it crash the whole host, or are you just unable to run the vm? Check in the bios:
      1. resizable bar: disabled
      2. above 4g decoding: disabled
      If this doesn't work I have no more ideas.
  15. Yes, now select both 13 and 14, of course, otherwise you won't be able to pass through anything.
  16. Yes. By split I mean having the two amd pci bridges not in the same iommu group as the vga and audio. Right now the 2 amd pci bridges are in iommu group 1 together with the vga and audio. Check whether, with acs override, you can get the vga and audio into one or two different iommu groups, separate from the iommu group of the pci bridges. Then bind to vfio only the vga and audio, not the bridges.
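      To check how the groups look after enabling acs override, a small sketch you can run from the unraid terminal (standard sysfs layout, nothing unraid-specific assumed):

          # print every iommu group and the devices it contains
          for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
              echo -e "\t$(lspci -nns "${d##*/}")"
            done
          done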
  17. Try the following:
      1. Enable acs override --> both (meaning downstream,multifunction); restart the server and check if you are able to split the current iommu group 1; try to split the amd pci bridges from the vga and audio; do not bind to vfio the pci bridges at 01:00.0 and 02:00.0, only the video and audio of the gpu.
      2. Change the gpu part in your vm xml from this:

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>

         to this:

         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
         </hostdev>

         This way the video and audio end up in the guest as two functions of the same device, like they are on the host.
  18. There's nothing wrong in here; describe your issue and post the full diagnostics.
  19. I think the host is taking back the gpu and is not able to release it, resulting in a crash. I would set the igpu as primary in the bios and not the 6600; a setup with the 6600 as primary is possible, but then you need to pass the vbios to the vm. Bind the gpu audio and video to vfio. Check the syslog in unraid and see if efifb is attaching to the 6600 (check the address of the gpu); if it attaches, blacklist it in the unraid syslinux config.
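      A sketch of what that blacklisting looks like in the flash syslinux configuration, /boot/syslinux/syslinux.cfg on a standard unraid install (only video=efifb:off is the addition, the rest of the append line stays as it already is on your system):

          label Unraid OS
            menu default
            kernel /bzimage
            append video=efifb:off initrd=/bzroot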
  20. Hi, it will be difficult to pass that igpu through to a vm; if you succeed I think you will be the first, or one of the first: https://forums.unraid.net/topic/130514-raptor-lake-13600k-integrated-graphics-passthrough/ In general it's enough that the vm is not running; a reboot is not needed.
  21. You have an issue with irq conflicts:

      Feb 16 20:15:41 NetPlex kernel: genirq: Flags mismatch irq 33. 00000000 (vfio-intx(0000:07:00.0)) vs. 00000080 (vfio-intx(0000:06:00.1))

      where 07:00.0 is the mellanox card and 06:00.1 is the intel network controller (it's one part of it). You have different options, and it could be that you won't solve the issue because you cannot apply any of the following:
      1. Some bioses have irq assignment: if your bios has it, assign a different irq to the mellanox or intel card.
      2. Physically move the mellanox card to another pcie slot to "hopefully" change the irq.
      3. Disable the intel controller in the bios so the irq doesn't conflict (...but you use it, so that's not the way...).
      PS: did you try the script on 06:00.1? intx should not be available.. You removed the mellanox...you need to remove the intel if the mellanox is passed to the vm you want to run.
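      To see who is sitting on the conflicting irq before and after any change, a couple of standard checks (the device addresses are the ones from your log):

          # show the irq currently routed to each card
          lspci -vv -s 07:00.0 | grep -i irq
          lspci -vv -s 06:00.1 | grep -i irq
          # show how irq 33 is being shared at runtime
          grep ' 33:' /proc/interrupts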
  22. Another option is to save the image on a shared drive (like a samba drive), but yes, booting from a live linux distro is the right way to go.
  23. If I understood correctly, you have a pcie network card which unraid uses as eth0; all you have to do is go into the network configuration in the unraid gui and set it to bridge; a br0 network will automatically be created and you can use it in dockers and vms. In a vm, just choose br0 and a virtual network card when you set up the vm (if you choose virtio, remember to install the drivers in the vm, or choose vmxnet3 or e1000).
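      For reference, a minimal sketch of the kind of interface definition this produces in the vm xml when br0 and virtio are selected (the mac address is a placeholder):

          <interface type='bridge'>
            <mac address='52:54:00:12:34:56'/>
            <source bridge='br0'/>
            <model type='virtio'/>
          </interface>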