ghost82

Community Answers

  1. If an older backup works, just replace the img file with the newer one and it should boot.
  2. Hello @The Transplant I never used the backup plugin, so I'm sorry but I don't know how this plugin works. Anyway...

     An img file is the disk image. The xml file describes the emulated hardware and layout of the virtual pc. The edk2 ovmf code fd file is the bios of your virtual pc. The edk2 ovmf vars fd file is the nvram of your virtual pc. You need these 4 things to boot a qemu vm.

     You should be able to boot the img by recreating the other 3 files: you can copy the ovmf code fd file from another vm (it's the bios, and the bios is the same for every vm), and you can also copy the ovmf vars fd file, better from the same vm (the vars file can contain nvram variables, such as boot order).

     As for the xml, create a new vm so that a new xml is generated, then change the relevant parts of this new xml to point to your img disk file and fd files; if the hardware layout is different from the old xml (for example, you add a new usb virtual controller which was not there in the original vm), windows should be smart enough not to hang. Obviously you need to know whether your original vm was a uefi or bios vm, q35 or i440fx, so you can create the new vm accordingly, using ovmf or seabios and the q35 or i440fx virtual chipset.

     Correct, it doesn't matter: unraid prefixes the filename with the uuid of the vm. Either edit the path to point to where the img is, or move the img to that path. This file should contain some nvram variables, but it shouldn't matter; just make sure the xml points to this file. If it doesn't boot you may need to enter the ovmf bios screen (by pressing the esc key repeatedly at vm boot) and change the boot order.
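     A minimal sketch of the xml pieces involved (the machine type, the uuid prefix and all paths here are placeholders, not taken from your setup; adapt them to your vm):

     <os>
       <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/YOUR-VM-UUID_VARS-pure-efi.fd</nvram>
     </os>
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw'/>
       <source file='/mnt/user/domains/yourvm/vdisk1.img'/>
       <target dev='hdc' bus='sata'/>
       <boot order='1'/>
     </disk>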
  3. Don't take it the wrong way, but it must be something else...both unraid and proxmox use qemu to do virtualization, and the host os cannot make such a difference.
  4. These are warnings: those features are simply ignored because your native cpu doesn't support them. You don't need that file, that is for emulated nvram, and you will not emulate it. This is the issue: if you use the configurator with an older or newer version of opencore it will mess up your configuration file. What is not working is opencore, either because it's configured wrongly or because the image you are using is too old for the os you're trying to boot. I suggest not copying the efi folder to the macos disk; let it stay alone on its own disk and configure the vm to boot from it, so that the original efi on the macos disk is simply ignored. Otherwise, if you mess things up with the efi, it will be more difficult to mount that partition.
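     A sketch of the two disk blocks (file paths are placeholders): the opencore image gets boot order 1, while the macos disk gets no boot entry, so ovmf always boots the opencore efi and the efi partition on the macos disk stays untouched:

     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source file='/mnt/user/domains/macos/opencore.img'/>
       <target dev='hdc' bus='sata'/>
       <boot order='1'/>
     </disk>
     <disk type='file' device='disk'>
       <driver name='qemu' type='raw' cache='writeback'/>
       <source file='/mnt/user/domains/macos/macos_disk.img'/>
       <target dev='hdd' bus='sata'/>
     </disk>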
  5. Yes I agree 100%, but stupid software houses don't understand this; instead of writing some serious anti-cheat software they only check whether you're running the game in a vm.
  6. /lib64/ld-linux-x86-64.so.2 /boot/config/unraider 😒👎
  7. the "do not use a rom" should be changed to "use a rom only when needed", when the gpu you are passing through is the primary gpu, flagged as boot vga by the host.
  8. As you can see the network type is "PCNet32"; most probably the os has drivers for only a few network adapters, maybe only that one. You can try to manually edit your xml and put 'pcnet' as the model type for the network, instead of 'e1000' or whatever it is: <model type='pcnet'/>
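     For reference, a sketch of the whole interface block (the mac address, bridge and pci address here are placeholders from a generic template, yours will differ; only the model line needs to change):

     <interface type='bridge'>
       <mac address='52:54:00:00:00:01'/>
       <source bridge='br0'/>
       <model type='pcnet'/>
       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
     </interface>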
  9. From your xml you are using legacy bios boot (seabios). The issue could be related to legacy bios vs the gpu you are passing through (the 3060 is a recent gpu with uefi support), which may not be able to display the legacy boot screen. You should convert your vm to uefi bios (ovmf): this means converting the installed vm to uefi and also modifying the xml template to boot with the uefi-ovmf bios. To confirm the culprit you can temporarily switch from gpu passthrough to vnc, and I'm quite sure that with vnc you will see the verbose boot.
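     In the xml the change is in the os block. As a sketch, from something like this (seabios):

     <os>
       <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
     </os>

     to the ovmf version (the machine type, fd filenames and uuid prefix are placeholders depending on your unraid version):

     <os>
       <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
       <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
       <nvram>/etc/libvirt/qemu/nvram/YOUR-VM-UUID_VARS-pure-efi.fd</nvram>
     </os>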
  10. No idea; maybe you don't have any virtio driver installed in your linux vm, or maybe they are part of the kvm packages and you need to install those to get the virtio drivers. Frankly speaking, for everyday use you won't notice any difference between sata and virtio. The qemu agent has nothing to do with virtio.
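      If you do want to try virtio anyway, a sketch of the disk block to change (the image path is a placeholder; note that the target dev also changes from hdX to vdX):

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='writeback'/>
        <source file='/mnt/user/domains/linuxvm/vdisk1.img'/>
        <target dev='vda' bus='virtio'/>
      </disk>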
  11. Sorry, I missed a 5...try with this: e1000-82545em. I think your issue is within the vm, because as you can see from the output of lspci you have the ethernet controller at 02:01.0. Did you configure the network controller inside the vm? Search google for how to configure the network in your os distribution.
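      In the xml that's just the model line inside the interface block (the rest of the block stays as it is):

      <model type='e1000-82545em'/>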
  12. I agree, but take into account that secure boot must be SECURE, so it should be outside of unraid's scope to include a VARS file with injected certificates and secure boot enabled, because unraid would have the private keys of those certificates: so, not secure!
  13. Try this:

      1- add video=efifb:off to the syslinux configuration. You find it under: Main - Boot Device - Flash - Syslinux Configuration. Add video=efifb:off to the Unraid OS label, in the 'append' line, so it results like this:

      append video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=1-8,13-20 iommu=pt avic=1

      This may not be necessary since the boot vga is not the one you are trying to pass, but let's decrease the possibilities of errors. If it works you could try to remove video=efifb:off and see if it still works.

      2- modify your vm in advanced view (xml mode) and replace the whole xml with this:

      <domain type='kvm'>
        <name>Windows 11</name>
        <uuid>2b7d02e7-ce93-6934-5afb-641e9b93ab6e</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows11.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>33554432</memory>
        <currentMemory unit='KiB'>33554432</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>16</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='1'/>
          <vcpupin vcpu='1' cpuset='13'/>
          <vcpupin vcpu='2' cpuset='2'/>
          <vcpupin vcpu='3' cpuset='14'/>
          <vcpupin vcpu='4' cpuset='3'/>
          <vcpupin vcpu='5' cpuset='15'/>
          <vcpupin vcpu='6' cpuset='4'/>
          <vcpupin vcpu='7' cpuset='16'/>
          <vcpupin vcpu='8' cpuset='5'/>
          <vcpupin vcpu='9' cpuset='17'/>
          <vcpupin vcpu='10' cpuset='6'/>
          <vcpupin vcpu='11' cpuset='18'/>
          <vcpupin vcpu='12' cpuset='7'/>
          <vcpupin vcpu='13' cpuset='19'/>
          <vcpupin vcpu='14' cpuset='8'/>
          <vcpupin vcpu='15' cpuset='20'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/2b7d02e7-ce93-6934-5afb-641e9b93ab6e_VARS-pure-efi-tpm.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv mode='custom'>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <topology sockets='1' dies='1' cores='8' threads='2'/>
          <cache mode='passthrough'/>
          <feature policy='require' name='topoext'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/domains/iso/Win11_22H2_English_x64v2.iso'/>
            <target dev='hda' bus='sata'/>
            <readonly/>
            <boot order='2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/domains/iso/virtio-win-0.1.240.iso'/>
            <target dev='hdb' bus='sata'/>
            <readonly/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/cache/domains/loaders/spaces_win_clover.img'/>
            <target dev='hdc' bus='sata'/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0xa'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0xb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0xc'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
          </controller>
          <controller type='pci' index='6' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='6' port='0xd'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
          </controller>
          <controller type='pci' index='7' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='7' port='0xe'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
          </controller>
          <controller type='pci' index='8' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='8' port='0xf'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='usb' index='0' model='qemu-xhci' ports='15'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:1f:a0:a3'/>
            <source bridge='br1'/>
            <model type='virtio-net'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <tpm model='tpm-tis'>
            <backend type='emulator' version='2.0' persistent_state='yes'/>
          </tpm>
          <audio id='1' type='none'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x33' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x33' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x2f' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x32' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x33' slot='0x00' function='0x2'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x33' slot='0x00' function='0x3'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>

      3- reboot the server and start Unraid OS (NO GUI)

      4- try to run the vm

      I don't see any other configuration error; basically, it could hang at reboot because the drivers could expect a gpu multifunction device, like on bare metal, and the gpu in your vm was not configured as a multifunction device. I hope it's not dependent on the clover bootloader you set in the vm (it should not be...), which I think is not needed anymore with recent windows. It could also be related to the passed-through nvme controller; sometimes passing through both gpu and nvme can create issues.
  14. I need recent diagnostics, the one you attached doesn't have any vm defined.
  15. None of the vms you have has the e1000 type emulated network card. Open the vm in advanced mode (xml view), find the network block, for example:

      <interface type='bridge'>
        <mac address='52:54:00:ce:dc:cb'/>
        <source bridge='br0'/>
        <model type='virtio-net'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      </interface>

      and change the model type line by hand, like this:

      <model type='e1000-82545em'/>

      Save and boot the vm. If it still doesn't work, reattach diagnostics and let us know which of your 10 vms you are working on. Paste also the output of this terminal command, run from inside the virtual machine:

      lspci