ghost82


Posts posted by ghost82

  1. 17 hours ago, The Transplant said:

    I did manage to restore the older file and get it running.  But I had made some changes and want to get the newer image restored so will work through your suggestions above

    If an older backup works, just replace the img file with the newer one and it should boot.

  2. Hello @The Transplant

    I've never used the backup plugin, so I'm sorry but I don't know how it works.

    Anyway...

    18 hours ago, The Transplant said:

    But the newest img file does not have an xml or an fd file associated with it

     

    An img file is the disk file.

    The xml file contains the emulated hardware and layout of the virtual pc.

    The edk2 ovmf code fd file is the bios of your virtual pc.

    The edk2 ovmf vars fd file is the nvram of your virtual pc.

    You need these 4 things to boot a qemu vm.
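
    For reference, a minimal sketch of how the xml (itself one of the 4 pieces) references the bios and nvram files; the paths follow the default unraid layout seen in other vms, and the uuid is a placeholder:

      <os>
        <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
        <!-- the bios -->
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <!-- the nvram -->
        <nvram>/etc/libvirt/qemu/nvram/YOUR-VM-UUID_VARS-pure-efi.fd</nvram>
      </os>

    The img disk file is referenced in a separate disk block of the same xml (more on that below).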

     

    18 hours ago, The Transplant said:

    But the newest img file does not have an xml or an fd file associated with it

     

    You should be able to boot the img by recreating the other 3 files: you can copy the ovmf code fd file from another vm (it's the bios, and the bios is the same for every vm), and you can also copy the ovmf vars fd file, preferably from the same vm (the vars file can contain nvram variables, such as boot order).
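
    For example, restoring the vars file can be a plain copy from the unraid terminal; a sketch only, since the backup folder path here is an assumption (the nvram folder and the uuid naming are the unraid defaults):

      # copy the backed-up vars file into the nvram folder, dropping the backup's date prefix
      cp /mnt/user/backups/20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd \
         /etc/libvirt/qemu/nvram/5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd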

    As for the xml, you can create a new vm just to generate a new xml, then change the relevant parts of that new xml to point to your img disk file and fd files. If the hardware layout differs from the old xml (for example, the new vm adds a virtual usb controller that was not there in the original vm), windows should be smart enough to not hang. Obviously you need to know whether your original vm was a uefi or bios vm, and q35 or i440fx, so you can create the new vm accordingly, using ovmf or seabios and the q35 or i440fx virtual chipset.

     

    19 hours ago, The Transplant said:

    My fd file is not named ovmf_vars.fd but 20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd.  I assume that doesn't matter?

    Correct, it doesn't matter: unraid prefixes the filename with the uuid of the vm (the leading date stamp comes from the backup).

     

    19 hours ago, The Transplant said:

    /mnt/user/domains/Outlook/vdisk1.img - the image is currently in a backups folder - so I will move it to the corresponding folder in domains and leave this as is.

    Either edit the path in the xml to point to where the img actually is, or move the img to that path.
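
    For reference, that path lives in the disk block of the xml; a sketch of how it should end up (driver and target details stay whatever your xml already has):

      <disk type='file' device='disk'>
        <driver name='qemu' type='raw'/>
        <source file='/mnt/user/domains/Outlook/vdisk1.img'/>
        <target dev='hdc' bus='sata'/>
      </disk>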

     

    19 hours ago, The Transplant said:

    Should I do anything with the 20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd file that is currently in the backups folder?

    This file should contain some nvram variables, but that shouldn't matter; just make sure the xml points to this file.

    If it doesn't boot you may need to enter the ovmf bios screen (by repeatedly pressing the Esc key at vm boot) and change the boot order.

  3. 10 hours ago, austin said:
    2024-01-13T22:46:40.073081Z qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]

    These are only warnings: those features are simply ignored because your host cpu doesn't support them.

     

    10 hours ago, austin said:

    The next "weird" thing was that the EFI partition didn't include the NvVars file

    You don't need that file: it's for emulated nvram, and you will not be emulating it.

     

    10 hours ago, austin said:

    OpenCore configurator it said this is an older efi, i just ignore this and continued following along.

    This is the issue: if you use the configurator with an older or newer version of opencore, it will mess up your configuration file.

    What is not working is opencore, either because it's not configured properly or because the image you are using is too old for the os you're trying to boot.

     

    10 hours ago, austin said:

    Again, ignore and copied it to the correct partition.

    I suggest not copying the efi folder to the macos disk: let it stay alone on its own disk and configure the vm to boot from it, so the original efi on the macos disk is simply ignored. Otherwise, if you mess something up in the efi, it will be more difficult to mount that partition.

  4. As you can see the network type is "PCNet32"; most probably the os has drivers for only a few network adapters, maybe only that one.

    You can try to manually edit your xml and set 'pcnet' as the network model type, instead of 'e1000' or whatever it currently is:

     

          <model type='pcnet'/>
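
    For context, the whole interface block would then look something like this (mac address and bridge name stay whatever your xml already has):

        <interface type='bridge'>
          <mac address='52:54:00:xx:xx:xx'/>
          <source bridge='br0'/>
          <!-- pcnet = the emulated AMD PCNet32 adapter the guest os has a driver for -->
          <model type='pcnet'/>
        </interface>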

     

  5. From your xml you are using legacy bios boot (seabios).

    The issue could be a conflict between legacy bios and the gpu you are passing through (the 3060 is a recent gpu with uefi support), which may not be able to display legacy boot output.

    You should convert your vm to uefi bios (ovmf): this means converting the installed os to uefi and also modifying the xml template to boot with the uefi-ovmf bios.
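
    If the guest is windows (an assumption on my part), the installed disk can be converted from mbr to gpt with microsoft's mbr2gpt tool from inside the running vm, before switching the xml to ovmf:

      rem run from an elevated command prompt inside the vm: validate first, then convert
      mbr2gpt /validate /allowFullOS
      mbr2gpt /convert /allowFullOS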

    --

    To confirm the culprit you can temporarily switch from gpu passthrough to vnc; I'm quite sure that with vnc you will see the verbose boot.

  6. Try this:

    1- add:

    video=efifb:off

    in the syslinux configuration. You find it under: Main - Boot Device - Flash - Syslinux Configuration

    Add video=efifb:off to the Unraid OS label, in the 'append' line, so that it ends up like this:

    append video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=1-8,13-20 iommu=pt avic=1

     

    This may not be necessary since the boot vga is not the one you are trying to pass through, but let's reduce the chances of errors. If it works, you can later try removing video=efifb:off and see if it still works.

     

    2- modify your vm in advanced view (xml mode), replacing the whole xml with this:

    <domain type='kvm'>
      <name>Windows 11</name>
      <uuid>2b7d02e7-ce93-6934-5afb-641e9b93ab6e</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows11.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>33554432</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>16</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='13'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='14'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='15'/>
        <vcpupin vcpu='6' cpuset='4'/>
        <vcpupin vcpu='7' cpuset='16'/>
        <vcpupin vcpu='8' cpuset='5'/>
        <vcpupin vcpu='9' cpuset='17'/>
        <vcpupin vcpu='10' cpuset='6'/>
        <vcpupin vcpu='11' cpuset='18'/>
        <vcpupin vcpu='12' cpuset='7'/>
        <vcpupin vcpu='13' cpuset='19'/>
        <vcpupin vcpu='14' cpuset='8'/>
        <vcpupin vcpu='15' cpuset='20'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/2b7d02e7-ce93-6934-5afb-641e9b93ab6e_VARS-pure-efi-tpm.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv mode='custom'>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='8' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/domains/iso/Win11_22H2_English_x64v2.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/domains/iso/virtio-win-0.1.240.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/cache/domains/loaders/spaces_win_clover.img'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xf'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='usb' index='0' model='qemu-xhci' ports='15'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:1f:a0:a3'/>
          <source bridge='br1'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <tpm model='tpm-tis'>
          <backend type='emulator' version='2.0' persistent_state='yes'/>
        </tpm>
        <audio id='1' type='none'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2f' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x32' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x2'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x3'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

    3- Reboot the server and start Unraid OS (NO GUI)

    4- try to run the vm

     

    I don't see any other configuration error. Basically, it could hang at reboot because the drivers may expect a multifunction gpu device, like on bare metal, and the gpu in your vm was not configured as a multifunction device.
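
    That's why, in the xml above, the gpu's video and audio functions sit on the same virtual bus and slot, with multifunction enabled on function 0:

        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
        ...
        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>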

    I hope it's not dependent on the clover bootloader you set in the vm (it should not be...), which I think is not needed anymore with recent windows.

     

    It could also be related to the passed-through nvme controller; sometimes passing through both a gpu and an nvme controller can create issues.

  7. None of your vms has the e1000 emulated network card.

    Open the vm in advanced mode (xml view), find the network block, for example:

        <interface type='bridge'>
          <mac address='52:54:00:ce:dc:cb'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

     

    and change the model type line by hand, like this:

      <model type='e1000-82545em'/>

     

    Save and boot the vm.

    If it still doesn't work, reattach diagnostics and let us know which of your 10 vms you are working on.

    Also paste the output of this terminal command, run from inside the virtual machine:

    lspci

     

  8. 6 hours ago, n0rx said:

    Do you still think the "BAR 1: assigned to efifb"

    No, I've seen that line in other logs; most probably efifb attaches early and is then detached because of the syslinux directive. If you look at the memory map I'm quite sure efifb will not be there.
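
    If you want to verify, a quick check from the unraid terminal: when efifb holds the framebuffer it shows up as a claimed region in the kernel memory map (on newer kernels the entry may be called BOOTFB):

      grep -iE 'efifb|bootfb' /proc/iomem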

     

    6 hours ago, n0rx said:

    Do you think re-installing Unraid might resolve the issue?

    I think not.

     

    In my opinion it is either related to the gpu itself (note that I'm not saying the gpu isn't working) or the motherboard (and that's why I suggested a bios update, because agesa was updated).

  9. 34 minutes ago, takkkkkkk said:

    I'm used to simple windows ways where mounting an image/iso would mean windows would simply create "C:" or "D:" so that I can access it, I never thought of being asked of "where do you want to mount to", it just doesn't really click to me that it wouldn't automatically get mounted as another unassigned devices. Once it gets mounted, does it act as folder within share? this concept seems really unusual to me...

    The tutorial refers to mounting the img disk on the host (unraid). In linux in general, you create an empty folder and mount the img on that folder (the mount point): the files on the disk will be shown inside the mount point, and you will have read/write permissions.

    If you mount the img in windows I think you will have only read permissions, but I may be wrong.
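
    For example, a sketch of mounting a raw vdisk on the unraid host (img path and partition number are placeholders, adjust them to your case):

      mkdir -p /mnt/vdisk
      # map the img to a loop device and scan its partition table (-P)
      losetup --find --show -P /mnt/user/domains/YourVM/vdisk1.img
      # suppose it printed /dev/loop3: mount the partition you need
      mount /dev/loop3p1 /mnt/vdisk
      # when done:
      umount /mnt/vdisk && losetup -d /dev/loop3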

  10. Configuration seems good to me.

    I imagine the vbios is dumped from your card and hex edited, and not a downloaded one, right?

    Did you try to enable remote desktop in the windows vm (booted without gpu passthrough) and see if it boots or if it's hanging?

    Once remote desktop shows that the os boots, just with a black screen, try to install the gpu drivers.
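
    If remote desktop isn't enabled yet, it can be turned on from an elevated powershell inside the vm; these are standard windows commands, nothing unraid specific:

      # allow incoming remote desktop connections
      Set-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Control\Terminal Server' -Name 'fDenyTSConnections' -Value 0
      # open the firewall for remote desktop
      Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'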

     

    I noticed that your mb bios is not the latest; I would try updating to v. 4401, released on the 31st of October.

  11. 33 minutes ago, xtrap225 said:

    i am going to edit and passthrough the TPM without it ever seeing a virtual one. any idea if i should tell it that it is TIS or CRB?

    and try to do that serial thing, which i hope i am not misremembering.

    When you passthrough the tpm device you need to choose a model.

    In this example:

    <devices>
      <tpm model='tpm-tis'>
        <backend type='passthrough'>
          <device path='/dev/tpm0'/>
        </backend>
      </tpm>
    </devices>

     

    you are passing through a tpm device located at /dev/tpm0, using the 'tis' interface type.

    If the device is crb just use 'tpm-crb' instead of 'tpm-tis' for the model.
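
    If you're not sure which interface the host tpm uses, the kernel log usually names the driver (tpm_tis or tpm_crb), which tells you the model to use; a quick check from the unraid terminal:

      dmesg | grep -i tpm
      ls /dev/tpm*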