ghost82

Members
  • Posts: 2721
  • Days Won: 19
Community Answers

  1. ghost82's post in vms boot blackscreen until login was marked as the answer   
From your xml you are using legacy bios boot (seabios).
The issue could be related to legacy bios vs the gpu you are passing through: the 3060 is a recent gpu with uefi support, and it may not be able to display the legacy boot phase.
You should convert your vm to uefi bios (ovmf): this means converting the installed os to uefi and also modifying the xml template to boot with the uefi-ovmf bios.
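For reference, an ovmf vm has an <os> block like this in the xml (just a sketch: paths follow the stock unraid ovmf templates, while the machine type and the per-vm nvram file name below are placeholders):

<os>
  <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/YOUR-VM-UUID_VARS-pure-efi.fd</nvram>
</os>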
    --
To confirm the culprit you can temporarily switch from gpu passthrough to vnc, and I'm quite sure that with vnc you will see the verbose boot.
  2. ghost82's post in Windows 10 VM enable AES-NI was marked as the answer   
Usually, windows 10 vms are configured with cpu host-passthrough, so if the real cpu supports aes, the aes flag will be passed to the guest too. Can you check with cpu-z if aes is listed in the cpu flags? And what is the real cpu?
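For reference, host passthrough looks like this in the xml (the topology values are just an example and should match your pinning):

<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='8' threads='1'/>
  <cache mode='passthrough'/>
</cpu>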
  3. ghost82's post in GPU Passthrough - Turned off VM Power Consumption GPU - IDLE? was marked as the answer   
I think it won't change anything. You say it's disconnected from unraid, but in reality it's not, because it's always bound to a driver: nvidia, amd or vfio.
  4. ghost82's post in Changing the HardDrive Model Number possible? was marked as the answer   
    That's because model is not defined in libvirt.
    I don't think disk model was implemented in libvirt.
    Try to add a qemu override and see if that works.
    So, for serial and model do the following:
    - define the disk block with your disk, set a serial and give it an alias starting with ua-:
     
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/path/to/disk.img'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' unit='1'/>
  <alias name='ua-mydisk'/>
  <serial>YOURSERIALNUMBERHERE</serial>
</disk>

- add at the bottom, just before the closing </domain> tag, a qemu override referring to the alias:

<qemu:override>
  <qemu:device alias='ua-mydisk'>
    <qemu:frontend>
      <qemu:property name='model' type='string' value='VMware Virtual IDE Hard Drive'/>
    </qemu:frontend>
  </qemu:device>
</qemu:override>
</domain>

And make sure you defined the proper qemu schema at the top of your xml:

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  5. ghost82's post in Windows 11 VM freezes randomly when PCI passthrough was marked as the answer   
This may be caused by virtiofs; try to stop using it and use a samba share instead, for example, and see if the crashes stop.
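If present, the virtiofs share is a <filesystem> block in the xml, something like this sketch (the source dir and target tag here are made up); removing the block disables virtiofs:

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs'/>
  <source dir='/mnt/user/yourshare'/>
  <target dir='yourshare'/>
</filesystem>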
  6. ghost82's post in Windows VM drive setup (2023) was marked as the answer   
When you compare things to bare metal, so without taking the cache into consideration, the best thing is to have the vm behave as close as possible to bare metal, because everything you emulate adds some overhead, which you may notice or not, depending on how you use your vm.
In your case you are using an emulated (virtio or sata) controller to attach an emulated vdisk.
Better would be an emulated controller (virtio or sata) with a physical disk passed through.
Best is to pass through the controller (with the disk attached), sata or nvme; see the sketch below.
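A sketch of the last option, an nvme controller passed through as a pci hostdev (the 02:00.0 source address is hypothetical, take yours from System Devices):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <boot order='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</hostdev>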
  7. ghost82's post in Windows 10VM wont recognise GPU was marked as the answer   
    There are several things that are wrong:
1. vfio configuration: it seems you changed the slot of the gpu, but the vfio config is old and your gpu (multifunction) is not attached to vfio at boot
2. Your gpu is still in use by the host because BAR3 is assigned to efifb; just add 'video=efifb:off' to the syslinux configuration
3. Your gpu is flagged as boot vga by the host: you need to pass a vbios in the vm configuration, otherwise it won't work; either dump it from your gpu (recommended) or download one from techpowerup and hex edit it (remove the nvidia nvflash header)
4. I would advise making a new vm of q35 type (not i440fx) + ovmf: q35 has better support for passed-through pcie devices
5. I suggest passing through all the components of the gpu to the vm, not only the video part, i.e.:
02:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2070 Rev. A] [10de:1f07] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] TU106 [GeForce RTX 2070 Rev. A] [1462:3734]
        Kernel driver in use: vfio-pci
        Kernel modules: nvidia_drm, nvidia
02:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] TU106 High Definition Audio Controller [1462:3734]
02:00.2 USB controller [0c03]: NVIDIA Corporation TU106 USB 3.1 Host Controller [10de:1ada] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] TU106 USB 3.1 Host Controller [1462:3734]
        Kernel driver in use: xhci_hcd
02:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C UCSI Controller [10de:1adb] (rev a1)
        Subsystem: Micro-Star International Co., Ltd. [MSI] TU106 USB Type-C UCSI Controller [1462:3734]
After you set up the vm you need to switch to the xml view and set the gpu as a multifunction device for the target.
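A sketch of that multifunction layout for the first two functions (source addresses from the lspci output above; the guest-side bus/slot values are examples):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>

Functions 0x2 and 0x3 follow the same pattern.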
     
    All of this is covered in the VM Engine (KVM) subforum.
  8. ghost82's post in passthrough unmounted linux_raid_member to Ubuntu VM was marked as the answer   
In Unassigned Devices, set the 'Pass Through' switch to on for that drive.
Navigate with the terminal to /dev/disk/by-id/
List the content with the ls command and identify the drive:
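For illustration, the listing looks something like this (a mock-up built from the ids discussed below; your output will differ):

ls /dev/disk/by-id/
ata-Hitachi_HTS542525K9SA00_080713BB2F10WDETGEEA
ata-Hitachi_HTS542525K9SA00_080713BB2F10WDETGEEA-part1
ata-Hitachi_HTS542525K9SA00_080713BB2F10WDETGEEA-part2
ata-WDC_WD20EZRX-00DC0B0_WD-WCC300035617
ata-WDC_WD20EZRX-00DC0B0_WD-WCC300035617-part1
ata-WDC_WD60EZRX-00MVLB1_WD-WX21D74532LF
ata-WDC_WD60EZRX-00MVLB1_WD-WX21D74532LF-part1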

     
You can see I have 3 disks starting with ata-; the first disk has 2 partitions, the other 2 have one partition each.
The id of interest is the one without the -partX suffix, for example:
    ata-Hitachi_HTS542525K9SA00_080713BB2F10WDETGEEA
    ata-WDC_WD20EZRX-00DC0B0_WD-WCC300035617
    ata-WDC_WD60EZRX-00MVLB1_WD-WX21D74532LF
     
When you list the content the ids may have '@' at the end of the file name (ls marks symlinks that way); ignore it.
     
Go to your linux vm xml and add a new block for the drive, inside <devices></devices>, for example:

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/XXXXXXXXXXX'/>
  <backingStore/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
    Replace XXXXXXXX with the correct id, i.e. ata-Hitachi_HTS542525K9SA00_080713BB2F10WDETGEEA
    This will attach the disk to the default emulated sata controller of the linux vm.
  9. ghost82's post in unRAID server locking up after setting up Win 11 pro vm with 7900xt passthrough was marked as the answer   
    Apr 13 17:40:10 Tower kernel: Kernel command line: BOOT_IMAGE=/bzimage initrd=/bzroot pci=noaer iommu=pt amd_iommu=on nofb nomodeset initcall_blacklist=sysfb_init isolcpus=10-15,26-31  
The nofb parameter is no longer valid in the syslinux configuration; replace it with:
     
    video=efifb:off  
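So the whole append line (all on one line) would become something like:

append initrd=/bzroot pci=noaer iommu=pt amd_iommu=on nomodeset initcall_blacklist=sysfb_init isolcpus=10-15,26-31 video=efifb:off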
  10. ghost82's post in Getting Execution error trying to pass USB controller to VM was marked as the answer   
    Apr 10 13:54:15 Storage kernel: vfio_iommu_type1_attach_group: No interrupt remapping support. Use the module param "allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform  
Go to Settings -> VM -> change "VFIO allow unsafe interrupts" to Yes. Apply, reboot and try again.
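That setting corresponds to the module parameter named in the log; applied by hand it would be something like this line in a modprobe config (just a sketch, the gui toggle does the equivalent for you):

options vfio_iommu_type1 allow_unsafe_interrupts=1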
  11. ghost82's post in VM - GPU Passthrough Audio Not Working was marked as the answer   
    Replace this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x10' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</hostdev>
    with this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x10' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x10' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x10' slot='0x00' function='0x2'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x10' slot='0x00' function='0x3'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
</hostdev>
  12. ghost82's post in GPU passthrough not working on Win11 was marked as the answer   
Don't use the drivers bundled with windows; download and install the drivers from the nvidia website.
The windows drivers may be outdated, and nvidia didn't allow consumer gpu passthrough in the past.
    If that doesn't work, attach diagnostics.
  13. ghost82's post in VM Ubuntu 22.04 no network after changing vm configuration (unraid os 6.10.3) was marked as the answer   
Not a network expert, but why do you have bond0 if you have only eth0? What are you bonding eth0 with?
Maybe I'm wrong, but I would disable bonding and set only bridging (br0) on eth0.
After this, check if you have a connection on the host, then start the vm, check if the virtio device is recognized in the guest, then check for internet in the vm.
The xml is ok.
  14. ghost82's post in Reserved VM RAM & CPU was marked as the answer   
Yes, I confirm that cpu(s) and ram can be used by the host (unraid); they are not locked/reserved when the vm is not running.
  15. ghost82's post in Nvidia 3060Ti GPU Passthrough not working for Win 10VM - Exhausted all of Spaceinvader Ones videos lol need halp! was marked as the answer   
The xml is ok; make sure the gpu part you highlighted doesn't change after you make further modifications.
     
    As you can see in your logs:
     
Feb 4 10:44:34 Akashic-Records kernel: pci 0000:06:00.0: BAR 1: assigned to efifb
...
Feb 4 10:59:41 Akashic-Records kernel: vfio-pci 0000:06:00.0: BAR 1: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref]

You need to modify your syslinux config (Main > Flash > Syslinux Configuration) and add video=efifb:off to the append line, like this:

append initrd=/bzroot video=efifb:off
Note that since this gpu is marked as the boot gpu, the screen will seem frozen on boot, because the gpu will attach to vfio; unraid will still boot, so connect to unraid from another device on the lan.
     
    Reboot the server.
     
It also seems that you are using a wrong vbios.
From what I found, the vbios you attached is for a 3060 Ti with vendor/device id 10de:2489, subsystem 1043:8829.
From your lspci, your device is 10de:2489, subsystem 1043:884f.
It is always better to dump the vbios directly from YOUR gpu: this prevents you from using a wrong vbios, as seems to be the case here; only in extreme cases should you download a vbios from techpowerup.

So, dump the vbios from your gpu and use it (check if hex editing is needed, i.e. if dumped from windows with gpu-z), or, if you have no alternative and it doesn't work with your own vbios, try the attached one (already hex edited).
    241689.rom
  16. ghost82's post in MacOS Macinabox CPU issue was marked as the answer   
    https://github.com/SpaceinvaderOne/Macinabox/blob/master/xml/Macinabox BigSur.xml#L132-L142
  17. ghost82's post in New to unraid - 1 (starting?) questions was marked as the answer   
It should be possible to leave the hdmi plugged in and:
1. bind the gpu to vfio (audio, video and usb controller parts, if any)
2. if booting unraid with efi, prevent efifb from attaching to the gpu (video=efifb:off in the syslinux config)
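For step 1, ticking the gpu's devices in the unraid gui (system devices / iommu groups page) writes the binding to config/vfio-pci.cfg on the flash drive, something like this (the addresses and vendor/device ids here are made up):

BIND=0000:02:00.0|10de:1f07 0000:02:00.1|10de:10f9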
  18. ghost82's post in HELP, cant create an empty VM to passthrough NVMe: VM creation error was marked as the answer   
Exactly; try this, copy/paste and save the xml:
<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>Windows 11</name>
  <uuid>bf15b8e2-225d-d84f-2b67-00d3e3e61b20</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 11" icon="windows11.png" os="windowstpm"/>
  </metadata>
  <memory unit='KiB'>9437184</memory>
  <currentMemory unit='KiB'>9437184</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='6'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-7.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/bf15b8e2-225d-d84f-2b67-00d3e3e61b20_VARS-pure-efi-tpm.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv mode='custom'>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='8' threads='1'/>
    <cache mode='passthrough'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='qemu-xhci' ports='15'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:b2:bb:98'/>
      <source bridge='virbr0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-gb'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <tpm model='tpm-tis'>
      <backend type='emulator' version='2.0' persistent_state='yes'/>
    </tpm>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <alias name='ua-sm2262'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <qemu:override>
    <qemu:device alias='ua-sm2262'>
      <qemu:frontend>
        <qemu:property name='x-msix-relocation' type='string' value='bar2'/>
      </qemu:frontend>
    </qemu:device>
  </qemu:override>
</domain>
If it doesn't work, attach diagnostics.
If part of the code gets stripped, it could be an unraid issue.
You didn't have any graphics defined in the xml; I added basic qxl + vnc, so connect to the vm with the builtin novnc.
Once (and if) you fix the nvme issue, you can try to pass through the igpu.
  19. ghost82's post in Rename Primary vDisk Folder? was marked as the answer   
    Yes.
     
Backup first; if something doesn't work, fix it or revert.
  20. ghost82's post in Ubuntu 20.04 VM runs with VNC but cannot start with GPU passthrough was marked as the answer   
You are out of memory! (I didn't notice it in the first diagnostics)
Jan 3 15:58:08 visionserver kernel: Out of memory: Killed process 10811 (qemu-system-x86) total-vm:65246480kB, anon-rss:63859640kB, file-rss:8kB, shmem-rss:24688kB, UID:0 pgtables:125152kB oom_score_adj:0
Then you have several kernel panics.
Try 32 GB of ram in the vm.
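In the xml that's the memory block; 32 GiB is 33554432 KiB:

<memory unit='KiB'>33554432</memory>
<currentMemory unit='KiB'>33554432</currentMemory>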
  21. ghost82's post in Troubleshooting EMULATOR '/USR/LOCAL/SBIN/QEMU' DOES NOT SUPPORT VIRT TYPE 'KVM' was marked as the answer   
    https://download.gigabyte.com/FileList/Manual/mb_b450m-ds3h-v2_e.pdf
    SVM mode
    https://www.youtube.com/watch?v=Q-_3Hzl2gvI
     
Search also for iommu and set it to enabled if you want to pass through devices.
  22. ghost82's post in VM performance issues Disk speed with NVME and SSD drives (Solved) was marked as the answer   
    Another way could be injecting the virtio driver in this way:
     
    0. switch the boot disk from sata to virtio in the vm
     
    1. add to the vm 2 cd rom drives: the win 10 installation iso and the virtio iso
     
    2. boot from the win 10 cd rom and get a cmd from the repair mode option
     
3. load the driver (assuming e: is the virtio cd rom drive):

drvload e:\viostor\w10\amd64\viostor.inf

A new drive will be mounted (your virtio boot disk, assuming f:)
     
    4. use DISM to inject the virtio controller driver:
    dism /image:f:\ /add-driver /driver:e:\viostor\w10\amd64\viostor.inf  
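If you want to double check that the injection took, dism can list the third party drivers in the offline image (same f: drive letter assumption):

dism /image:f:\ /get-drivers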
  23. ghost82's post in Cannot get NIC to show when setting up a VM was marked as the answer   
    Syslinux config looks good to me, but I would set it to:
    pcie_acs_override=downstream,multifunction
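i.e. the append line of the label you boot would look something like:

append initrd=/bzroot pcie_acs_override=downstream,multifunction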
    Reboot the server.
Go to your iommu groups through the unraid gui and see if you can put checkmarks on the ethernet controllers that you want to pass through, then save and reboot the server.
Now go to your vm and see if something shows up.
Note that you can't pass through ethernet controllers that are in use by unraid (which seems to be the case, since you have eth0 --> eth3 set up in unraid); their boxes will show greyed out in the iommu groups.
    --
The above instructions may not work.
Be sure to have another device to attach the unraid usb to, in case something goes wrong and you need to restore a backup.
Not sure, but you may need to set the config manually, since all the controllers have the same vendor/device id:
1. backup the unraid usb, so you can restore it if something goes wrong
2. open config/vfio-pci.cfg on the unraid usb stick with a text editor
3. add: BIND=0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1|14e4:165f
Note that I didn't include 01:00.0, which should be eth0, needed and reserved for unraid.

Save and reboot, and your eth1 --> eth3 (3 ports) should now be isolated, without losing eth0 connectivity for unraid.
  24. ghost82's post in GPU reserved to a VM which no longer exist. kernel: vfio-pci 0000:2b:00.0: BAR 1: can't reserve [mem 0xd0000000-0xdfffffff 64bit pref] was marked as the answer   
    see here:
default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot video=efifb:off
label Unraid OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label Unraid OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Unraid OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest
    Start "Unraid OS" (no gui) and that argument will be applied.
If it doesn't work, attach diagnostics.
     
I suggest modifying it from the unraid gui rather than the file:
    Main - Boot Device - Flash - Syslinux Configuration
     
Modify the append line (all on one line) of the label "Unraid OS" like this:

append initrd=/bzroot video=efifb:off

Then save and reboot.
  25. ghost82's post in Virtual machine log alarm was marked as the answer   
Basically, it means that qemu detected a device whose option rom won't work when the device is passed through to a vm (vfio), so it automatically notified you and disabled the rom for the virtual machine.
     
    --
Looking at the qemu source code, it seems your device is a broadcom BCM 57810, or at least a device with vendor 0x14e4, id 0x168e:

if (vfio_opt_rom_in_denylist(vdev)) {
    if (dev->opts && qdict_haskey(dev->opts, "rombar")) {
        warn_report("Device at %s is known to cause system instability"
                    " issues during option rom execution",
                    vdev->vbasedev.name);
        error_printf("Proceeding anyway since user specified"
                     " non zero value for rombar\n");
    } else {
        warn_report("Rom loading for device at %s has been disabled"
                    " due to system instability issues",
                    vdev->vbasedev.name);
        error_printf("Specify rombar=1 or romfile to force\n");
        return;
    }
}
    Your case is inside the else block.
     
    The "deny list" is specified here, together with the explanation of why they included that device:
/*
 * List of device ids/vendor ids for which to disable
 * option rom loading. This avoids the guest hangs during rom
 * execution as noticed with the BCM 57810 card for lack of a
 * more better way to handle such issues.
 * The user can still override by specifying a romfile or
 * rombar=1.
 * Please see https://bugs.launchpad.net/qemu/+bug/1284874
 * for an analysis of the 57810 card hang. When adding
 * a new vendor id/device id combination below, please also add
 * your card/environment details and information that could
 * help in debugging to the bug tracking this issue
 */
static const struct {
    uint32_t vendor;
    uint32_t device;
} rom_denylist[] = {
    { 0x14e4, 0x168e }, /* Broadcom BCM 57810 */
};
To force loading the rom you need to specify a romfile in the xml of the vm, like you do with a vbios for a gpu passthrough:

<rom file='/path/to/romfile.rom'/>

inside the hostdev block.
     
Or (I think) you can force loading the rom with:

<rom bar='on'/>

inside the hostdev block.
     
But are you sure you want to pass it through, even knowing it could cause issues? If the deny list is hardcoded into qemu, I would not try to work around it...