ghost82

Community Answers

  1. ghost82's post in Newbie VM Confusion was marked as the answer   
    If the vm is properly configured, with nothing (or nearly nothing) else running on the host, you can expect the vm to perform about 5-10% below bare metal; so if you assign, let's say, 4 cores/4 threads to the vm, it will perform like a real 4 cores/4 threads minus 5-10%; this is a rough estimate.
    This is only a matter of money, but make sure the cpu(s) support vt-d and vt-x, so you will be able to pass through hardware (a discrete gpu?) via vfio to the vm.
     
    About the gpu, to give you an example: I have the latest 6900xt, which gives a score of about 140,000 in Geekbench (it's macOS); online results with the same gpu are higher (about 180,000), but this may be due to a cpu bottleneck, as I have 2 old sandy bridge xeon cpus. I would say the gpu will also perform near bare metal; vfio is good. Make sure to use the q35 machine type for the vm so it is more compatible with pcie passthrough.
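    For reference, the machine type lives in the os block of the vm xml; a minimal sketch (the machine version here is just an example, use the one your unraid/qemu provides):
    <os>
      <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
    </os>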
  2. ghost82's post in VM GPU Passthrough, EPYC 7551, Nvidia 2080 on Supermicro H11SSL-i was marked as the answer   
    Check: 
    BIOS >> Advanced >> NB Configuration >> IOMMU
    --> it must be set to enabled, not auto, not disabled
  3. ghost82's post in Windows VM looking for USB device, won't start (no device in config) was marked as the answer   
    Open the vm in xml mode and check at the bottom if there is something related to that usb device; if it's there, manually delete the offending block of code.
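    A leftover usb passthrough entry usually looks like a hostdev block of this form (the vendor/product ids below are just placeholders):
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1234'/>
        <product id='0x5678'/>
      </source>
    </hostdev>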
  4. ghost82's post in GPU acceleration in Monterey VM was marked as the answer   
    Monterey has no drivers for the 1070, so there's no way to get acceleration or any video output from that gpu; stick to high sierra (with nvidia drivers installed), change the gpu to a compatible one, or do not use gpu passthrough.
  5. ghost82's post in Graphics card NVIDIA 1080 TI was marked as the answer   
    The monitor is attached to the gtx, but the gpu is bound to vfio, so it's perfectly normal that at some point during unraid boot, after some video output, the screen looks frozen. The gpu is isolated and the os cannot use it for its video output anymore, because you set it to be reserved for something else (a vm, for example).
    Connect to unraid from another device and you will find that it's booting and not crashing.
  6. ghost82's post in My qcow2 VM image for Home Assistant seems corrupted, thoughts? was marked as the answer   
    The data should be in the larger partition, nbd0p8; try to mount it somewhere.
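    A minimal sketch of the usual qemu-nbd workflow (image path and mount point are examples):
    modprobe nbd max_part=16
    qemu-nbd --connect=/dev/nbd0 /path/to/haos.qcow2
    mount /dev/nbd0p8 /mnt/rescue
    # copy the data out, then clean up:
    umount /mnt/rescue
    qemu-nbd --disconnect /dev/nbd0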
  7. ghost82's post in [SOLVED][6.10] Cannot boot VM Windows Server 2003 Installation ISO was marked as the answer   
    You set up the disk with virtio: no windows version ships with virtio drivers.
    I suggest setting up the vm with legacy devices (sata, ide, e1000, etc.), at least for the basic devices.
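    As an example, a basic ide disk block in the xml would look something like this (the file path is a placeholder); with a q35 machine you could use bus='sata' instead:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Win2003/vdisk1.img'/>
      <target dev='hda' bus='ide'/>
    </disk>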
  8. ghost82's post in Using A Passthru GPU But Still Would Like To Access Via VNC From Unraid - How ? was marked as the answer   
    It should be possible if you add an emulated gpu as primary in the xml.
    vmvga should be preferred as the model for compatibility (it's like 'vmware compatible').
    This will result in something like this:
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vmvga' vram='9216' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    vram can be tweaked if necessary.
    address 00:02.0 should be used for the emulated primary gpu.
     
    Note that this will set the primary gpu to the emulated one, so the os will use it by default, defeating the purpose of the discrete gpu passthrough... but this depends on the use case.
     
    I would try to avoid this and solve the issue with a vnc server/teamviewer or whatever you want installed inside the vm.
  9. ghost82's post in Upgraded to 6.10.3 - internal error: process exited while connecting to monitor for VM was marked as the answer   
    You need disk type='block' (not 'file') for the passed-through disk (by-id).
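    For reference, the resulting disk block would look something like this (the by-id path and target are examples):
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL'/>
      <target dev='hdc' bus='sata'/>
    </disk>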
  10. ghost82's post in GPU Passthrough issue: BAR 1: can't reserve was marked as the answer   
    In the syslinux config, add this to the append line:
    video=efifb:off  
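    On a stock unraid install the resulting line would look something like this (a sketch; keep whatever else is already on your append line):
    append video=efifb:off initrd=/bzroot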
  11. ghost82's post in VM with another NIC in host-only network for NFS was marked as the answer   
    Hi, how many physical nics do you have in the system?
    Easiest and fastest way, if you have 2 nics (at least).
    Configure both nics in unraid for bridge (br0 and br1).
    Let's say you have eth0 and eth1: eth0 bridged to br0, eth1 bridged to br1.
    Since eth0 has internet access, br0 will have internet access too, so use br0 in the vm; configure eth0/br0 (eth0 in the host, br0 in the vm) with dhcp from the router, or assign ips manually in the network 192.168.172.0/24.
    Since eth1 has no internet access (no cable plugged into the adapter), br1 will not have internet access; use the additional br1 in the vm and configure eth1/br1 (eth1 in the host, br1 in the vm) manually to have ips in the network 10.1.1.0/24.
     
    If you have only one nic (eth0):
    Since eth0 has internet access, br0 will have internet access too, so use br0 in the vm; configure eth0/br0 (eth0 in the host, br0 in the vm) with dhcp from the router, or assign ips manually in the network 192.168.172.0/24.
     
    For the second nic I think you can create a virtual network (vnet)? You could also use virbr0, which has ips in 192.168.122.0/24; for custom ip addresses you need to define the new network in a new xml and enable it.
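    A minimal sketch of such a network xml (name and addresses are examples); save it as hostonly.xml, then run virsh net-define hostonly.xml followed by virsh net-start hostonly:
    <network>
      <name>hostonly</name>
      <bridge name='virbr1' stp='on' delay='0'/>
      <ip address='10.1.1.1' netmask='255.255.255.0'/>
    </network>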
     
    Or
    For the second local network (10.1.1.0/24) you may create a dummy nic in the host (dummy1), bridge it (br1), and assign the ips manually: I never tried it in unraid (I don't know if unraid includes the dummy kernel module), but in other generic linux oses it's feasible.
    Depending on your case I can try to see if it works in unraid too.
     
    For this second case, in a generic linux host, it works like this with systemd-networkd:
     
    in /etc/systemd/network/
     
    file bridge1.netdev:
    [NetDev]
    Name=br1
    Kind=bridge
    file bridge1.network:
    [Match]
    Name=br1

    [Link]
    MACAddress=4e:c0:b1:12:13:a2

    [Network]
    Address=10.1.1.1/24

    [Route]
    Gateway=10.1.1.1
    Metric=2048
    file dummy1.netdev:
    [NetDev]
    Name=dummy1
    Kind=dummy
    file dummy1.network:
    [Match]
    Name=dummy1

    [Network]
    Bridge=br1
    DHCP=no
  12. ghost82's post in Passthrough issues was marked as the answer   
    Hi,
    you need to:
    1. setup the gpu as multifunction in the vm (see the sketch at the end of this post) <-- to be done
    2. it should be isolated (bound to vfio) <-- done
    3. allow unsafe interrupts may be required in unraid <-- done
    4. newest drivers should be installed <-- cannot say anything on this
    5. modifications to the syslinux config may be required (e.g. video=efifb:off) <-- (to be done)
    6. q35+ovmf should be preferred <-- give it a try
    7. video rom should be dumped from your gpu and not downloaded somewhere <-- cannot say anything on this
     
    Setup a q35+ovmf virtual machine with vnc, with all the advice above; enable remote desktop inside the vm; shut down the vm; enable gpu passthrough; then boot and connect directly to the vm with remote desktop from a second external device to install the drivers. Look at the system devices for errors if it doesn't work.
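    For point 1, a multifunction gpu in the xml looks something like this (source and target addresses are examples): the video and audio functions of the card share one virtual slot, and function 0x0 carries multifunction='on':
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
    </hostdev>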
  13. ghost82's post in HP Z230 Can't get Win10 VM booting to installer. Ubuntu VM works fine. was marked as the answer   
    Try downloading the windows 10 iso again and/or check its hash to see if it's corrupted.
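    For example (the iso name is a placeholder; compare the output with the hash published by Microsoft):
    sha256sum Win10_English_x64.iso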
  14. ghost82's post in Issue creating VMs on Unraid 6.10 was marked as the answer   
    The issue is with the audio device at 00:1f.3 that you are trying to pass through.
    You attached nothing to vfio at boot; all devices that you want to pass through should be attached to vfio at boot.
    Currently, audio is in the same iommu group with:
    00:1f.0 ISA bridge [0601]: Intel Corporation C236 Chipset LPC/eSPI Controller [8086:a149] (rev 31)
            Subsystem: Dell Device [1028:07c5]
    00:1f.2 Memory controller [0580]: Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller [8086:a121] (rev 31)
            DeviceName: Onboard SATA #1
            Subsystem: Dell Device [1028:07c5]
    00:1f.3 Audio device [0403]: Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller [8086:a170] (rev 31)
            Subsystem: Dell Device [1028:07c5]
    00:1f.4 SMBus [0c05]: Intel Corporation 100 Series/C230 Series Chipset Family SMBus [8086:a123] (rev 31)
            Subsystem: Dell Device [1028:07c5]
            Kernel driver in use: i801_smbus
            Kernel modules: i2c_i801
    You may want to apply the acs override patch in unraid to see if it can break iommu group 7 and separate the audio.
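    If you go that route, the acs override is enabled from the syslinux append line, something like this (a sketch; the exact options depend on your hardware):
    append pcie_acs_override=downstream,multifunction initrd=/bzroot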
     
    Or... do not pass through audio at all.
  15. ghost82's post in Error 43 - GPU Passthrough - Bare metal Win 10 Pro uefi boot - Nvidia GTX 1060 6G - unraid 6.10.2 was marked as the answer   
    From the logs:
    Jun 3 12:07:41 MYDEEDEE kernel: pci 0000:0a:00.0: vgaarb: setting as boot VGA device
    The 1060 is chosen as boot vga by the host.
    However, after this, it is successfully attached to vfio, so it should work.
    The vm is correctly configured and also the vfio options.
    I would also try enabling allow unsafe interrupts in settings --> vm.
     
    Did you try to install the latest nvidia drivers with the gpu passed through, connecting to the vm through remote desktop installed inside it?
    If you have old nvidia drivers or no drivers installed, there could be no output.
    Are you 100% sure the vbios is working?
     
    Even if the infamous nvidia error 43 can be caused by nvidia detecting the gpu running in a vm (only with old drivers), you may also change the xml from this:
    <features>
      <acpi/>
      <apic/>
      <hyperv mode='custom'>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vendor_id state='on' value='1234567890ab'/>
      </hyperv>
      <ioapic driver='kvm'/>
    </features>
    to this:
    <features>
      <acpi/>
      <apic/>
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vendor_id state='on' value='1234567890ab'/>
      </hyperv>
      <kvm>
        <hidden state='on'/>
      </kvm>
      <vmport state='off'/>
      <ioapic driver='kvm'/>
    </features>
     
  16. ghost82's post in Does Intel GVT-g support 12th gen CPUs? was marked as the answer   
  17. ghost82's post in WIN 10 VM \Boot\BCD I/O error after unraid 6.10.1 update was marked as the answer   
    I never used seabios, so I don't know, but it could be that you need to enable the bootmenu and add a timeout in your xml.
    Anyway, I see that the vm is using seabios 1.15; I compiled the latest version (1.16.0-4-gdc88f9b), maybe you can try this version (attached).
    Extract the file bios.bin from the zip and save it somewhere on unraid (i.e. /path/to/bios.bin).
    Open the vm settings in xml view and change from:
    <os>
      <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
    </os>
    to:
    <os>
      <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
      <loader type='rom'>/path/to/bios.bin</loader>
      <boot dev='hd'/>
      <bootmenu enable='yes' timeout='30000'/>
    </os>
    Try to boot.
    As you can see I enabled the bootmenu with a timeout of 30 seconds, so you should have enough time to see what boot options it proposes.
    Seabios-1.16.0-4-gdc88f9b.zip
  18. ghost82's post in libvirt Service won't start was marked as the answer   
    There are kernel panics related to the amd gpu.
    You are running unraid 6.9.2 and you are talking about a "6900": is it the 6900xt? That is pretty new, and if you're using an old unraid version the drivers may not play well; try to upgrade to 6.10.1.
    Once upgraded, check that the gpu (audio, video, etc.) is bound to vfio.
  19. ghost82's post in Unraid 6.10. Win10 VM not able to start Intel BT/WIFi 3168 Device Error Code 10 was marked as the answer   
    @astronax I think I found the culprit for the issue "xml is not saving".
    I had a spare usb key with unraid 6.10.1 and an additional pendrive, so I made the array on that pendrive just to test the qemu/libvirt behavior.
    Just a note on my post above: before using the virsh command, one should export nano as the default editor with this command (in the terminal):
    export EDITOR='/usr/bin/nano'
    then run the virsh command.
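    The virsh command in question would be virsh edit, which opens the vm xml in $EDITOR; for example (the vm name is a placeholder):
    virsh edit Windows10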
    --------
    However, running the virsh command is not needed; the unraid gui in xml view can be used.
     
    The issue is that the domain type line is stripped by unraid.
     
    When you view your xml in unraid make 2 changes:
    1. at the top you will see a line like this:
    <domain type='kvm'>
    Change it to:
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>  
    2. at the bottom, before the </domain> tag, add:
    <qemu:capabilities>
      <qemu:del capability='usb-host.hostdevice'/>
    </qemu:capabilities>
     
    This time it will save.
     
    I found this because the virsh command failed to validate too, because the qemu schema was not defined.
     
    PS: not sure if this will solve your bluetooth/wifi issue, just try...
  20. ghost82's post in Home assistant vm wont start after multiple things. need help sorting this out. was marked as the answer   
    See if this helps:
    https://forums.unraid.net/topic/123419-functionnal-vm-wont-boot-anymore-stuck-on-autoboot/
     
  21. ghost82's post in iGPU acceleration for RDP on VM was marked as the answer   
    If you google "enable gpu acceleration over rdp" you will find solutions for discrete gpus that tweak the windows registry, if I remember well... this could also work for the igpu... or not...
    I know what you are thinking now: if you don't know, why do you reply?
    I would not consider RDP at all for your use case: if you pass through a gpu/igpu you are enabling hardware acceleration, so your apps/games should use hardware acceleration, but rdp will render the screen output and send it over the network; this is how rdp works (vnc, for example, works in a different way).
    I would consider parsec or any other protocol designed for game streaming and avoid rdp, obviously with a passed-through gpu/igpu.
  22. ghost82's post in ServerCore VirtIO driver install was marked as the answer   
    Modify it manually instead of with the gui.
    Switch to xml view in your vm settings (top right) and find the network block; here is an example:
    <interface type='bridge'>
      <mac address='aa:aa:aa:aa:aa:aa'/>
      <source bridge='br1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    Change the model type from virtio or virtio-net to e1000-82545em, save, and boot the vm.
     
    Post your diagnostics if you have issues.
  23. ghost82's post in Boot Order Messed Up was marked as the answer   
    https://forums.unraid.net/topic/123300-vm-bios-change/
  24. ghost82's post in Trouble creating new VM from existing Windows 10 disk img was marked as the answer   
    Since you have a dos partition table, I think it was a legacy bios installation and not uefi, which requires gpt.
    Moreover, the 3 partitions reflect those of a legacy bios installation: system, windows, recovery.
    Simply use seabios to boot that disk, instead of ovmf.
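    In the vm xml, seabios is simply the default firmware when no ovmf pflash loader is defined; a minimal sketch of the os block (the machine version is an example):
    <os>
      <type arch='x86_64' machine='pc-i440fx-6.2'>hvm</type>
      <boot dev='hd'/>
    </os>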
  25. ghost82's post in Random hang on shutdown/reboot if msi enabled on gpu [Windows 11]? was marked as the answer   
    Nevermind, it was a placebo effect: it seemed it didn't crash for several reboots, then it crashed again.
    Thanks to an app that analyzes memory dumps (WhoCrashed), I was able to identify the culprit: it's a driver power failure of my mellanox infiniband card.
    It's a very old card: it worked well in windows 7 and survived windows 10, but it seems to have issues in windows 11. Luckily I don't use it very much, and I can disable the card and unload the driver (ibbus.sys) before rebooting/shutting down.