ghost82

Members
  • Posts: 2721
  • Joined
  • Last visited
  • Days Won: 19

Community Answers

  1. ghost82's post in GPU Passthrough not working after updating to 6.11.5 was marked as the answer   
    Nov 24 16:04:25 Tower kernel: pci 0000:11:00.0: vgaarb: setting as boot VGA device
    --> you need a vbios passed to the vm.
     
    Nov 24 16:04:25 Tower kernel: pci 0000:11:00.0: BAR 0: assigned to efifb
    --> you need video=efifb:off in syslinux.
     
    Multifunction should be set up for gpu passthrough.
     
    q35 should be preferred over i440fx.
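    To illustrate the multifunction point: in the vm xml both gpu functions (video and audio) share one virtual slot, with function 0x0 marked multifunction='on'. A sketch, with source addresses taken from the log above and guest addresses as examples only:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- gpu video function on the host -->
    <address domain='0x0000' bus='0x11' slot='0x00' function='0x0'/>
  </source>
  <!-- guest slot is an example; function 0x0 opens the multifunction slot -->
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <!-- gpu audio function on the host -->
    <address domain='0x0000' bus='0x11' slot='0x00' function='0x1'/>
  </source>
  <!-- same guest bus/slot, function 0x1 -->
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
```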
  2. ghost82's post in UBUNTU VM INSTALLATION "YOU NEED TO LOAD THE KERNEL FIRST" was marked as the answer   
    Sounds like a corrupted iso: download it again and verify its checksum before using it, to make sure it's not corrupted.
    Also check that disk and ram have proper values.
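    For example, from a shell (the iso filename below is a placeholder; compare the hash against the value published on the download page, or use the distributor's SHA256SUMS file if provided):

```shell
# Print the SHA-256 hash of the downloaded iso and compare it by eye...
sha256sum ubuntu-22.04-desktop-amd64.iso
# ...or verify automatically against the published checksum file.
sha256sum -c SHA256SUMS --ignore-missing
```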
  3. ghost82's post in NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1) - Multiple Monitor Support Not Working was marked as the answer   
    I would suggest completely uninstalling the nvidia drivers with ddu and installing them again, maybe testing different versions, starting with the version that works on bare metal. Make sure to first delete all nvidia devices (even hidden ones) in the windows device manager too.
    The vbios should be ok, it contains valid legacy and efi vbioses.
  4. ghost82's post in Unable to connect to VM via remote desktop after motherboard update was marked as the answer   
    You are probably using the virbr0 network, looking at your ip. Just change the network from virbr0 to bridged br0.
  5. ghost82's post in 虚拟机突发无法操作 (VM suddenly becomes unresponsive) was marked as the answer   
    You have no vm saved with that uuid.
    You can either:
    1. Open the vm in xml mode and replace this line:
    <uuid>xxxxxxxxxxxxxxxxxxxxxxxx</uuid>
    with:
    <uuid>7c611489-4ea2-11ed-9e19-52540070811c</uuid>  
    2. Or delete the vm from the gui without deleting the vdisk(s) and create a new vm pointing at the existing vdisk(s).
  6. ghost82's post in How do you KVM to a new VM? was marked as the answer   
    Correct, the built-in vnc is available only with a virtual vga, so if you pass through a gpu the built-in option will not be there.
    You could add a virtual vga as primary display with novnc and the passed-through gpu as secondary, but this defeats the purpose of passing through a gpu; better to use a vnc server/rdp solution inside the vm if you need remote access.
  7. ghost82's post in Issues while passing two usb devices with same ID through to VM. was marked as the answer   
    check this, by SimonF:
     
  8. ghost82's post in Unraid 6.11 - Having trouble passing through Intel UHD Graphics 630 to a VM on HP 290 G2 MT Server was marked as the answer   
    Try to add to syslinux config:
    video=efifb:off  
    Add also this:
    modprobe.blacklist=i2c_i801,i2c_smbus  
    Try to pass one of these attached vbioses:
     
    i915ovmf.rom, i915ovmf-simple.rom
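    The resulting syslinux.cfg entry would look something like this (the label and the rest of the append line are illustrative; only the two added options matter):

```
label Unraid OS
  kernel /bzimage
  append video=efifb:off modprobe.blacklist=i2c_i801,i2c_smbus initrd=/bzroot
```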
  9. ghost82's post in Should I even unraid? was marked as the answer   
    I switched from bare metal to vms on my 2 servers 5 years ago and I've never regretted it; note that this applies to any linux host running qemu/libvirt/kvm for virtualization, with unraid only being much more user friendly.
    The switch allowed me to:
    1. install a mac os vm with gpu passthrough that works a lot better than the old genuine macbook pro
    2. install windows 11 on unsupported hardware, thanks to the tpm emulation built into qemu
    3. run whatever os I want: at the current time I have a windows 11 vm (work and gaming), a mac os vm (work) and a kali linux vm (programming)
     
    About your hardware: it's perfectly fine for virtualization; with that 11700K you should have no issues passing through the secondary 3070 gpu to vm(s).
    Unraid needs at least one array to work, which means one drive is dedicated to the array: since your only drive is the nvme, you are somewhat limited; you would dedicate that nvme drive to the array and save and run the virtual disks for the vms on the array.
    To start experimenting this is enough.
     
    In my case, at the current time, in one of my server, I have:
    1. a motherboard with 2 sata controllers built-in
    2. 5 drives: 3 rotational (1 for the array, 2 for smb shares) and 2 ssds (one for windows 11, one for mac os); the linux vm is installed on a vdisk on the array (I don't need much performance for this vm); the 2 ssds are attached to the 2nd sata controller, which is passed through to the vms; the array drive and the other 2 rotational drives for smb are attached to the 1st sata controller
     
    I have 2 pcie gpus, one dedicated to the host (a very old nvidia gpu) and one 6900xt which I'm passing through to vms. But that's only because I don't have an igpu; this is not your case.
     
    As far as performance goes, I'm getting quite close to bare metal.
  10. ghost82's post in Boot from NVME was marked as the answer   
    Here:
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <boot order='1'/>
      <alias name='hostdev2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
  11. ghost82's post in GPU Throttling while in Game? Low FPS Low GPU usage was marked as the answer   
    Is it an hp z420? --> update: yes it is, you wrote it.
     
    If it is, I'm reading that you should have in your bios some settings about "performance profile" or "power regulator settings": check if you have some dynamic settings there and switch to static high performance, if you have that setting. Check all the other power settings in bios, especially related to pcie (if any...).
    I'm reading that some hp servers should have power saving modes in bios that could throttle pcie devices.
  12. ghost82's post in Newbie VM Confusion was marked as the answer   
    If the vm is properly configured, with nothing (or nearly nothing) else running on the host, you can expect the vm to perform at bare metal minus 5-10%; so if you assign, say, 4 cores/4 threads to the vm, it will perform like a real 4-core/4-thread machine minus 5-10%; this is a rough estimate.
    The rest is only a matter of money, but make sure the cpu(s) support vt-d and vt-x, so you will be able to pass through hardware (discrete gpu?) via vfio to the vm.
     
    About the gpu, to give you an example: I have the latest 6900xt, which scores about 140,000 in Geekbench (in mac os); online results with the same gpu are higher (about 180,000), but this may be due to a cpu bottleneck, as I have 2 old sandy bridge xeon cpus. I would say the gpu will also perform near bare metal; vfio is good. Make sure to use the q35 machine type for the vm so it's more compatible with pcie passthrough.
  13. ghost82's post in VM GPU Passthrough, EPYC 7551, Nvidia 2080 on Supermicro H11SSL-i was marked as the answer   
    Check: 
    BIOS >> Advanced >> NB Configuration >> IOMMU
    --> it must be enabled, not auto, not disabled
  14. ghost82's post in Windows VM looking for USB device, won't start (no device in config) was marked as the answer   
    Open the vm in xml mode and check at the bottom if there is something related to that usb device; if it's there, manually delete the offending block of code.
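    The offending block near the bottom of the xml will look something like this (the vendor/product ids below are made-up placeholders; yours will differ):

```xml
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <!-- placeholder ids: match them against the missing usb device -->
    <vendor id='0x046d'/>
    <product id='0xc52b'/>
  </source>
  <address type='usb' bus='0' port='1'/>
</hostdev>
```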
  15. ghost82's post in GPU acceleration in Monterey VM was marked as the answer   
    Monterey has no drivers for 1070, so there's no way to get acceleration or any video output from that gpu; stick to high sierra (with installed nvidia drivers), change gpu with a compatible one, or do not use gpu passthrough.
  16. ghost82's post in Grafigkarte NVIDIA 1080 TI was marked as the answer   
    The monitor is attached to the gtx, but the gpu is bound to vfio, so it's perfectly normal that at some point during unraid boot, after some video output, the screen will look frozen. The gpu is isolated and the os cannot use it for video output anymore, because you set it to be reserved for something else (a vm, for example).
    Connect to unraid from another device and you will find that it's booting and it's not crashing.
  17. ghost82's post in My qcow2 VM image for Home Assistant seems corrupted, thoughts? was marked as the answer   
    Data should be in the larger partition, nbd0p8; try to mount it somewhere.
  18. ghost82's post in [SOLVED][6.10] Cannot boot VM Windows Server 2003 Installation ISO was marked as the answer   
    You set up the disk with virtio: no windows version includes virtio drivers out of the box.
    I suggest setting up the vm with legacy devices (sata, ide, e1000, etc.), at least for the basic devices.
  19. ghost82's post in Using A Passthru GPU But Still Would Like To Access Via VNC From Unraid - How ? was marked as the answer   
    It should be possible if you add an emulated gpu as primary in the xml.
    vmvga should be preferred as model for compatibility (it's like 'vmware compatible').
    This will result in something like this:
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vmvga' vram='9216' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    vram can be tweaked if necessary.
    address 00:02.0 should be used for the emulated primary gpu.
     
    Note that this will set the primary gpu to the emulated one, so the os will use it by default, defeating the purpose of the discrete gpu passthrough... but this depends on the use case.
     
    I would try to avoid this and solve the issues with a vnc server/teamviewer or whatever you want installed inside the vm.
  20. ghost82's post in Upgraded to 6.10.3 - internal error: process exited while connecting to monitor for VM was marked as the answer   
    You need disk type='block' (not 'file') for the passed-through disk (by-id).
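    A sketch of the resulting disk definition (the by-id path, target and bus below are placeholders; use your actual device and keep your existing target/bus):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <!-- placeholder path: use the real /dev/disk/by-id/... entry -->
  <source dev='/dev/disk/by-id/ata-EXAMPLE_SERIAL'/>
  <target dev='hdc' bus='sata'/>
</disk>
```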
  21. ghost82's post in GPU Passthrough issue: BAR 1: can't reserve was marked as the answer   
    syslinux config, add to the append line:
    video=efifb:off  
  22. ghost82's post in VM with another NIC in host-only network for NFS was marked as the answer   
    Hi, how many physical nics do you have in the system?
    Easiest and fastest way, if you have (at least) 2 nics:
    Configure both nics in unraid for bridging (br0 and br1).
    Let's say you have eth0 and eth1: eth0 bridged to br0, eth1 bridged to br1.
    eth0 has internet access, so br0 will have internet access too; use br0 in the vm. Configure eth0/br0 (eth0 in the host, br0 in the vm) with dhcp from the router, or assign ips manually in the 192.168.172.0/24 network.
    eth1 has no internet access (no cable plugged into the adapter), so br1 will not have internet access; use the additional br1 in the vm. Configure eth1/br1 (eth1 in the host, br1 in the vm) manually with ips in the 10.1.1.0/24 network.
     
    If you have only one nic (eth0):
    eth0 has internet access, so br0 will have internet access too; use br0 in the vm. Configure eth0/br0 (eth0 in the host, br0 in the vm) with dhcp from the router, or assign ips manually in the 192.168.172.0/24 network.
     
    For the second nic I think you can create a virtual network (vnet). You could also use virbr0, which has ips in 192.168.122.0/24; for custom ip addresses you need to define the new network in a new xml file and enable it.
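    A minimal sketch of such a network definition (the network name and ip range are examples):

```xml
<network>
  <!-- example name: anything unique will do -->
  <name>isolated1</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <!-- host-side ip of the isolated network; no <forward> element, so no internet -->
  <ip address='10.1.1.1' netmask='255.255.255.0'/>
</network>
```

    Save it, e.g. as isolated1.xml, then run `virsh net-define isolated1.xml`, `virsh net-start isolated1` and optionally `virsh net-autostart isolated1`; the vm can then reference it with `<source network='isolated1'/>` in its interface definition.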
     
    Or
    For the second local network (10.1.1.0/24) you could create a dummy nic in the host (dummy1), bridge it (br1), and assign the ips manually. I never tried this in unraid (I don't know if unraid includes the dummy kernel module), but on other generic linux OSes it's feasible.
    Depending on your case I can try to see if it works in unraid too.
     
    For this second case, on a generic linux host, it works like this with systemd-networkd:
     
    in /etc/systemd/network/
     
    file bridge1.netdev:
    [NetDev]
    Name=br1
    Kind=bridge
    file bridge1.network:
    [Match]
    Name=br1
    
    [Link]
    MACAddress=4e:c0:b1:12:13:a2
    
    [Network]
    Address=10.1.1.1/24
    
    [Route]
    Gateway=10.1.1.1
    Metric=2048
    file dummy1.netdev:
    [NetDev]
    Name=dummy1
    Kind=dummy
    file dummy1.network:
    [Match]
    Name=dummy1
    
    [Network]
    Bridge=br1
    DHCP=No
  23. ghost82's post in Passthrough issues was marked as the answer   
    Hi,
    you need to:
    1. set up the gpu as multifunction in the vm <-- to be done
    2. it should be isolated (bound to vfio) <-- done
    3. "allow unsafe interrupts" may be required in unraid <-- done
    4. the newest drivers should be installed <-- cannot say anything on this
    5. a modification to the syslinux config may be required (e.g. video=efifb:off) <-- to be done
    6. q35+ovmf should be preferred <-- give it a try
    7. the video rom should be dumped from your gpu, not downloaded from somewhere <-- cannot say anything on this
     
    Set up a q35+ovmf virtual machine with vnc, following all the advice above; enable remote desktop inside the vm, shut down the vm, enable gpu passthrough, boot, and connect directly to the vm over remote desktop from a second external device to install the drivers. Look at the system devices for errors if it doesn't work.
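    For reference, the <os> section of a q35+ovmf vm looks something like this (the machine version and firmware paths vary by unraid release; the values below are examples, not the only correct ones):

```xml
<os>
  <!-- machine version is an example; use whatever your unraid release offers -->
  <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
  <!-- ovmf firmware paths are examples from a typical unraid install -->
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
  <nvram>/etc/libvirt/qemu/nvram/EXAMPLE_VARS-pure-efi.fd</nvram>
</os>
```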
  24. ghost82's post in HP Z230 Can't get Win10 VM booting to installer. Ubuntu VM works fine. was marked as the answer   
    Try downloading the windows 10 iso again and/or check its hash to see if it's corrupted.
  25. ghost82's post in Issue creating VMs on Unraid 6.10 was marked as the answer   
    The issue is with the audio device at 00:1f.3 that you are trying to pass through.
    You attached nothing to vfio at boot: all devices that you want to pass through should be bound to vfio at boot.
    Currently, audio is in the same iommu group with:
    00:1f.0 ISA bridge [0601]: Intel Corporation C236 Chipset LPC/eSPI Controller [8086:a149] (rev 31)
        Subsystem: Dell Device [1028:07c5]
    00:1f.2 Memory controller [0580]: Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller [8086:a121] (rev 31)
        DeviceName: Onboard SATA #1
        Subsystem: Dell Device [1028:07c5]
    00:1f.3 Audio device [0403]: Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller [8086:a170] (rev 31)
        Subsystem: Dell Device [1028:07c5]
    00:1f.4 SMBus [0c05]: Intel Corporation 100 Series/C230 Series Chipset Family SMBus [8086:a123] (rev 31)
        Subsystem: Dell Device [1028:07c5]
        Kernel driver in use: i801_smbus
        Kernel modules: i2c_i801
    You may want to apply the acs override patch in unraid to see if it can break up iommu group 7 and separate the audio device.
     
    Or... do not pass through audio at all.
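    In unraid the acs override is toggled in Settings > VM Manager (PCIe ACS override), which adds a kernel option to the append line of the syslinux config; a sketch of the relevant part (the rest of the line is illustrative):

```
append pcie_acs_override=downstream,multifunction initrd=/bzroot
```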