
ghost82


Posts posted by ghost82

  1. Try increasing the ram of the vm: 1gb doesn't seem appropriate, set it to at least 4gb.

    Check the full report (screenshot) to see if it points to anything useful.
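    For reference, memory is set near the top of the xml by the memory and currentMemory elements; this is only a sketch (values are in KiB), so 4gb looks like this:

    ```xml
    <!-- 4 GiB expressed in KiB (4 * 1024 * 1024) -->
    <memory unit='KiB'>4194304</memory>
    <currentMemory unit='KiB'>4194304</currentMemory>
    ```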

    If that doesn't work, try disabling the nic, i.e. delete:
     

        <interface type='bridge'>
          <mac address='52:54:00:53:33:b7'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </interface>

     

  2. Well, if you don't want too much trouble, go with nvidia, or go with the amd 6000 series (which could be expensive depending on your budget).

    Note that unraid includes a kernel fix that should address some older amd gpus, but this may or may not work, depending on brand, firmware, revision, etc...

    A quick search on google, or even here in the forum, for "amd gpu reset bug" will turn up a lot of info.

    If you are going to buy a second-hand nvidia gpu, prefer one for which updated drivers still exist: older nvidia gpus without newer drivers (i.e. with older drivers) cannot be passed through to a vm unless you modify the xml to hide the hypervisor.

    Only with newer nvidia drivers (by "newer" I mean from v. 465) did nvidia allow its consumer gpus (geforce, titan) to be passed through in vms.
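    For reference, hiding the hypervisor is done in the features block of the xml; a minimal sketch (the vendor_id value here is an arbitrary placeholder string, up to 12 characters):

    ```xml
    <features>
      <hyperv>
        <!-- arbitrary value; old nvidia drivers bail out on known hypervisor ids -->
        <vendor_id state='on' value='123456789ab'/>
      </hyperv>
      <kvm>
        <!-- hide the kvm signature from the guest -->
        <hidden state='on'/>
      </kvm>
    </features>
    ```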

  3. 5 hours ago, Lolight said:

    as it's been shown to be the case by the above-mentioned anti-Unraid redditor

    Can you send me the reddit link, in pm if you want? Just curious about what's written there.

    Update: found it, but it doesn't seem sponsored in any way. It seems a simple review, not good, not too bad.

  4. On 11/6/2022 at 1:21 PM, alturismo said:

    i added it upper devices start tag where its also persistennt (if i put it in the end inside the devices block its wiped out)

     

    On 11/6/2022 at 1:21 PM, alturismo said:

    [screenshot]

     

    My closing tag for devices is different, as you can see above: </devices> and not <devices/>

    Sorry, ignore this, the position you wrote is the right one!

     

    As far as the other issue, I'm sorry I didn't try but only reported some findings :(

     

  5. 13 hours ago, 00100100 said:

    How to I edit my unraid config to disable it from using the GPU on boot up if that is an issue?

    It's normal that you have some video output when unraid boots; vfio attaches afterwards.

     

    13 hours ago, 00100100 said:

    Am I editing my xml incorrectly above?

    No, multifunction is applied correctly

     

    13 hours ago, 00100100 said:

    Did I hurt something by letting the the VM boot directly from the PC outside of KVM?

    You didn't

     

    Attach your diagnostics and the vbios file you are using. Note that if you dump the vbios using GPU-Z you still need to remove the header.
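    As a sketch of the header removal (file names here are placeholders, and the dump is simulated so the example is self-contained): a valid vbios begins with the magic bytes 55 AA, so everything before the first occurrence of that magic is GPU-Z header and can be cut off:

    ```shell
    # Stand-in for a real GPU-Z dump: some header bytes, then the 55 AA magic.
    printf 'GPUZHEADER\x55\xaaVBIOSDATA' > dumped.rom

    # Find the byte offset of the first 55 AA magic...
    offset=$(LC_ALL=C grep -abo $'\x55\xaa' dumped.rom | head -n1 | cut -d: -f1)

    # ...and keep everything from the magic onwards.
    dd if=dumped.rom of=vbios.rom bs=1 skip="$offset" 2>/dev/null
    ```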

  6. 7 hours ago, Mattyice said:

    everything works except for the network settings

    Is the issue that you don't have internet inside the vm, or the message that the guest agent is not installed?

    If it's the latter, just install the qemu guest agent in your linux vm.

    The package name may differ between linux distributions; for example it can be qemu-guest-agent.

    After installation, enable it, for example:

    systemctl enable qemu-guest-agent
    systemctl start qemu-guest-agent

    Then check if it's running correctly, for example:
     

    systemctl status qemu-guest-agent

     

    If you don't have internet, change the network type from e1000 to virtio, virtio-net, or e1000-82545em.
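    The type is set on the model element of the interface in the xml; a sketch of switching to virtio (the mac and bridge below are placeholders, keep your own values):

    ```xml
    <interface type='bridge'>
      <mac address='52:54:00:xx:xx:xx'/>  <!-- keep your existing mac -->
      <source bridge='br0'/>
      <model type='virtio'/>              <!-- was e1000 -->
    </interface>
    ```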

  7. 18 minutes ago, alturismo said:

    may a question ahead, while i trigger it inside the VM, would this also trigger it then ?

     

    Are you asking if it will work if you hibernate the vm from inside the guest instead of from the host? My reply is...I don't know :D

    But several users have reported it working with virsh commands; dompmsuspend and dompmwakeup are virsh commands to be run from the host, and the guest requires the guest agent installed.
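    As a sketch, assuming a vm named "Windows10" (placeholder name) with the guest agent running inside it, the host-side flow would be:

    ```shell
    virsh dompmsuspend Windows10 --target mem   # ask the guest to suspend (S3)
    virsh dompmwakeup Windows10                 # wake the guest back up
    ```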

     

    Here are the posts where I got some info:

    https://www.reddit.com/r/VFIO/comments/568mmt/saving_vm_state_with_gpu_passthrough/

     


  8. 6 minutes ago, alturismo said:

    nice idea, sadly not practical here with GPU passthrough's in my VM's

     

    the VM's like to freeze ... and even if not, they stay vfio bounded as the VM is not completely off, so i cant set them in persistence mode ... so the mashine has more power consumption then otherwise ... ;)

    Mmm...this shouldn't happen: if hibernation is set to disk, the vm should report as shut down and the gpu should be free for other uses.

    Did you enable suspend to disk in the xml?

    Check this, it might help:

    https://forums.unraid.net/topic/130134-switching-from-gpu-passthrough-local-to-vnc-in-linux-pop/?do=findComment&comment=1184943
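    For reference, suspend to disk is enabled in the pm block of the xml; a minimal sketch:

    ```xml
    <pm>
      <suspend-to-mem enabled='yes'/>
      <suspend-to-disk enabled='yes'/>
    </pm>
    ```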

     

  9. In addition to hot22shot's suggestion, which I think is necessary (otherwise you could get a code 12 error in windows), pay attention to the layout in the guest os: you can't have the audio of gpu 2 in the same bus and slot as the video of gpu 1. Moreover, the addresses and multifunction attributes are in the wrong place.

    So change it to this:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/>
        </hostdev>

     
