Posts posted by ghost82
-
-
Try increasing the RAM of the VM: 1GB doesn't seem appropriate, set it to at least 4GB.
Check the full report (screenshot) to see if it points to anything useful.
If it doesn't work, try disabling the NIC, i.e. delete:
<interface type='bridge'>
  <mac address='52:54:00:53:33:b7'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</interface>
-
Check in the guest vm if the virtual network adapter is there, open a terminal and type:
lspci
then press enter.
It should list the adapter at 02:01.0 if you didn't change your xml.
-
Well, if you don't want too much trouble, go with Nvidia, or go with an AMD series 6000 card (could be expensive depending on your budget).
Note that a kernel fix included in Unraid should fix some older AMD GPUs, but this may or may not work, depending on brand, firmware, revision, etc...
A quick search on Google, or even here in the forum, for "amd gpu reset bug" will turn up a lot of info.
If you are going to buy a second-hand Nvidia GPU, prefer one for which updated drivers still exist: old Nvidia GPUs without newer drivers (i.e. with older drivers) cannot be passed through to a VM unless you modify the xml to hide the hypervisor.
Only with newer drivers (by "newer" I mean from v. 465) did Nvidia allow its consumer GPUs (GeForce, Titan) to be passed through in VMs.
-
When searching for GPUs, consider the AMD reset bug: older AMD GPUs (earlier than the 6000 series) may not reset properly on VM shutdown/restart, and this requires the whole server to be restarted.
-
Not when the VM is running.
Moreover, when the VM is shut down you need to detach the GPU from the vfio driver to make it available again in the host.
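As a sketch, the rebind can be done with virsh from the host; the PCI address below is hypothetical, take yours from Tools -> System Devices or lspci:

```shell
# Hypothetical addresses: 0000:01:00.0 for the GPU, 0000:01:00.1 for its audio function.
# nodedev-reattach detaches the device from vfio and gives it back to the host drivers.
virsh nodedev-reattach pci_0000_01_00_0
virsh nodedev-reattach pci_0000_01_00_1
```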
-
5 hours ago, Lolight said:
as it's been shown to be the case by the above-mentioned anti-Unraid redditor
Can you send me the reddit link, in PM if you want? Just curious about what is written there.
Update: found it, but it doesn't seem sponsored in any way. It seems a simple review, not good, not too bad.
-
On 11/6/2022 at 1:21 PM, alturismo said:
i added it upper devices start tag where its also persistennt (if i put it in the end inside the devices block its wiped out)
Sorry, ignore this: the position you wrote is the right one!
As for the other issue, I'm sorry, I didn't try it but only reported some findings.
-
-
-
13 hours ago, 00100100 said:
How to I edit my unraid config to disable it from using the GPU on boot up if that is an issue?
It's normal that you have some video output when unraid boots, vfio attaches after.
13 hours ago, 00100100 said:
Am I editing my xml incorrectly above?
No, multifunction is applied correctly.
13 hours ago, 00100100 said:
Did I hurt something by letting the VM boot directly from the PC outside of KVM?
You didn't
Attach diagnostics and the vbios file you are using. Note that if you dump the vbios using GPU-Z you still need to remove the header.
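A minimal sketch of stripping the header: a valid vbios image starts with the 0x55 0xAA ROM signature, so everything GPU-Z put before that signature can be cut off. File names and the demo input here are hypothetical; always keep a backup of the original dump, and double-check the result in a hex editor.

```shell
# Demo input: in reality this would be your GPU-Z dump (\125\252 is octal for 0x55 0xAA)
printf 'GPUZHDR\125\252VBIOSDATA' > gpuz_dump.rom

# Find the byte offset of the first 0x55 0xAA signature...
offset=$(grep -obUaP '\x55\xaa' gpuz_dump.rom | head -n1 | cut -d: -f1)

# ...and write out the file starting from that offset
dd if=gpuz_dump.rom of=vbios_clean.rom bs=1 skip="$offset" 2>/dev/null
```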
-
7 hours ago, Mattyice said:
everything works except for the network settings
Is the issue that you don't have internet inside the vm, or the message "guest agent not installed"?
If it's the latter, just install the qemu guest agent into your linux vm.
The package name can differ between linux distributions; for example it can be qemu-guest-agent.
After installation enable it, for example:
systemctl enable qemu-guest-agent
systemctl start qemu-guest-agent
Then check that it's running correctly, for example:
systemctl status qemu-guest-agent
If you don't have internet, change the network type from e1000 to virtio, virtio-net or e1000-82545em.
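For reference, a sketch of where the network type is set in the vm xml; the mac address and bridge name here are placeholders, keep your own:

```xml
<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source bridge='br0'/>
  <!-- change 'e1000' to 'virtio', 'virtio-net' or 'e1000-82545em' -->
  <model type='virtio'/>
</interface>
```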
-
-
attach diagnostics
-
You edit it from the unraid gui, but editing the file is right too. Can you attach the file? I think there could be some incorrect formatting.
-
As you can see from your command output, efifb attaches to your gpu. Looking at your syslog, you are not applying the kernel arguments you pasted in your post #1.
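For reference, on unraid the kernel arguments go on the append line of /boot/syslinux/syslinux.cfg; this snippet is only illustrative, keep the other parameters you already have:

```
label Unraid OS
  menu default
  kernel /bzimage
  append video=efifb:off initrd=/bzroot
```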
-
18 minutes ago, alturismo said:
may a question ahead, while i trigger it inside the VM, would this also trigger it then ?
Are you asking if it will work if you hibernate the vm from inside the guest instead of from the host? My reply is... I don't know.
But several users reported it working with virsh commands; dompmsuspend and dompmwakeup are virsh commands to be given from the host, and the guest requires the guest agent installed.
Here are the posts where I got some info:
https://www.reddit.com/r/VFIO/comments/568mmt/saving_vm_state_with_gpu_passthrough/
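A sketch of those host-side commands, assuming a vm named "win10" (replace with your vm name) and the guest agent running in the guest:

```shell
virsh dompmsuspend win10 mem   # suspend to RAM; use "disk" for suspend to disk
virsh dompmwakeup win10        # wake the vm up again (after a "mem" suspend)
```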
-
-
-
6 minutes ago, alturismo said:
nice idea, sadly not practical here with GPU passthrough's in my VM's
the VM's like to freeze ... and even if not, they stay vfio bound as the VM is not completely off, so i can't set them in persistence mode ... so the machine has more power consumption than otherwise ...
Mmm... this shouldn't happen. If hibernation is set to disk, the vm should report as shut down and the gpu should be free for other uses.
Did you enable suspend to disk in the xml?
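For reference, a sketch of the libvirt elements that allow suspend-to-disk, placed inside the <domain> block of the vm xml:

```xml
<pm>
  <suspend-to-mem enabled='no'/>
  <suspend-to-disk enabled='yes'/>
</pm>
```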
Check this, might help:
-
-
-
Attach diagnostics and the output of the command "cat /proc/iomem".
-
In addition to hot22shot's suggestion, which I think is necessary (otherwise you could get a code 12 error in windows), pay attention to the layout in the guest os: you can't have the audio of gpu 2 in the same bus and slot as the video of gpu 1. Moreover, the addresses and multifunction attributes are in the wrong place.
So change with this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/>
</hostdev>
-
6 hours ago, ab5g said:
Absolutely no issues with 6.10.3.
This could suggest a kernel issue:
6.10.3 --> kernel 5.15.46
6.11.x --> kernel 5.19.x
-
Me too, and it could be a great idea to share results anonymously.
For what it's worth, all I wrote has already been written here in the forum.
-
-
-
Restore the syslinux parameters you need, in particular video=efifb:off.
Set PCIe ACS override to "both", restart the server and see if iommu group 16 is split, with your video and audio in a group without anything else.
-
Attach new diagnostics.
Fix the syslinux line: video=efifb:off is repeated twice.
-
Read above, it seems virtualization has been reset to off in your bios, or you changed it.
-
One step back... sorry, looking at your diagnostics it seems iommu is not available (?). Make sure virtualization is enabled in the bios (VT-x + VT-d on Intel, SVM + AMD-Vi on AMD).
-
Tools -> System Devices; put checkmarks and reboot
NVIDIA Corporation GP107 [GeForce GTX 1050] (rev a1) - Multiple Monitor Support Not Working
in VM Engine (KVM)
Posted
I would suggest completely uninstalling the nvidia drivers with DDU and trying to install them again, maybe testing different versions, starting with the version that works on bare metal. Make sure to first delete all nvidia devices (even hidden ones) in windows device manager too.
The vbios should be ok, it contains valid legacy and efi vbioses.