ghost82

Members
  • Posts: 2079
  • Days Won: 12

ghost82 last won the day on January 8



ghost82's Achievements: Rising Star (9/14)
Reputation: 401

Community Answers (29)

  1. No, because there is no bootloader able to run it, and the apple m1 arm cpu is proprietary, so qemu is not able to emulate it.
  2. Maybe related to this...(?) https://mail.coreboot.org/hyperkitty/list/seabios@seabios.org/thread/72LFLT7KFMWE4GVZHWF4G34PKLVG5LRD/ I've never used seabios, but what do you see when you press esc to access seabios? What boot options do you have?
  3. I would say no (or at least not many advantages): efi is just more recent than legacy bios, but I would prefer ovmf uefi because most recent oses 'prefer' uefi. This case is very particular; it's very strange that it boots bare metal and fails with that bcd error in a vm...
  4. Hi! What do you mean by "error"? If you are referring to your screenshot, those are not errors: it's a verbose mode telling you that the bdsdxe driver is first loading and then starting the Boot0002 entry. If your windows 11 starts when booted directly from the ovmf menu, try to add the boot order line to the nvme block, something like this:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x00' slot='0x1f' function='0x5'/>
       </source>
       <boot order='1'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
     </hostdev>

     Obviously apply it to the proper block. Also delete this line to make it work:

     <boot dev='hd'/>
  5. My first question is: how is windows installed on that nvme, uefi or legacy bios? You are using seabios for the vm (legacy bios); switch to ovmf if windows was installed in uefi mode. This error can show up if the bios type is wrong.
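A quick way to answer that question is to look at the partition table type of the nvme on the host: gpt almost always means a uefi install, while dos/mbr means legacy bios. A minimal sketch, assuming the device is /dev/nvme0n1 (check yours with lsblk); the helper name classify_ptable is made up for illustration:

```shell
#!/bin/sh
# Map a partition table type (as printed by `lsblk -dno PTTYPE <dev>`)
# to the likely windows install mode and the matching vm firmware.
classify_ptable() {
    case "$1" in
        gpt)        echo "UEFI (use OVMF)" ;;
        dos|msdos)  echo "Legacy BIOS (use SeaBIOS)" ;;
        *)          echo "unknown" ;;
    esac
}

# On the unraid host you would run (hypothetical device name):
#   classify_ptable "$(lsblk -dno PTTYPE /dev/nvme0n1)"
classify_ptable gpt
```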
  6. There are kernel panics related to the amd gpu. You are running unraid 6.9.2 and you are talking about a "6900": is it the 6900xt? That card is pretty new, and with an old unraid version the drivers may not play well; try to upgrade to 6.10.1. Once upgraded, check that the gpu (audio, video, etc.) is bound to vfio.
  7. Hi, from what I can see you did it pretty well (happy that someone reads and tries himself/herself before posting something): efifb off (maybe not needed since you are booting unraid as legacy, but leave it where it is), multifunction for the gpu, gpu isolated, allow unsafe interrupts. So, I can see a couple of things that could cause the issue; I would try them in order (Parsec vm):
     1. Change the bus of the gpu (audio and video) in the target vm: you are setting it at bus 0 in a q35 machine. Bus 0 means "built-in", i.e. attached to pcie-root, but the gpu is not built in and the driver could cause issues. Change from:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/vm/domains/VBios/NVIDIA.GTX980Ti.mod.rom'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
        </hostdev>

        To:

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/vm/domains/VBios/NVIDIA.GTX980Ti.mod.rom'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x06' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x26' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x06' function='0x1'/>
        </hostdev>

     2. Double check again the vbios file you are passing through: dump your own and/or check the header for anything to remove with a hex editor.
     3. Some builds do not work properly when the gpu to pass through is in the first (top) gpu slot: if you can, move it to another slot.
  8. v. 2.55 was designed for oc 0.7.7 so it fits, but if you broke your efi in some way, it's not the configurator that will recover it. However, you can replace your current opencore image with a fresh one: https://github.com/SpaceinvaderOne/Macinabox/raw/master/bootloader/OpenCore-v16.img.zip Extract it from the zip and replace the one at /mnt/user/domains/Macinabox Monterey/Monterey-opencore.img
  9. None, really...only textedit should be recommended. However, if you want to use opencore configurator, with the risk of breaking your efi, the version to use depends on the opencore bootloader version. The original macinabox includes opencore v.0.7.7, so opencore configurator 2.55.0.0 should work as expected: https://mackie100projects.altervista.org/occ-changelog-version-2-55-0-0/ Obviously, if you updated the opencore bootloader, you should read the configurator changelog and download a version compatible with your bootloader version.
  10. Thanks for testing. I read the link in the first post again: that user solved it not by putting it on bus 0, but the opposite...sorry...you already had it on a bus different from 0, so no more ideas...but luckily wifi is not a must for this vm.
  11. About the wifi issue: maybe changing the target bus, as pointed out by you in your first message, could fix it. Unfortunately you deleted the diagnostics file, so I cannot check at what address the wifi is, but from your gif it should be at source 03:00.0 and target 07:00.0. You have another device passed through at source 01:00.0 (maybe a usb controller?). If the wifi is at 03:00.0 in the host, you can try to make it "built-in" in the vm (i.e. change the target address to bus 0); whether it works or not will depend on how the windows driver for the wifi behaves. So you could change from this:

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </hostdev>

      to this:

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </hostdev>

      This puts it on bus 0, slot 2, which should be free.
  12. I think I found the issue: when you edit a vm in the xml view, make 2 changes. At the top of the xml, change this line:

      <domain type='kvm'>

      to this:

      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

      Then add the block at the bottom. Once you have made your changes, save the xml and it will be saved correctly. I just tried this with unraid 6.10.1, and through some odd behavior it strips the xmlns line with the validation schemas... Even odder, it exits as if you had saved, but it didn't. Using the virsh edit command fails too (obviously, because the xml lacks the validation schemas as well), but at least it points to an understandable error.
  13. @astronax I think I found the culprit for the "xml is not saving" issue. I had a spare usb key with unraid 6.10.1 and an additional pendrive, so I made the array on that pendrive just to test the qemu/libvirt behavior. A note on my post above: before using the virsh command, one should export nano as the default editor with this command (in terminal):

      export EDITOR='/usr/bin/nano'

      then run the virsh command.
      However, running the virsh command is not needed: the unraid gui in xml view can be used. The issue is that the domain type line is stripped by unraid. When you view your xml in unraid, make 2 changes:
      1. At the top you will see a line with this:

         <domain type='kvm'>

         Change it to:

         <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

      2. Then at the bottom, before the </domain> tag, add:

         <qemu:capabilities>
           <qemu:del capability='usb-host.hostdevice'/>
         </qemu:capabilities>

      This time it will save. I found this because the virsh command failed to validate too, since the qemu schema was not defined.
      PS: not sure if this will solve your bluetooth/wifi issue, just try...
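Put together, a trimmed-down sketch of where the two pieces sit in the domain xml (the vm name is hypothetical; everything between the two changes is your existing definition, left unchanged):

```xml
<!-- Sketch only: all other elements omitted -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>MyWindowsVM</name>  <!-- hypothetical vm name -->
  <!-- ...the rest of your existing vm definition stays as-is... -->
  <qemu:capabilities>
    <qemu:del capability='usb-host.hostdevice'/>
  </qemu:capabilities>
</domain>
```

The xmlns:qemu attribute is what declares the qemu: namespace, so without it libvirt's validation rejects (or unraid silently strips) the qemu:capabilities block at the bottom.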
  14. It seems another user is having issues saving custom things in his xml, starting with 6.10... Can you try the virsh command, i.e. in a terminal: virsh edit 'name of vm', to see if it works there?
  15. Don't take it badly, I should have asked... It seems you are not alone: someone else is having issues saving the xml with custom things... I cannot access my unraid to test, sorry, I should have tested myself first. You can open a bug report in the proper section. In the meantime, yes, you can try the virsh command to edit your vm, from an unraid terminal:

      virsh edit 'name of the vm'

      Make your edits, then press ctrl+o to save. Rerun the command and check that your edits were kept.