ghost82

Everything posted by ghost82

  1. You can use opencore configurator to mount efi partitions with no issues; just don't open the config.plist with the configurator.
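     If you prefer the terminal over the configurator, a minimal sketch for mounting an efi partition from mac os (the disk identifier is an assumption, check with diskutil list first):
        diskutil list                       # find the EFI slice (typically diskXs1)
        sudo diskutil mount /dev/disk0s1    # mounts it at /Volumes/EFI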
  2. btw there's no need to edit anything in the default config.plist, it should be able to boot correctly. You will need to edit it if you customize the hardware, for example if you start passing through devices and need patches/injections to make them work.
  3. Depending on the method you chose, the container has to download about 11 GB or a few hundred MB, then it has to initialize the installation. No need to fix anything, just wait.
  4. Correct. Not the install media, but the opencore vdisk (downloaded fresh); boot from it. You should be able to see the mac os disk and boot it; once booted, mount both efi partitions (that of the mac os disk and that of the fresh opencore bootloader) and replace the config.plist inside the /EFI/OC folder with the one copied from the fresh opencore bootloader. My advice is to use textedit if you want to edit the config.plist. Others suggested using opencore configurator v. 2.19.1.0, but I don't think it's the correct version, since it was released in december 2020 and the opencore image I pulled into macinabox is v. 0.7.0, released in june 2021. I prefer textedit, so I can blame only myself if I mess up the configuration.
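     A rough sketch of the copy step from the mac os terminal (the disk identifiers are assumptions, check diskutil list; when two volumes named EFI are mounted the second one typically shows up as "EFI 1"):
        sudo diskutil mount /dev/disk0s1    # efi of the mac os disk -> /Volumes/EFI
        sudo diskutil mount /dev/disk2s1    # efi of the fresh opencore vdisk -> /Volumes/EFI 1
        cp "/Volumes/EFI 1/EFI/OC/config.plist" /Volumes/EFI/EFI/OC/config.plist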
  5. If you don't want to start again from scratch you may follow these steps: https://forums.unraid.net/topic/84601-support-spaceinvaderone-macinabox/?do=findComment&comment=1041416
  6. I added a link in my reply above; read my comment in that link and also the next answers if you want to use the configurator. SIO used opencore configurator but failed to emphasize that you need a very specific version of the configurator if you want to use it.
  7. 2. the installation iso is uefi capable (FS0 is there), so no need to change to seabios; the windows installer and the virtio iso are already attached to the virtual sata controller, so no need to change the controller to which vdisk1 is attached. The issue is somewhere else.
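     If the installer doesn't start on its own you can also launch it manually from the uefi shell; a sketch, assuming FS0 maps to the windows iso:
        Shell> FS0:
        FS0:\> cd EFI\BOOT
        FS0:\EFI\BOOT> BOOTX64.EFI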
  8. Hi! I looked at your xml and apart from the boot order lines everything seems ok. Not sure about the ram: you specified 2 GB, which is the minimum requirement for windows 10; if you can, you should increase that value a bit. Also for the cpu, I would try to assign 2 cores if possible. Note that the ram/cpu changes may not be needed, it's just a thing to try. Once in the uefi shell (the screenshot you attached) type "exit", the system should reboot and the "press any key to boot from cd" prompt should appear. What happens if you press a key? From your description it reverts back to the uefi shell?
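     For reference, ram and cpu are set near the top of the vm xml; a sketch with assumed values (4 GiB / 2 cores), your existing lines may use KiB units and a different topology:
        <memory unit='GiB'>4</memory>
        <currentMemory unit='GiB'>4</currentMemory>
        <vcpu placement='static'>2</vcpu>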
  9. no, apart from the last two lines indicating that the vm is receiving a shutdown command from libvirt for whatever reason...
  10. Does this refer to unraid booted in legacy bios mode? Can you attach the output of "cat /proc/iomem"?
  11. The screenshot above doesn't show any error. So you are able to boot unraid in legacy bios mode, run the windows 10 vm and get a video signal at 800x600, but in the gpu device status some problem is reported (which problem)? 1. is your vm configured with ovmf? 2. are you passing through all the components of the gpu? video, audio, (usb controller)? If you don't know what I'm talking about, attach the diagnostics file. 3. are you using a proper vbios for the gpu? Dump your own, do not use a downloaded one.
  12. The other one IS in use (I assume the ethernet at 5:00.0 is eth1). Any reason you are trying to passthrough the ethernet? Could you consider using the virtual bridged network br1 (the xml snippet does so..)? So either passthrough the ethernet by attaching it to vfio, or use a bridge: the difference is that in the first case the vm will see the real ethernet card, while in the other case a virtual controller is created that uses the bridge connectivity (see the sketch after this post).
      Passthrough case: correct, but starting from unraid 6.9 the setup in the System devices page will bind the device both on vendor/device ids and on domain:bus:slot.function, so in your case it will be set up like:
      BIND=0000:05:00.0|8086:10d3
      The issue here is that the ethernet will always be grabbed by unraid even if it's inactive, and apparently you cannot isolate it from the gui. If I were you I would:
      1. backup the unraid usb, so you can restore it if something goes wrong
      2. open config/vfio-pci.cfg with a text editor on the unraid usb stick
      3. add BIND=0000:05:00.0|8086:10d3
      If you have multiple devices isolated just append them, for example:
      BIND=0000:05:00.0|8086:10d3 0000:04:00.0|10de:100c 0000:04:00.1|10de:0e1a
      Save and reboot, and your eth1 should now be isolated, without losing eth0 connectivity for unraid.
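     For the bridge alternative, a minimal interface sketch for the vm xml (assuming br1 exists on the host; libvirt generates the mac address if you omit it):
        <interface type='bridge'>
          <source bridge='br1'/>
          <model type='virtio'/>
        </interface>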
  13. Sorry but I don't understand: bios with uefi enabled, unraid boots, otherwise bios screen; uefi is disabled in bios, and unraid boots? It's not what you were writing in the sentence above. Which problems? uefi/legacy?...
  14. If you are booting unraid in uefi mode, most probably the gpu is in use by unraid itself, by efifb; if you want to check, run "cat /proc/iomem" in a terminal and see why qemu cannot write to those addresses highlighted in yellow. If you find that efifb is using them, one way is to disable efifb in syslinux: video=efifb:off Note that you will need remote access to unraid because you will not have a video output anymore.
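     A sketch of what the boot entry in /boot/syslinux/syslinux.cfg looks like with the flag added (your existing append line may already carry other options, just add video=efifb:off to it):
        label Unraid OS
          menu default
          kernel /bzimage
          append video=efifb:off initrd=/bzroot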
  15. I think the virtio-scsi controller is not supported by mac os; delete it, you are not using it. This is an odd number of cores, apple may not like it. Either use a proper number of cores (16/18/24), or follow the examples reported here (irregular topology, use 24 and disable 4 cores): https://github.com/Leoyzen/KVM-Opencore I assume you copied the efi folder to the mac os disk before deleting the opencore disk.
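     For reference, the controller line to remove from the xml typically looks like this (the index value in your xml may differ):
        <controller type='scsi' index='0' model='virtio-scsi'/>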
  16. I think you can see the version during boot: Example image: If you need a latest/custom/specific version of seabios, just compile it and set the vm xml to point to the compiled bios:
      <os>
        ...
        <loader>/path/to/compiled/seabios/bios.bin</loader>
        ...
      </os>
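     A minimal sketch of compiling seabios from source (repo url and output path are the upstream defaults as far as I know; adjust if they changed):
        git clone https://git.seabios.org/seabios.git
        cd seabios
        make
        # the compiled image ends up in out/bios.bin; point <loader> to it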
  17. Yes, sometimes creating a new vm from scratch is faster and easier. Happy you solved it one way or another in the end.
  18. Download the gparted live iso and add that iso to the vm you are using, then boot from it (either change the boot order in the vm settings or in the vm bios). You will boot into the gparted live iso and you can resize the partitions you want. I just used the gparted live iso to:
      - convert mbr to gpt
      - create a bios_grub partition to boot legacy bios + gpt
      - create an efi partition to migrate from legacy bios to uefi
      - move the efi partition from "right" to "left"
      - resize (increase) the ext4 partition from 50 GB to 150 GB
      gparted is very easy to use, it has a nice gui. I don't know if there is any difference, but I prefer to use qemu-img to increase the disk size:
      qemu-img resize path/to/raw/img/vdisk.img +100G
      This will increase the size of a raw img by 100 GB. Make a backup first! Playing with partitions can destroy all your data.
  19. It's a crash in qemu itself, you can file a bug in the qemu bugtracker. I think no one here can help since unraid simply ships qemu. Maybe qemu doesn't like the pcie riser.
  20. But in your xml only 04:00.0 is passed through. It could be related to properly resetting the gpu. Try to replace this in your xml:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      </hostdev>
      With this:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
      </hostdev>
      Moreover, you have both vnc and gpu passthrough: I'm not sure it can be done, some report that it works, others that it doesn't. I would also delete this:
      <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
        <listen type='address' address='0.0.0.0'/>
      </graphics>
      <video>
        <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </video>
      If you want vnc, install a vnc server inside the windows os or use remote desktop. Since you have a br0 network the vm is reachable from the same lan.
  21. I don't understand.. did you change the boot options by enabling csm? Does your unraid usb stick have the EFI folder? (note: EFI and not EFI-). I would just backup the config folder on the unraid usb stick, delete everything, copy the new files on it, run the make bootable script and allow uefi boot when asked, copy back the config folder, and disable csm in the bios. Is unraid your only os or are you booting other bare metal oses? If so, and if they were installed in legacy bios mode, you won't be able to boot them with csm disabled until you convert the partition scheme from mbr to gpt and make a fat32 efi partition with the bootloader file(s) in it.
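     As an alternative to recreating the stick, you can just rename the folder on the flash and then disable csm; a sketch from the unraid terminal (the flash is mounted at /boot):
        mv /boot/EFI- /boot/EFI    # enables uefi boot on the unraid flash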
  22. Because you didn't apply the acs override patch, read my answer above; once acs override is applied reboot the server.
  23. post your diagnostics file. You also have 21:00.3. And you need the acs override enabled (both --> meaning downstream,multifunction), because your gpu is inside an iommu group which contains other devices than the gpu.
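     For reference, setting it to "Both" in the vm manager settings corresponds to this kernel parameter in syslinux (a sketch; your append line may carry other flags too):
        append pcie_acs_override=downstream,multifunction initrd=/bzroot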
  24. Try this:
      <controller type='scsi' index='4' model='virtio-scsi'/>
      <hostdev mode='subsystem' type='scsi'>
        <source>
          <adapter name='scsi_host11'/>
          <address type='scsi' bus='0' target='0' unit='0'/>
        </source>
        <readonly/>
        <address type='drive' controller='4' bus='0' target='0' unit='0'/>
      </hostdev>
      Since you have the controller with index=4 I think you need to attach the device to controller=4. Let me know.