DrMucki

Everything posted by DrMucki

  1. Thank you. I will give it a try.... please stand by 🙂 this will take a while.... Doing this while working from home.... but now breakfast....
  2. I just tried. I did it on the Unraid server using the terminal: I copied your .aml file to the EFI folder, unmounted it and rebooted. Am I doing this correctly? I always switch back to VNC, boot the VM again, make the changes in OC, power down the VM, edit the VM to pass the card through with the vBIOS, run the helper script and start the machine. But then the only way to check is TeamViewer, and that is not coming up again, so no success with your first file. Where can I check whether the path is correct? Can I do this within the VM? (I don't think so, because I'm not getting into the VM with the card passed through.) Meanwhile I will try the other file you provided. Thanks for your help so far.
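     One minimal way to check the path without booting the VM, assuming the card sits at 01:00.0 as in the follow-up post below: run these from the Unraid terminal, not inside macOS.

         lspci -nn | grep -i vga                                    # confirm the GPU address and its [1002:6811] ID
         cat /sys/bus/pci/devices/0000:01:00.0/firmware_node/path   # prints the ACPI path the SSDT has to use, e.g. \_SB_.PCI0.GPP0.X161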
  3. Just started from scratch and the VM is working again. I also managed to mount the disk, but that ended up with the Apple logo stuck on screen... so starting from scratch only costs about an hour. But I did not manage to get it started with the AMD card passed through. Here is what I did to try to get it to work:
     1. Found the device ID of my GPU: Curacao PRO [Radeon R7 370 / R9 270/370 OEM] [6811]
     2. Discovered the firmware path: cat /sys/bus/pci/devices/0000:01:00.0/firmware_node/path -> \_SB_.PCI0.GPP0.X161
     3. Edited the provided SSDT-GPU-SPOOF.dsl file: a) changed the firmware path, which meant editing it twice, in the "External" declaration and in the "Scope"; b) changed the spoofed device-id to 0x11, 0x68 (see the byte-order note after this post); c) renamed the model. (Did I forget anything here or make a mistake? I attached the file at the end.)
     4. Downloaded MaciASL, double-clicked my .dsl file and saved it as an .aml file.
     5. Opened OpenCore Configurator, mounted the EFI partition and opened it. Looked for the config.plist and double-clicked it to open it with OC.
     6. On the first page I saw several .aml files, so I dragged and dropped my .aml file there, saved the config.plist and unmounted the partition. (I am not sure about this step, so I just did it as described; it may be wrong.)
     7. Did a shutdown and started the VM again. I did not change anything in the VM settings, because I wanted to see whether the VM still comes up. It does, and it started without any problems. TeamViewer starts automatically and I was able to get in via TeamViewer. OK, shut down.
     8. Edited the VM: changed graphics to the AMD Radeon, added the vBIOS, saved the changes, ran the helper script and started the VM again. The VM starts, but there is no way to get in via TeamViewer.
     9. Changing back to VNC (after editing the known bug in the XML file) brings the machine back, but with no AMD GPU.
     What am I doing wrong? Thank you for your help!
     SSDT-GPU-SPOOF.aml SSDT-GPU-SPOOF.dsl config.plist config.original.plist
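     A quick sanity check for step 3b: the spoofed device-id is the PCI device ID written low byte first, which is why 0x6811 becomes 0x11, 0x68. From the Unraid terminal (still assuming the card at 01:00.0) both values can be read straight from sysfs:

         cat /sys/bus/pci/devices/0000:01:00.0/vendor   # 0x1002 (AMD)
         cat /sys/bus/pci/devices/0000:01:00.0/device   # 0x6811 -> entered as the byte pair 0x11, 0x68 in the SSDT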
  4. I totally messed up my config.plist and the VM is not booting anymore. I have a copy of my old config.plist, but I don't know how to mount the proper image or where it is. I don't get into the boot menu, because I configured OpenCore to boot directly into the image. Any help appreciated.
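     A minimal sketch of one way to get at the config.plist from the Unraid terminal while the VM is shut down, assuming the OpenCore boot disk is a raw image (Macinabox normally places an *opencore*.img next to the vdisk under /mnt/user/domains/<vm name>/; the file name and loop device below are examples, so adjust them):

         ls /mnt/user/domains/*/ | grep -i opencore                    # locate the OpenCore image
         losetup -fP /mnt/user/domains/MacinaboxCatalina/Macinabox-opencore.img
         losetup -l                                                    # note which /dev/loopX was assigned
         mkdir -p /tmp/oc && mount /dev/loop0p1 /tmp/oc                # the EFI (FAT) partition is the first one
         cp /path/to/backup/config.plist /tmp/oc/EFI/OC/config.plist   # placeholder path -- point it at your saved copy
         umount /tmp/oc && losetup -d /dev/loop0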
  5. Thank you very much! I only saw the first link you provided, but the second one was the so-called "missing link".
  6. I am trying to pass through a Radeon R7 370 GPU (2 GB, vendor/device ID 1002:6811) to a macOS Catalina VM created with Macinabox. The VM was created successfully and runs fine with the standard options (2 cores, 4 GB). I installed TeamViewer to reach the machine from elsewhere. With VNC set as graphics everything works. BUT: when trying to pass through the GPU, the VM does not boot up correctly and TeamViewer does not come up. Of course I used the helper script and a freshly dumped vBIOS for the card (by the way, a Windows VM works fine with this vBIOS). When changing back to VNC it ends up with "Guest has not initialized Display (yet)". To get rid of this you have to edit the XML manually. You have to change this part of the XML:
     </graphics>
     <video>
       <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
       <address type='pci' domain='0x0000' bus='0x06' slot='0x01' function='0x0'/>
     </video>
     and set the address back to bus='0x00' slot='0x02' function='0x0', but you must not run the helper script afterwards. Then we have a normal state again. This seems to be a bug; it always happens once you try to change the graphics. But the main thing is that I am not able to pass through the AMD GPU! I attached two log files (one using VNC graphics, one using the GPU passthrough); for the latter I also attached the XML. That leaves me with a few questions: 1. Is there a possibility to get this working with my AMD GPU? 2. Will there be a fix for the helper script that changes the bus to a different value (has somebody reported that as a bug)? 3. If nothing works, is there a suggestion for a 4 GB GPU that works with Macinabox and Windows without any problems? Thank you very much for your advice in advance. Marc logVNC.txt logGPU.txt GPU.xml
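     The same fix can also be made from the Unraid terminal instead of the GUI XML view; this is only a sketch and the VM name is an example, so substitute your own. virsh edit opens the domain XML in an editor, and the change holds as long as the helper script is not run again afterwards:

         virsh edit "Macinabox Catalina"                                 # fix the qxl <address> line: bus='0x00' slot='0x02'
         virsh dumpxml "Macinabox Catalina" | grep -A 2 "type='qxl'"     # confirm the edit stuck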
  7. May I ask a simple question, please? Every time I start the VM (Catalina) I have to choose the proper partition in a menu (the UEFI boot menu?). I can choose between the Catalina partition, Recovery, UEFI Shell, Shutdown and Reset NVRAM. Is there any possibility to get rid of this so that the VM starts without interaction? Otherwise I cannot use anything other than the built-in VNC. Thank you for your help.
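     If this menu is the OpenCore boot picker (the Reset NVRAM entry suggests it is), one possible approach is to adjust Misc > Boot in config.plist. The sketch below runs inside the macOS VM over VNC and assumes the EFI partition that holds the OpenCore files mounts as /Volumes/EFI; the path and the 5-second value are examples:

         sudo diskutil mount EFI                                         # mount the EFI partition inside the VM
         sudo /usr/libexec/PlistBuddy -c "Set :Misc:Boot:Timeout 5" /Volumes/EFI/EFI/OC/config.plist
         # If the key does not exist yet, add it instead:
         # sudo /usr/libexec/PlistBuddy -c "Add :Misc:Boot:Timeout integer 5" /Volumes/EFI/EFI/OC/config.plist
         # Or hide the picker entirely:
         # sudo /usr/libexec/PlistBuddy -c "Set :Misc:Boot:ShowPicker false" /Volumes/EFI/EFI/OC/config.plist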
  8. Trying to dump a BIOS from a Ryzen 4650G APU was not possible. It was the only graphics device in the system and was bound to vfio, but it still gave errors. Is it possible to dump the vBIOS from an APU? Has anybody tried to do so? Can anybody help or provide a dump of a Ryzen 4650G APU? I added a vBIOS from my "old" AMD Radeon R7 370 with 2 GB: AMD Radeon R7-370_2G.rom
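     One quick check from the Unraid terminal of whether there is anything to dump at all; 05:00.0 is the APU address reported in the other post, and integrated Renoir graphics often keep their vBIOS inside the system firmware rather than behind a PCI ROM BAR:

         lspci -v -s 05:00.0 | grep -i "expansion rom"     # no output usually means the device exposes no ROM to dump
         ls -l /sys/bus/pci/devices/0000:05:00.0/rom       # the sysfs rom file the dump script appears to read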
  9. Hi there. I saw many videos from Spaceinvader One and read along in this forum, but I have not found a solution so far. I am just testing Unraid 6.8.3 and everything works fine. I set up a Windows 10 VM but wanted to improve the video quality, to use this VM for games and video editing. My question is: is it possible to pass through the graphics of the AMD Ryzen APU to the Windows VM? I have an AMD Ryzen 5 Pro 4650G APU with on-chip graphics and wanted to pass its graphics through to a VM; there is no other GPU in the server. System Devices shows the card as: IOMMU group 14: [1002:1636] 05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev d9), and it is the only device in this group. OK, so far so good. I tried to change the graphics from VNC to AMD Renoir (05:00.0) and started the VM. The machine does not come online; I cannot connect via TeamViewer or VNC. The log file (1) is at the end. I do not have a BIOS file; there is none on TechPowerUp. So I tried to dump the BIOS file with the script provided by Spaceinvader One. I installed the script and it gave an error that the BIOS couldn't be dumped and that I should install the vfio config plugin to bind the graphics to vfio. I installed the plugin and ran the script, which binds the card to vfio. I rebooted the system (by the way, I only use web access; there is no monitor attached to the server). Retrying the BIOS dump gave the same error (Error 1). I tried to restart the VM with this binding and it resulted in the same problem: the VM started but was not accessible. Any help is welcome... or do I just have to buy another GPU? Thank you for your help in advance! Marc
     Error 1:
     "Script location: /tmp/user.scripts/tmpScripts/Dump VBIOS/script
     Note that closing this window will abort the execution of this script
     You have selected this device to dump the vbios from
     05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Renoir (rev d9)
     This does look like a valid GPU to me. Continuing .........
     Checking if location to put vbios file exists
     Vbios folder already exists
     I will try and dump the vbios without disconnecting and reconnecting the GPU
     This normally only works if the GPU is NOT the Primary or the only GPU
     I will check the vbios at the end. If it seems wrong I will then retry after disconnecting the GPU
     Defining temp vm with gpu attached
     Domain dumpvbios defined from /tmp/dumpvbios.xml
     Starting the temp vm to allow dump
     Domain dumpvbios started
     Waiting for a few seconds .....
     Stopping the temp vm
     Domain dumpvbios destroyed
     Removing the temp vm
     Domain dumpvbios has been undefined
     /tmp/user.scripts/tmpScripts/Dump VBIOS/script: line 298: rom: Permission denied
     Okay dumping vbios file named AMD-APU-Ryzen4650G.rom to the location /mnt/user/isos/vbios/
     cat: rom: No such file or directory
     Um....
     somethings gone wrong and I couldn't dump the vbios for some reason
     Sometimes when this happens all we need to do to fix this is 'stub' or 'bind to the vfio' the gpu and reboot the server
     This can be done in Unraid 6.8.3 with the use of the vfio config plugin or if you are on Unraid 6.9 or above it can be done directly from the gui in Tools/System Devices
     .....So please do this and run the script again"
     LOGFILE 1:
     2020-12-30 14:03:05.783+0000: starting up libvirt version: 5.10.0, qemu version: 4.2.0, kernel: 4.19.98-Unraid, hostname: Tower
     LC_ALL=C \
     PATH=/bin:/sbin:/usr/bin:/usr/sbin \
     HOME='/var/lib/libvirt/qemu/domain-32-Windows 10' \
     XDG_DATA_HOME='/var/lib/libvirt/qemu/domain-32-Windows 10/.local/share' \
     XDG_CACHE_HOME='/var/lib/libvirt/qemu/domain-32-Windows 10/.cache' \
     XDG_CONFIG_HOME='/var/lib/libvirt/qemu/domain-32-Windows 10/.config' \
     QEMU_AUDIO_DRV=none \
     /usr/local/sbin/qemu \
     -name 'guest=Windows 10,debug-threads=on' \
     -S \
     -object 'secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-32-Windows 10/master-key.aes' \
     -blockdev '{"driver":"file","filename":"/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
     -blockdev '{"driver":"file","filename":"/etc/libvirt/qemu/nvram/f0686497-8ef9-51e9-56fa-27f8c5a495d6_VARS-pure-efi.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
     -machine pc-i440fx-4.2,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
     -cpu host,hv-time,hv-relaxed,hv-vapic,hv-spinlocks=0x1fff,hv-vendor-id=none \
     -m 4096 \
     -overcommit mem-lock=off \
     -smp 8,sockets=1,cores=8,threads=1 \
     -uuid f0686497-8ef9-51e9-56fa-27f8c5a495d6 \
     -display none \
     -no-user-config \
     -nodefaults \
     -chardev socket,id=charmonitor,fd=33,server,nowait \
     -mon chardev=charmonitor,id=monitor,mode=control \
     -rtc base=localtime \
     -no-hpet \
     -no-shutdown \
     -boot strict=on \
     -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
     -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
     -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
     -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
     -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 \
     -blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
     -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=libvirt-3-format,id=virtio-disk2,bootindex=1,write-cache=on \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/Win10_1909_German_x64.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
     -device ide-cd,bus=ide.0,unit=0,drive=libvirt-2-format,id=ide0-0-0,bootindex=2 \
     -blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.173-2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
     -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
     -device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
     -netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 \
     -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:db:42:da,bus=pci.0,addr=0x2 \
     -chardev pty,id=charserial0 \
     -device isa-serial,chardev=charserial0,id=serial0 \
     -chardev socket,id=charchannel0,fd=38,server,nowait \
     -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
     -device usb-tablet,id=input0,bus=usb.0,port=1 \
     -device ich9-intel-hda,id=sound0,bus=pci.0,addr=0x9 \
     -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 \
     -device vfio-pci,host=0000:05:00.0,id=hostdev0,bus=pci.0,addr=0x5 \
     -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
     -msg timestamp=on
     2020-12-30 14:03:05.783+0000: Domain id=32 is tainted: high-privileges
     2020-12-30 14:03:05.783+0000: Domain id=32 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
     2020-12-30T14:03:19.211200Z qemu-system-x86_64: warning: guest updated active QH
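     The "line 298: rom: Permission denied" above appears to be the script failing at the sysfs rom interface, which can also be tried by hand. A minimal sketch of that manual attempt from the Unraid terminal, assuming the APU is still at 05:00.0 and bound to vfio; with an APU this often fails regardless, because the vBIOS typically lives in the system firmware rather than behind a ROM BAR:

         cd /sys/bus/pci/devices/0000:05:00.0
         echo 1 > rom                                            # allow reads from the ROM BAR
         cat rom > /mnt/user/isos/vbios/AMD-APU-Ryzen4650G.rom   # copy it out; an error here means no usable ROM is exposed
         echo 0 > rom                                            # lock it again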