ghost82

Members · Posts: 2,726 · Days Won: 19

Posts posted by ghost82

  1. On 5/15/2024 at 10:17 PM, plantsandbinary said:

    but did either of you use Macinabox to generate the Mac VM you're using?

    No, I'm currently running my primary macOS vm on another linux host with a manually written xml; macinabox was useful to get started, but I have too many mods now. I'm currently using the smbios data of iMacPro1,1.

    The serial number must match the smbios model you set, but apple should not be able to validate when you bought the device.

  2. As for Find My and the black screen, I'm quite sure it depends on gpu rendering.

    On my vm everything is fine (I'm on Monterey and cannot upgrade because of my cpus); I'm passing through a 6900xt gpu and enabled unfairgva in opencore because the TV app also showed a black screen. Now everything is fine.

     

    [Attachment: Senzanome.png — screenshot]

  3. As for the location map: my thought is that the black screen could be due to a gpu rendering requirement: are you connecting via vnc? This could explain the black screen.

    As for not being able to get the location: as far as I know, location can only be determined via wifi, not ethernet. No wifi, no location; and external wifi (a usb wifi dongle for example) probably won't work at all for location because of how it works in macOS in general — only a proper wifi pcie module passed through to the vm from the host will.

    As for imessage I have no idea, I don't use it at all; can you check that both devices are connected to the apple account? To be sure, log out, log in again and retest.

    Also, is your en0 connection seen as built-in? Check in ioreg explorer or with hackintool.

    See the following image, both en0 and en1 built-in:

     

    [Attachment: Hackintool_3.png — Hackintool showing en0 and en1 as built-in]

  4. A mac address change is not a must, but it's recommended.

    En0 must be built-in.

    Your issue was that, from apple's side, you logged into your account too many times with different devices. In reality you only changed the configuration of the vm, but each change appeared as a new device from apple's side, so they locked the account.

    Glad that you solved the issue.

  5. 44 minutes ago, plantsandbinary said:

    but he didn't say more than this

    First of all, logging into an apple account from a macOS virtual machine is not illegal and is not against apple's terms and conditions; what you can't do is virtualize macOS on a host which is not a mac. Note that qemu is also available for macOS hosts.

    I've been using my apple account in a qemu vm (besides my real macbook pro) for more than 6 years now and, apart from being banned/flagged at the beginning (which required calling an operator), basically because I didn't know what I was doing, I've never had a single issue since.

    If you want to log in with an apple account, first you need to set up your hardware so that en0 is seen as built-in; you should use a mac address belonging to apple hardware, and choose proper smbios data and a proper serial number. Do not change the serial number or other smbios data once you are logged in, because apple will see logins coming from different devices and for "security reasons" will flag your account. If you want to experiment with a macOS vm, do it offline; once you know what you're doing, freeze the configuration and go online.

    Once you know what you're doing, call apple and give them the code (if you have one) to unlock the account. It could be that the operator has no idea what you're saying and doesn't even know what the unlock code is; if that's the case, call again and speak to another operator until you find one who knows what you're talking about. I had to call 3 times to find a smart operator who unlocked my account in less than 10 seconds.

    Do not tell them you are virtualizing macOS on non-apple hardware; you can say you are virtualizing it on a mac host, so they have to unlock your account.

  6. 17 hours ago, The Transplant said:

    I did manage to restore the older file and get it running.  But I had made some changes and want to get the newer image restored so will work through your suggestions above

    If an older backup works, just replace the img file with the newer one and it should boot.

  7. Hello @The Transplant

    I never used the backup plugin, so I'm sorry but I don't know how this plugin works.

    Anyway...

    18 hours ago, The Transplant said:

    But the newest img file does not have an xml or an fd file associated with it

     

    An img file is the disk file.

    The xml file contains the emulated hardware and layout of the virtual pc.

    The edk2 ovmf code fd file is the bios of your virtual pc.

    The edk2 ovmf vars fd file is the nvram of your virtual pc.

    You need these 4 things to boot a qemu vm.

     

    18 hours ago, The Transplant said:

    But the newest img file does not have an xml or an fd file associated with it

     

    You should be able to boot the img by recreating the other 3 files: you can copy the ovmf code fd file from another vm (it's the bios, and the bios is the same for every vm), and you can also use an ovmf vars fd file copied from another vm (better if it comes from the same vm, since the vars file can contain nvram variables, such as boot order).
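    As a sketch of the copy step, using stand-in files so it can run anywhere (the real paths on Unraid are typically /usr/share/qemu/ovmf-x64/ for the code fd and /etc/libvirt/qemu/nvram/ for the per-vm vars fd):

```shell
# Demo of rebuilding the two fd files with stand-in paths; adapt the
# paths to your system before using this for real.
mkdir -p /tmp/fd-demo/nvram
printf 'ovmf-code' > /tmp/fd-demo/OVMF_CODE-pure-efi.fd         # generic bios
printf 'ovmf-vars' > /tmp/fd-demo/nvram/oldvm_VARS-pure-efi.fd  # per-vm nvram
# the code fd is the same for every vm, so reuse it as-is;
# the vars fd is per-vm, so give the new vm its own copy
cp /tmp/fd-demo/nvram/oldvm_VARS-pure-efi.fd \
   /tmp/fd-demo/nvram/newvm_VARS-pure-efi.fd
ls /tmp/fd-demo/nvram
```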

    As for the xml, you can create a new vm to generate a new xml, then change the relevant parts of this new xml to point to your img disk file and fd files. If the hardware layout is different from the old xml (for example, you add a new usb virtual controller which was not there in the original vm), windows should be smart enough not to hang. Obviously you need to know if your original vm was a uefi or bios vm, q35 or i440fx, so you can create the new vm accordingly, using ovmf or seabios and the q35 or i440fx virtual chipset.

     

    19 hours ago, The Transplant said:

    My fd file is not named ovmf_vars.fd but 20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd.  I assume that doesn't matter?

    Correct, it doesn't matter; unraid prefixes the filename with the uuid of the vm.

     

    19 hours ago, The Transplant said:

    /mnt/user/domains/Outlook/vdisk1.img - the image is currently in a backups folder - so I will move it to the corresponding folder in domains and leave this as is.

    Either edit the path in the xml to point to where the img is, or move the img to that path.

     

    19 hours ago, The Transplant said:

    Should I do anything with the 20240206_0300_5ab648ed-0c63-aa51-400c-277ece7bd277_VARS-pure-efi.fd file that is currently in the backups folder?

    This file should contain some nvram variables, but it shouldn't matter; just make sure the xml points to this file.

    If it doesn't boot you may need to enter the ovmf bios screen (by pressing the esc key repeatedly at vm boot) and change the boot order.

  8. 10 hours ago, austin said:
    2024-01-13T22:46:40.073081Z qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.01H:ECX.pcid [bit 17]

    These are warnings; those features are simply ignored because your host cpu doesn't support them.

     

    10 hours ago, austin said:

    The next "weird" thing was that the EFI partition didn't include the NvVars file

    You don't need that file; it's for emulated nvram, and you won't be emulating it.

     

    10 hours ago, austin said:

    OpenCore configurator it said this is an older efi, i just ignore this and continued following along.

    This is the issue: if you use the configurator with an older or newer version of opencore, it will mess up your configuration file.

    What is not working is opencore, either because it's not configured properly or because the image you are using is too old for the os you're trying to boot.

     

    10 hours ago, austin said:

    Again, ignore and copied it to the correct partition.

    I suggest not copying the efi folder to the macos disk: let it stay alone on its own disk and configure the vm to boot from it, so that the original efi on the macos disk is simply ignored. Otherwise, if you mess things up in the efi, it will be more difficult to mount that partition.

  9. As you can see the network type is "PCNet32"; most probably the os has few drivers for network adapters, maybe only that one.

    You can try to manually edit your xml and set 'pcnet' as the network model type, instead of 'e1000' or whatever it currently is.

     

          <model type='pcnet'/>

     

  10. From your xml you are using legacy bios boot (seabios).

    The issue could be a mismatch between legacy bios and the gpu you are passing through (the 3060 is a recent gpu with uefi support), which is unable to display legacy boot.

    You should convert your vm to uefi bios (ovmf): this means converting the installed os to uefi and also modifying the xml template to boot with the uefi-ovmf bios.
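    If you're unsure which firmware a vm uses, one quick tell is whether its xml has a pflash loader line (sketch on a sample fragment; on a real system you'd inspect the output of `virsh dumpxml <vmname>`):

```shell
# A seabios (legacy) vm has no <loader type='pflash'> line in its xml,
# while an ovmf (uefi) vm does. The fragment below is an example.
cat > /tmp/os-demo.xml <<'EOF'
<os>
  <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
</os>
EOF
if grep -q "type='pflash'" /tmp/os-demo.xml; then
  echo "uefi (ovmf)"
else
  echo "legacy (seabios)"
fi
```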

    --

    To confirm the culprit you can temporarily switch from gpu passthrough to vnc; I'm quite sure that with vnc you will see the verbose boot.

  11. Try this:

    1- add:

    video=efifb:off

    in the syslinux configuration. You find it under: Main - Boot Device - Flash - Syslinux Configuration

    Add video=efifb:off to the Unraid OS label, in the 'append' line, so that it results in this:

    append video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=1-8,13-20 iommu=pt avic=1

     

    This may not be necessary since the boot vga is not the one you are trying to pass through, but let's reduce the chance of errors. If it works, you can try removing video=efifb:off and see if it still works.
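    To double-check that the edit took, you can grep the append line (demo on a sample copy; on Unraid the real file is typically /boot/syslinux/syslinux.cfg):

```shell
# Verify the append line carries the new directive; the file below is a
# stand-in for the real syslinux configuration.
cat > /tmp/syslinux-demo.cfg <<'EOF'
label Unraid OS
  kernel /bzimage
  append video=efifb:off vfio_iommu_type1.allow_unsafe_interrupts=1 isolcpus=1-8,13-20 iommu=pt avic=1 initrd=/bzroot
EOF
grep -o 'video=efifb:off' /tmp/syslinux-demo.cfg
```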

     

    2- modify your vm in advanced view (xml mode), replacing the whole xml with this:

    <domain type='kvm'>
      <name>Windows 11</name>
      <uuid>2b7d02e7-ce93-6934-5afb-641e9b93ab6e</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows11.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>33554432</memory>
      <currentMemory unit='KiB'>33554432</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>16</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='1'/>
        <vcpupin vcpu='1' cpuset='13'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='14'/>
        <vcpupin vcpu='4' cpuset='3'/>
        <vcpupin vcpu='5' cpuset='15'/>
        <vcpupin vcpu='6' cpuset='4'/>
        <vcpupin vcpu='7' cpuset='16'/>
        <vcpupin vcpu='8' cpuset='5'/>
        <vcpupin vcpu='9' cpuset='17'/>
        <vcpupin vcpu='10' cpuset='6'/>
        <vcpupin vcpu='11' cpuset='18'/>
        <vcpupin vcpu='12' cpuset='7'/>
        <vcpupin vcpu='13' cpuset='19'/>
        <vcpupin vcpu='14' cpuset='8'/>
        <vcpupin vcpu='15' cpuset='20'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi-tpm.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/2b7d02e7-ce93-6934-5afb-641e9b93ab6e_VARS-pure-efi-tpm.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv mode='custom'>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='8' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/domains/iso/Win11_22H2_English_x64v2.iso'/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/domains/iso/virtio-win-0.1.240.iso'/>
          <target dev='hdb' bus='sata'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/cache/domains/loaders/spaces_win_clover.img'/>
          <target dev='hdc' bus='sata'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xf'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='usb' index='0' model='qemu-xhci' ports='15'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:1f:a0:a3'/>
          <source bridge='br1'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <tpm model='tpm-tis'>
          <backend type='emulator' version='2.0' persistent_state='yes'/>
        </tpm>
        <audio id='1' type='none'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x2f' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x32' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x2'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x2'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x33' slot='0x00' function='0x3'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x3'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>

     

    3- Reboot the server and start Unraid OS (NO GUI)

    4- try to run the vm

     

    I don't see any other configuration error; basically, it could hang at reboot because the drivers may expect a multifunction gpu device, as on bare metal, and the gpu in your vm was not configured as a multifunction device.

    I hope it doesn't depend on the clover bootloader you set in the vm (it shouldn't...), which I think is not needed anymore with recent windows.

     

    It could also be related to the passed-through nvme controller; sometimes passing through both gpu and nvme can create issues.

  12. None of your vms has the e1000 type emulated network card.

    Open the vm in advanced mode (xml view), find the network block, for example:

        <interface type='bridge'>
          <mac address='52:54:00:ce:dc:cb'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>

     

    and change the model type line by hand, like this:

          <model type='e1000-82545em'/>

     

    Save and boot the vm.
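    If you prefer to script the edit, a sed one-liner does the same swap (sketch on a sample fragment; normally you'd edit via the unraid xml view or `virsh edit <vmname>`):

```shell
# Swap the emulated NIC model in a libvirt xml fragment with sed.
# The file below is an example copy, not a live vm definition.
cat > /tmp/nic-demo.xml <<'EOF'
    <interface type='bridge'>
      <mac address='52:54:00:ce:dc:cb'/>
      <source bridge='br0'/>
      <model type='virtio-net'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
EOF
sed -i "s/<model type='virtio-net'\/>/<model type='e1000-82545em'\/>/" /tmp/nic-demo.xml
grep 'model type' /tmp/nic-demo.xml
```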

    If it still doesn't work, reattach diagnostics and let us know which of your 10 vms you are working on.

    Also paste the output of this terminal command, run from inside the virtual machine:

    lspci

     

  13. 6 hours ago, n0rx said:

    Do you still think the "BAR 1: assigned to efifb"

    No, I've seen that line in other logs; most probably efifb attaches early and is then detached because of the syslinux directive. If you look at the memory map, I'm quite sure efifb will not be there.
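    A quick way to check, run on the Unraid host (just a sketch; with video=efifb:off in syslinux, efifb should be absent from the iomem map):

```shell
# Check whether efifb still claims framebuffer memory on the host.
if grep -q efifb /proc/iomem; then
  echo "efifb still active"
else
  echo "efifb not claiming memory"
fi
```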

     

    6 hours ago, n0rx said:

    Do you think re-installing Unraid might resolve the issue?

    I don't think so.

     

    In my opinion it is either related to the gpu itself (note that I'm not saying the gpu isn't working) or to the motherboard (and that's why I suggested a bios update, because agesa was updated).

  14. 34 minutes ago, takkkkkkk said:

    I'm used to simple windows ways where mounting an image/iso would mean windows would simply create "C:" or "D:" so that I can access it, I never thought of being asked of "where do you want to mount to", it just doesn't really click to me that it wouldn't automatically get mounted as another unassigned devices. Once it gets mounted, does it act as folder within share? this concept seems really unusual to me...

    The tutorial refers to mounting the img disk on the host (unraid). In linux in general, you create an empty folder and mount the img onto that folder (the mount point): the files on the disk will then be shown inside the mount point, and you will have read/write permissions.

    If you mount the img in windows I think you will only have read permissions, but I may be wrong.
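    A self-contained sketch of that linux workflow (it builds its own tiny ext4 image so it can run anywhere, but it needs root; all paths are examples, not your real vdisk):

```shell
# Demo: create a small disk image, mount it on an empty folder,
# write a file (read/write works), then unmount. Needs root.
truncate -s 8M /tmp/vdisk-demo.img
mkfs.ext4 -q /tmp/vdisk-demo.img
mkdir -p /tmp/imgmount                        # the empty folder = mount point
mount -o loop /tmp/vdisk-demo.img /tmp/imgmount
touch /tmp/imgmount/hello                     # files appear inside the mount point
ls /tmp/imgmount
umount /tmp/imgmount
```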
