Posts posted by ghost82

  1. 15 hours ago, rvijay007 said:

    Is this an issue because both GPUs are on the same IOMMU group?

    Probably yes

     

    15 hours ago, rvijay007 said:

    Is there any safe way to break up the cards into different IOMMU groups, but keep the multiple functions the cards on the same IOMMU group?

    Why do you want this? Even if audio and video are in different IOMMU groups there should be no issue...

    Just enable the ACS override patch in the config, set it to 'both' (meaning downstream,multifunction), reboot the server, check your IOMMU groups again, and reassign devices to VMs if needed.

    Set the GPU as multifunction in the target address, as you did.
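
    For reference, setting it to 'both' just adds a kernel parameter to the append line in the flash syslinux configuration; it ends up looking something like this (a sketch, your other parameters may differ):

    append pcie_acs_override=downstream,multifunction initrd=/bzroot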
     

  2. QEMU 7.2 released; important notes for macOS:

     

    Add TCG & KVM support for MSR_CORE_THREAD_COUNT

     

    Commit 027ac0cb516 ("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") added support for the MSR_CORE_THREAD_COUNT MSR to HVF. This MSR is mandatory to execute macOS when run with -cpu host,+hypervisor. This patch set adds support for the very same MSR to TCG as well as KVM - as long as host KVM is recent enough to support MSR trapping. With this support added, I can successfully execute macOS guests in KVM with an APFS enabled OVMF build, a valid applesmc plus OSK and -cpu Skylake-Client,+invtsc,+hypervisor

     

    OpenCore added a workaround for this several years ago, but now it's built in.

  3. Is it error code 56?

    Localized Windows ISOs can have issues with network drivers; if you search the internet you'll see it's still not clear whether it depends on Windows or QEMU.

    For the q35 machine type a fix is to use machine version 5.1, but it seems that doesn't fix it for the i440fx type...

    You could try this "funny" fix I found; maybe it will work. Follow these instructions exactly, it worked for at least 2 people:

    https://github.com/virtio-win/kvm-guest-drivers-windows/issues/750#issuecomment-1345280986
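
    For reference, the machine version is pinned in the os block of the VM xml; for the q35 workaround above it would look something like this (a sketch):

    <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>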

  4. Hi JonathanM, I don't think they are mutually exclusive: one can have a hostdev block for the m.2 controller and a vdisk attached to a virtual controller, obviously saved somewhere other than the m.2.

    This discussion has no replies because there is no data to work with; what does he expect to get if all he says is "I have a problem"?

     

     

  5. 11 minutes ago, partyx said:

    we found that only selecting the 2 interfaces on BUS 2 works

    Thanks for reporting. It can be quite tricky even to pass through the controllers, so consider yourself lucky to have 2 NICs available in the VM.

    It could be an initialization issue in the driver: since you have 4 controllers, coupled 2 by 2 in multifunction devices, having one controller of a multifunction pair missing may cause issues for the driver or for the firmware.

     

    So, having only one NIC available for the VM (in your last screenshot) was probably due to the error in vfio-pci.cfg:

    BIND=14e4:165f 0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1 

    Most probably only the controller on 02:00.0 was available, but correcting the syntax in vfio-pci.cfg made both available.

  6. Nice that it works now.

    Just for completeness, before getting it going I could see the following errors:

    8 hours ago, partyx said:

    BIND=14e4:165f 0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1

     

    it's wrong, should be:

    BIND=0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1|14e4:165f
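
    For reference, the expected pattern in vfio-pci.cfg is a pipe joining the PCI address to its vendor:device id, with a space between entries (a sketch of the format, not literal values):

    BIND=<domain:bus:slot.function>|<vendor:device> <domain:bus:slot.function>|<vendor:device>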

     

    5 hours ago, partyx said:

    [screenshot of the flash config folder showing a file named vfio.pci.cfg]

     

    The file name vfio.pci.cfg is wrong; it should be vfio-pci.cfg.

  7. Nice that you fixed it!

    testdisk is very powerful; I remember using it on a physically damaged hard drive to recover data.

     

    14 hours ago, chonnymon said:

    do you have any advice for how I can prevent this happening again? or is there anything that could periodically back up the VM?

    I think this happened because of one of these 2 reasons:

    1. an improper shutdown of the VM (like a forced shutdown), or something else inside the guest, corrupted the filesystem;

    2. the physical hard drive where the vdisk is saved is damaged.

     

    I don't know of any plugin to back up VMs (I don't use any), but you could simply shut down the VM and copy/back up the vdisk image, ideally onto another physical hard drive.

    All the data is in the vdisk: it can be used when creating a new VM, or you can simply mount the partitions contained in the vdisk to access the data.

    You could automate the backup with a script + cron job; the logic should be (see the sketch after the list):

    1. check if the VM is running

    2. if it's running, shut down the VM

    3. copy/backup the vdisk image
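
    A minimal sketch of such a script, assuming the vdisk path and VM name from this thread (adjust to yours):

    #!/bin/bash
    VM="Sharepoint"
    SRC="/mnt/user/domains/Sharepoint/vdisk1.img"
    DST="/mnt/user/backups/Sharepoint"    # ideally a share on another physical drive

    mkdir -p "$DST"

    # 1. check if the VM is running
    if virsh domstate "$VM" | grep -q running; then
        # 2. if it's running, shut down the VM and wait until it's off
        virsh shutdown "$VM"
        while virsh domstate "$VM" | grep -q running; do sleep 5; done
    fi

    # 3. copy/backup the vdisk image (dated copy)
    cp --sparse=always "$SRC" "$DST/vdisk1-$(date +%F).img"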

  8. Syslinux config looks good to me, but I would set it to:

    pcie_acs_override=downstream,multifunction

    Reboot the server.

    Go to your IOMMU groups through the Unraid GUI and see if you can put checkmarks on the ethernet controllers that you want to pass through, save, and reboot the server.

    Now go to your VM and see if anything shows up.

    Note that you can't pass through ethernet controllers that are in use by Unraid (which seems to be the case, since you are setting eth0 --> eth3 in Unraid); their boxes will show greyed out in the IOMMU groups.

    --

    The above instructions may not work.

    Be sure to have another device to attach the Unraid USB to, in case something goes wrong and you need to restore a backup.

    I'm not sure, but you may need to set the config manually, since all the controllers have the same vendor/device id.

    1. back up the Unraid USB, so you can restore it if something goes wrong

    2. open config/vfio-pci.cfg on the Unraid USB stick with a text editor

    3. Add BIND=0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1|14e4:165f

    Note that I didn't include 01:00.0, which should be eth0, needed and reserved for Unraid.

     

    Save and reboot, and your eth1 --> eth3 (3 ports) should now be isolated, without losing eth0 connectivity for Unraid.
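
    After the reboot you can verify the isolation worked: each bound controller should report vfio-pci as its kernel driver (a sketch, using one of the addresses above):

    lspci -nnk -s 02:00.0
    # look for: Kernel driver in use: vfio-pci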

  9. 5 minutes ago, BUSTER said:

    the vbios that i created myself with gpuz didn't work

    Did you hex edit the gpu-z dumped file to remove the nvflash header?

    Rom files dumped with GPU-Z contain the so-called nvflash header; with that header you can flash the rom into the GPU with the nvflash utility.

    With VMs we do not need to flash the GPU, and QEMU doesn't like that header, so you have to remove it; the rom must start with 0x55AA (hex).
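
    If you don't want to use a hex editor, a minimal shell sketch (file names are examples; it assumes the first 0x55AA in the dump is the real rom start, i.e. the signature doesn't also appear inside the header):

    # find the byte offset of the first 55 AA signature
    offset=$(LC_ALL=C grep -abo $'\x55\xaa' dump.rom | head -n1 | cut -d: -f1)
    # drop everything before it
    dd if=dump.rom of=vbios.rom bs=1 skip=$offset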

  10. 12 minutes ago, chonnymon said:

    Sorry if this is obvious question, to attempt repair with fsck would the command just go in unraid terminal

    I did it in the past but I can't remember, sorry; actually I'm not even sure fdisk or fsck can do it..

    If the partitions are still on the vdisk you may be able to repair/re-create the partition table: maybe gpart can also do it, see here (gpart/parted should be available in Unraid; note that the link is about real hard drives, /dev/hda, etc., but you can apply those commands to vdisk images too):

    https://ubuntuforums.org/showthread.php?t=370121
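
    To work on the image with standard disk tools you can attach it as a loop device first; a minimal sketch (path taken from this thread):

    # attach the vdisk image as a loop device, scanning for partitions
    losetup -fP --show /mnt/user/domains/Sharepoint/vdisk1.img    # prints e.g. /dev/loop0
    fdisk -l /dev/loop0      # inspect it like a real disk
    testdisk /dev/loop0      # or point testdisk/gpart/parted at the loop device
    losetup -d /dev/loop0    # detach when done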

     

    Just experiment with the tools and see if you get somewhere; that's why it's important to have a backup of the original corrupted vdisk, so you can copy it again and start over.

  11. 13 minutes ago, chonnymon said:
    root@Meshify:~# fdisk -l /mnt/user/domains/Sharepoint/vdisk1.img
    Disk /mnt/user/domains/Sharepoint/vdisk1.img: 200 GiB, 214748364800 bytes, 419430400 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

     

    If you are sure the output is only this, your vdisk got corrupted, as it doesn't list any partition.

    The output should list something like:

    [root@tower kali]# fdisk -l /media/6TB/kali/Kali.img
    Disk /media/6TB/kali/Kali.img: 150 GiB, 161061273600 bytes, 314572800 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: DF66D708-DD93-4359-A4C5-C03FBBE746CE
    
    Device                      Start       End   Sectors   Size Type
    /media/6TB/kali/Kali.img1 1026048 314570751 313544704 149.5G Linux filesystem
    /media/6TB/kali/Kali.img3    2048   1026047   1024000   500M EFI System
    
    Partition table entries are not in disk order.

     

    You can try to repair the disk with fdisk or any other command-line tool for repairing filesystems; make a backup of the disk before attempting the repair.

  12. On 11/23/2021 at 7:19 AM, runamuk said:
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x0' multifunction='on'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
        </hostdev>

     

    From a general point of view this is also wrong.

    The multifunction tag should not be in the source block but in the target address block.

    Obviously, for a pcie gpu in a q35 machine type, the target address bus should be other than 0: the GPU is not built-in, so it should be attached to a pcie-root-port (index=1...n), i.e. bus=1...n.
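
    A corrected pair would look something like this (a sketch; bus 0x01 assumes a pcie-root-port with index 1 exists in the xml):

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
        </hostdev>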

  13. On 11/23/2021 at 7:19 AM, runamuk said:
        <type arch='x86_64' machine='pc-q35-6.1'>hvm</type>

     

    This xml won't work; you can't just change the machine type to q35 and keep using pci-root.

    q35 has a different layout, based on pcie-root on bus 0, with pcie-root-port(s) attached to bus 0, and devices attached either to pcie-root (built-in devices) or to pcie-root-port(s).

    The fastest way is to re-create the VM with the q35 machine type.
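
    For reference, a q35 xml contains controller definitions like these (a sketch of what libvirt generates; most attributes omitted), and passed-through devices then target bus=1...n:

        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='pcie-root-port'/>
        <controller type='pci' index='2' model='pcie-root-port'/>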

     

  14. 13 hours ago, BUSTER said:

    and found another Vbios file

    Just to add more info to this, when using a downloaded vbios file, always make sure that it corresponds to your gpu, especially for:

    - amount of memory

    - gpu clock

    - memory clock

    - memory type

    - (memory brand)

    - if running a VM with OVMF, the vbios must support UEFI

     

    It's possible to use a different vbios (by different I mean same GPU model, but different brand) than that of the real GPU, but the GPU will fail if these things do not match the actual hardware.
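
    To check the UEFI part, the rom-parser tool (https://github.com/awilliam/rom-parser) can tell whether a vbios file contains an EFI image; a sketch:

    ./rom-parser downloaded_vbios.rom
    # a UEFI-capable rom lists an image of type 0x03 (EFI) in the output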

  15. 6 hours ago, Manthony said:

    I am also seeing the XML file seemingly revert that section when I reopen it

    Make sure you have this at the top of your xml:

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

    otherwise the qemu:override section is considered invalid.

     

    6 hours ago, Manthony said:

    I also got the "hostdev0 not valid" issue

    hostdev0 is an alias for the device; if not specified in the xml it is assigned automatically by libvirt/qemu, so yours may differ from hostdev0 (look at the libvirt/qemu logs in your diagnostics to find your alias); you can manually assign whatever alias you want in the hostdev section of your xml.

    Quote (from the libvirt documentation):

    The <qemu:device> sub-element groups overrides for a device identified via the alias attribute. The alias corresponds to the <alias name=''> property of a device. It's strongly recommended to use user-specified aliases for devices with overridden properties.
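
    For example, a user-specified alias (libvirt requires these to start with 'ua-') set in the hostdev block and referenced by the override; a sketch, where 'ua-gpu' is just an example name and the property line is a placeholder:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <alias name='ua-gpu'/>
      ...
    </hostdev>
    ...
    <qemu:override>
      <qemu:device alias='ua-gpu'>
        <qemu:frontend>
          <qemu:property name='...' type='string' value='...'/>
        </qemu:frontend>
      </qemu:device>
    </qemu:override>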

     

    Note also that you need at least libvirt 8.2.0 to use qemu:frontend
