Posts posted by ghost82
-
Qemu 7.2 released, important notes for macOS:
Add TCG & KVM support for MSR_CORE_THREAD_COUNT
Commit 027ac0cb516 ("target/i386/hvf: add rdmsr 35H MSR_CORE_THREAD_COUNT") added support for the MSR_CORE_THREAD_COUNT MSR to HVF. This MSR is mandatory to execute macOS when run with -cpu host,+hypervisor. This patch set adds support for the very same MSR to TCG as well as KVM - as long as host KVM is recent enough to support MSR trapping. With this support added, I can successfully execute macOS guests in KVM with an APFS enabled OVMF build, a valid applesmc plus OSK and -cpu Skylake-Client,+invtsc,+hypervisor
OpenCore already added a workaround several years ago, but now the fix is built into qemu.
-
Is it error code 56?
Localized Windows isos can have issues with network drivers; if you search the internet you'll find it's still not clear what it depends on, windows or qemu.
For the q35 machine type a fix is to use machine version 5.1, but it seems that doesn't fix it for the i440fx type...
You could try this "funny" fix I found, maybe it will work:
follow these instructions exactly; it worked for at least 2 people:
https://github.com/virtio-win/kvm-guest-drivers-windows/issues/750#issuecomment-1345280986
-
Does it change anything if you switch to machine version 5.1?
-
Hi JonathanM, I don't think they are mutually exclusive: one can have a hostdev block for the m.2 controller and a vdisk attached to a virtual controller, obviously saved somewhere other than the m.2.
This discussion has no replies because there's no data to work with; what does he expect to get if all he says is "I have a problem"?
-
11 minutes ago, partyx said:
we found that only selecting the 2 interfaces on BUS 2 works
Thanks for reporting. It can be quite tricky even to pass through the controllers, so consider yourself lucky to have 2 nics available in the vm.
It could be an initialization issue in the driver: you have 4 controllers, coupled 2 by 2 in multifunction devices, and having one controller of a multifunction pair missing may cause issues for the driver or for the firmware.
So, having only one nic available for the vm (in your last screenshot) was probably due to the error in the vfio-pci.cfg:
BIND=14e4:165f 0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1
Most probably only the controller at 02:00.0 was available, but fixing the syntax in vfio-pci.cfg made both available.
-
If the vm is set up correctly for all the passed controllers, it could be a configuration issue (?).
I know @Ford Prefect is a sort of guru for network related things, maybe he knows what's wrong.
-
11 hours ago, Manthony said:
logs continue to show many timeouts
I'm not confident it will work, but you could try adding to your syslinux config:
iommu=soft
and/or
pci=noats
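For reference, this is roughly how the append line in your syslinux config could look with both options added (a sketch only; keep your existing label and options as they are):

```
label Unraid OS
  kernel /bzimage
  append iommu=soft pci=noats initrd=/bzroot
```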
-
Nice that it works now.
Just for completeness, before you got it going I could see the following errors:
8 hours ago, partyx said:
BIND=14e4:165f 0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1
it's wrong, should be:
BIND=0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1|14e4:165f
5 hours ago, partyx said:
Name of the file vfio.pci.cfg is wrong, should be vfio-pci.cfg
-
Nice that you fixed it!
testdisk is very powerful; I remember using it on a physically damaged hd to recover data.
14 hours ago, chonnymon said:
do you have any advice for how I can prevent this happening again? or is there anything that could periodically back up the VM?
I think this happened for one of these 2 reasons:
1. improper shutdown of the vm (like a forced shutdown), or anything else inside the guest that corrupted the filesystem;
2. a damaged physical hd where the vdisk is saved
I don't know of any plugin to back up vms (I don't use any), but you could simply shut down the vm and copy/back up the vdisk image, preferably to another physical hd.
All the data is in the vdisk: it can be used when creating a new vm, or you can simply mount the partitions contained in the vdisk to access the data.
You could automate the backup with a script + cron job; the logic should be:
1. check if the vm is running
2. if it's running, shutdown the vm
3. copy/backup the vdisk image
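The three steps above can be sketched as a small script; the vm name, paths, and the ~5 minute timeout are hypothetical, and virsh is assumed to be available on the host for the shutdown/state checks:

```shell
#!/bin/bash
# Sketch of the backup logic above, assuming a raw vdisk image and virsh on
# the host. VM name, paths, and the timeout are hypothetical examples.

backup_vm() {
  local vm="$1" vdisk="$2" dest="$3"

  # 1. check if the vm is running
  if virsh domstate "$vm" 2>/dev/null | grep -q running; then
    # 2. if it's running, shut down the vm (clean guest shutdown), then wait
    virsh shutdown "$vm"
    for _ in $(seq 1 60); do
      virsh domstate "$vm" 2>/dev/null | grep -q running || break
      sleep 5
    done
  fi

  # 3. copy/backup the vdisk image (preferably to another physical hd)
  cp --sparse=always "$vdisk" "$dest"
}

# Example with hypothetical names, e.g. called from a cron job:
# backup_vm "Sharepoint" /mnt/user/domains/Sharepoint/vdisk1.img /mnt/disks/backup/vdisk1.img
```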
-
OVMF from edk2 repository, version stable 202211:
https://github.com/tianocore/edk2/
OVMF from audk repository, based on edk2 Stable 202211:
-
https://download.gigabyte.com/FileList/Manual/mb_b450m-ds3h-v2_e.pdf
SVM mode
https://www.youtube.com/watch?v=Q-_3Hzl2gvI
search also for iommu and set it to enabled if you want to pass through devices
-
Look at my edited post above and go/try the manual way
-
Syslinux config looks good to me, but I would set it to:
pcie_acs_override=downstream,multifunction
Reboot the server.
Go to your iommu groups through the unraid gui and see if you can put checkmarks on the ethernet controllers that you want to pass through, save and reboot the server.
Now go to your vm and see if something shows.
Note that you can't pass through ethernet controllers that are in use by unraid (which seems to be the case, since you have eth0-->eth3 set in unraid); their boxes will show greyed out in the iommu groups.
-
The above instructions may not work.
Be sure to have another device to attach the unraid usb to, in case something goes wrong and you need to restore a backup.
I'm not sure, but you may need to set the config manually since all the controllers have the same vendor/device id.
1. backup the unraid usb, so you can restore it if something goes wrong
2. open config/vfio-pci.cfg on the unraid usb stick with a text editor
3. add BIND=0000:01:00.1|14e4:165f 0000:02:00.0|14e4:165f 0000:02:00.1|14e4:165f
Note that I didn't include 01:00.0, which should be eth0, needed and reserved for unraid.
Save and reboot, and your eth1-->eth3 (3 ports) should now be isolated, without losing eth0 connectivity for unraid.
-
5 minutes ago, BUSTER said:
the vbios that i created myself with gpuz didn't work
Did you hex edit the gpu-z dumped file to remove the nvflash header?
Rom files dumped with gpu-z contain the so-called nvflash header; with that header you can flash the rom into the gpu with the nvflash utility.
With vms we don't need to flash the gpu, and qemu doesn't like that header, so you have to remove it; the rom must start with 0x55AA (hex).
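If you prefer the command line over a hex editor, here is a minimal sketch that finds the first 0x55AA signature and cuts everything before it; the file names are hypothetical, and you should keep the original dump as a backup:

```shell
#!/bin/bash
# Sketch: strip the nvflash header from a gpu-z rom dump so the file starts
# with the 0x55AA signature qemu expects. File names are hypothetical.

strip_nvflash_header() {
  local in="$1" out="$2"
  local pat=$'\x55\xaa' off
  # Byte offset of the first 55 AA pair in the dump
  off=$(LC_ALL=C grep -aobm1 -F -- "$pat" "$in" | head -n1 | cut -d: -f1)
  [ -n "$off" ] || { echo "no 0x55AA signature found" >&2; return 1; }
  # Keep everything from the signature onward (tail -c is 1-based)
  tail -c +"$((off + 1))" "$in" > "$out"
}

# Example (hypothetical file names):
# strip_nvflash_header gpuz_dump.rom vbios_fixed.rom
```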
-
-
12 minutes ago, chonnymon said:
Sorry if this is obvious question, to attempt repair with fsck would the command just go in unraid terminal
I did it in the past but I can't remember, sorry; actually I'm not even sure fdisk or fsck can do it...
If the partitions are still on the vdisk you may be able to repair/re-create the partition table: maybe gpart can do it too, see here (gpart/parted should be available in unraid, and note that the link is about real hds, /dev/hda, etc., but you can apply those commands to vdisk images too):
https://ubuntuforums.org/showthread.php?t=370121
Just experiment with the tools and see if you get somewhere; that's why it's important to have a backup of the original corrupted vdisk, so you can copy it again and start over.
-
-
13 minutes ago, chonnymon said:
root@Meshify:~# fdisk -l /mnt/user/domains/Sharepoint/vdisk1.img
Disk /mnt/user/domains/Sharepoint/vdisk1.img: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
If you are sure this is the whole output, your vdisk got corrupted, as no partition is listed on it.
The output should list something like:
[root@tower kali]# fdisk -l /media/6TB/kali/Kali.img
Disk /media/6TB/kali/Kali.img: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: DF66D708-DD93-4359-A4C5-C03FBBE746CE
Device                      Start       End   Sectors   Size   Type
/media/6TB/kali/Kali.img1 1026048 314570751 313544704 149.5G Linux filesystem
/media/6TB/kali/Kali.img3    2048   1026047   1024000   500M EFI System
Partition table entries are not in disk order.
You can try to repair the disk with fdisk or any other command line tool that repairs partition tables/filesystems; make a backup of the disk before attempting the repair.
-
-
Paste the output of these two commands:
fdisk -l /mnt/user/domains/Sharepoint/vdisk1.img
qemu-img info /mnt/user/domains/Sharepoint/vdisk1.img
-
1 minute ago, Syrincs said:
Do you mean this?
yes, change it to q35-5.1, save and reboot
-
On 11/23/2021 at 7:19 AM, runamuk said:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x0' multifunction='on'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
</hostdev>
From a general point of view this is also wrong.
The multifunction attribute should be not in the source block but in the target address block.
Obviously, for a pcie gpu in a q35 machine type, the target address bus should be other than 0: the gpu is not built-in, so it should be attached to a pcie-root-port (index=1...n), i.e. bus=1...n.
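Putting those two fixes together, a corrected sketch of the two hostdev blocks above could look like this (assuming a pcie-root-port already exists at bus 0x01 in the xml):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
</hostdev>
```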
-
On 11/23/2021 at 7:19 AM, runamuk said:
<type arch='x86_64' machine='pc-q35-6.1'>hvm</type>
This xml won't work; you can't just change the machine type to q35 and keep pci-root.
q35 has a different layout, based on pcie-root on bus 0, with pcie-root-port(s) attached to bus 0, and devices attached either to pcie-root (built-in devices) or to pcie-root-port(s).
The fastest way is to re-create the vm with the q35 machine type.
-
13 hours ago, BUSTER said:
and found another Vbios file
Just to add more info to this: when using a downloaded vbios file, always make sure that it corresponds to your gpu, especially for:
- amount of memory
- gpu clock
- memory clock
- memory type
- (memory brand)
- if running a vm with ovmf, the gpu must support UEFI
It's possible to use a different vbios (by different I mean same gpu model, but different brand) than that of the real gpu, but the gpu will fail if these things don't match the actual gpu.
-
6 hours ago, Manthony said:
I am also seeing the XML file seemingly revert that section when I reopen it
Make sure you have this at the top of your xml:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
otherwise the qemu:override section is considered invalid.
6 hours ago, Manthony said:
I also got the "hostdev0 not valid" issue
hostdev0 is an alias for the device; if not specified in the xml it's assigned automatically by libvirt/qemu, so yours may be different from hostdev0 (look at the libvirt/qemu logs in your diagnostics and you will find your alias); you can manually assign whatever alias you want in the hostdev section of your xml.
Quote:
The <qemu:device> sub-element groups overrides for a device identified via the alias attribute. The alias corresponds to the <alias name=''> property of a device. It's strongly recommended to use user-specified aliases for devices with overridden properties.
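For example, a user-specified alias in the hostdev block could look like this (note that libvirt requires user aliases to start with 'ua-'; 'ua-gpu' is a made-up name):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
  </source>
  <alias name='ua-gpu'/>
</hostdev>
```

The qemu:device override section would then reference alias='ua-gpu'.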
Note also that you need at least libvirt 8.2.0 to use qemu:frontend
-
I would use a utility such as gparted; you can download a free bootable live iso.
Create a new vm with the vdisk as a secondary disk, booting the gparted live iso.
Then partition the additional space and merge it into the actual 30G partition.
Yes, you need to extend the partition.
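If the vdisk is a raw image you first need to grow the image file itself before gparted can see any extra space; a minimal sketch (path and size are hypothetical, and for qcow2 images you'd use qemu-img resize instead):

```shell
#!/bin/bash
# Sketch: grow a RAW vdisk image in place. Shut down the VM and back up the
# image before resizing; path and size below are hypothetical.

grow_raw_vdisk() {
  local img="$1" extra="$2"
  # Extend the raw image; the new space shows up as unallocated in gparted.
  truncate -s "+$extra" "$img"
}

# Example (hypothetical path/size):
# grow_raw_vdisk /mnt/user/domains/MyVM/vdisk1.img 20G
# ...then boot the gparted live iso in the vm and extend the 30G partition.
```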
Multiple GPUs connecting to Multiple VMs - Issues Running Simultaneously
in VM Engine (KVM)
Probably yes.
Why do you want this? Even if audio and video are in different iommu groups there should be no issue...
Just enable the acs override patch in the config, set it to 'both' (meaning downstream,multifunction), reboot the server, check your iommu groups again, and reassign devices to vms if needed.
Set the gpu in the target as multifunction, as you did.