Posts posted by ghost82
-
Add a hostdev block, set the source address of the device you want to pass through, and set the target address of the passed-through device in the VM. Attach the device in the guest to bus 0 for machine type i440fx; for machine type q35, attach it to a bus other than 0 (bus 0 in q35 behaves like a "built-in device", which is not your case). For q35, if the bus is different from 0, check that you have a pcie-root-port with an index number equal to that of the target bus.
For a q35 machine the block will be something like this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</hostdev>
1. Your source address is 41:00.0 (bus=41, slot=0, function=0)
2. Target address is 08:00.0 (bus=8, slot=0, function=0)
3. Check that pcie-root-port with index=8 exists:
<controller type='pci' index='8' model='pcie-root-port'>
  <model name='pcie-root-port'/>
  <target chassis='8' port='0xd' hotplug='off'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
</controller>
-----
For i440fx the block will be something like this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x41' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
1. Your source address is 41:00.0 (bus=41, slot=0, function=0), the same obviously
2. Target address is 00:05.0 (bus=0, slot=5, function=0): i440fx has only bus 0
If you get errors like "double address in use", or something similar, check that the target address is not already in use by something else; in that case change the bus number (for q35) or the slot number (for i440fx).
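As a quick way to spot the "address in use" conflicts mentioned above, you can grep the guest-side addresses out of the domain XML and look for duplicates. A minimal sketch, where the heredoc stands in for real `virsh dumpxml <vm>` output (on a real system you would also want to filter on the target `<address type='pci' ...>` lines only):

```shell
#!/bin/bash
# Stand-in for `virsh dumpxml <vm>`: three guest-side (target) addresses,
# two of which collide on bus 0x08.
xml=$(cat <<'EOF'
<address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
<address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
EOF
)
# Extract bus/slot/function triplets and keep only the duplicated ones.
dupes=$(echo "$xml" | grep -o "bus='[^']*' slot='[^']*' function='[^']*'" | sort | uniq -d)
echo "conflicting target address: $dupes"
```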
-
-
9 hours ago, sterling90 said:
Are you thinking it could be a bios issue or a motherboard issue?
BIOS config or a bugged BIOS.
-
Use the attached vBIOS for the Fighter version.
That Reddit post seems to describe a different issue. Yours looks like the reset bug, but it shouldn't be: I've never found a 6000-series GPU with that bug, and my concern is the motherboard BIOS...
-
On 9/28/2022 at 12:56 AM, Everend said:
With a family PC they can power cycle it without me...
With a VM I can remote manage from work through VPN...
It is also possible to power down the whole server from the virtual machine, if that is of interest, with a qemu hook.
This, for example:
#!/bin/bash
if [[ $1 == "Monterey" ]] && [[ $2 == "stopped" ]]
then
    shutdown -h now
fi
Basically, when the virtual machine named Monterey is stopped (shut down), the whole server shuts down too with the command shutdown -h now.
Obviously, to boot the server again you need physical access.
Out of interest you can also autostart a vm when the server boots.
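For context, on a stock libvirt setup the hook above lives at /etc/libvirt/hooks/qemu, and libvirt invokes it with the VM name as $1 and the operation as $2. A hedged sketch of the same logic as a full hook file, with the shutdown call commented out for safety (the helper name want_host_shutdown is mine):

```shell
#!/bin/bash
# libvirt calls this hook as: qemu <vm-name> <operation> <sub-operation> ...
# Succeeds only for the "VM Monterey was stopped" event.
want_host_shutdown() {
    [[ "$1" == "Monterey" && "$2" == "stopped" ]]
}

if want_host_shutdown "$1" "$2"; then
    echo "Monterey stopped: powering off the host"
    # shutdown -h now   # uncomment on the real server
fi
```

For the autostart half, Unraid exposes an Autostart toggle in the VM settings; on plain libvirt, `virsh autostart Monterey` does the same.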
On 9/28/2022 at 1:12 AM, Everend said:
Even if we have a mini-PC for surfing, is it a thing to create a VM and somehow surf through that to protect / isolate the mini-PC from malicious sites?
Consider a VM as a real PC, so yes, this is possible; depending on what you want, a VM may not be strictly necessary, for example Pi-hole is available as a plugin for Unraid.
On 9/28/2022 at 1:12 AM, Everend said:
My current hardware doesn't support VMs so I don't have experience with them at all.
Things to check are VT-x and VT-d support; VT-d is highly recommended too, so you can pass hardware through to the VM.
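On a Linux box you can check for the CPU flags directly; a minimal sketch (vmx = Intel VT-x, svm = AMD-V; VT-d/AMD-Vi is a BIOS/chipset matter and, once enabled, shows up as IOMMU groups):

```shell
#!/bin/bash
# Check for hardware virtualization flags in the CPU feature list.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    result="CPU virtualization extensions present"
else
    result="no vmx/svm flag found (or not a Linux host)"
fi
echo "$result"
# For VT-d / AMD-Vi, look for IOMMU groups after enabling it in the BIOS:
# ls /sys/kernel/iommu_groups/
```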
On 9/28/2022 at 1:12 AM, Everend said:
If I have keyboard, mouse, monitor plugged into the Unraid machine then VM would work the same as a family PC. But if they are in the other room, can I do that?
If I understood correctly, for this to work you need the keyboard/mouse and GPU passed through to the VM: it will then work like a desktop PC.
If they are in another room, they'll have another device in their hands (a client), another PC? You can access the VM with remote desktop from the client and manage everything from there. Depending on the use case, a GPU passed through to the VM is recommended for graphics acceleration even if the VM is accessed remotely.
Good alternatives to remote desktop are Moonlight, Parsec, and NoMachine.
On 9/28/2022 at 1:12 AM, Everend said:
With non-tech wife/kids do I want to bother with VM or is that just making things too complicated?
It depends on how you configure the server; for them it could be the same as using a real PC. As I wrote, it is possible to autostart the VM on server boot and shut the server down on VM shutdown.
If the VM is configured with keyboard/mouse and GPU passthrough, attached to a monitor, they will not notice they're running a virtual machine.
-
You could also check for beeps from the internal speaker, if you have one, and check the BIOS POST code on the motherboard (that two-digit red display): sometimes it can point in the right direction as to why this is happening.
-
Here:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <boot order='1'/>
  <alias name='hostdev2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</hostdev>
-
16 hours ago, sterling90 said:
I can only do a GPU passthrough when there is no VFIO bind and no amdgpu blacklist
Do it the right way:
1. bind audio and video to vfio
2. set the GPU as multifunction and pass a ROM file; your settings are wrong, change from this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
</hostdev>
to this:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <rom file='/path/to/Powercolor.RX6600XT.8192.210701.rom'/>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
Replace /path/to/Powercolor.RX6600XT.8192.210701.rom with the correct path. Check that your RX 6600 XT is the Red Devil version.
3. Reboot and try
----
If you still get errors like:
Refused to change power state from D0 to D3hot
Unable to change power state from D3cold to D0
you have something wrong in your BIOS configuration, or the BIOS is simply bugged: in this case I suggest checking the VFIO group on Reddit to see if anyone else has this issue with that motherboard, and contacting Gigabyte directly, since your motherboard is still supported and you're running the latest available F7 BIOS.
-
Yes, it could be a private ticket which only you and official support can see.
-
Can you check the link? I cannot find anything.
-
Nothing more to advise, sorry; all I know is that my networks are available as soon as I start any VM: virtio, e1000, vmxnet3...
-
Nice that we found the solution, never give up! Still a mystery why the benchmark apps overcome that limit...
PS: no problem about the ordered SSD, one more SSD will find its use for sure.
-
Is it an HP Z420? --> update: yes it is, you wrote it.
If it is, I'm reading that you should have some BIOS settings for a "performance profile" or "power regulator settings": if there's a dynamic setting there, switch it to static high performance. Check all the other power settings in the BIOS, especially those related to PCIe (if any...).
I'm reading that some HP servers have power-saving modes in the BIOS that can throttle PCIe devices.
-
I think the important thing is to make trim work for the virtual disk.
virtio, and in particular virtio-blk, should support trim; try adding discard='unmap':
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' discard='unmap'/>
  <source file='/mnt/user/domains/Windows 10/vdisk1.img' index='2'/>
  <backingStore/>
  <target dev='hdc' bus='virtio'/>
  <boot order='1'/>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</disk>
Check the available size of the real disk in Unraid, then copy a big file onto the virtual disk from inside the VM.
Check the available size of the real disk again: it should have decreased.
Delete the copied file from inside the VM.
Check the available size of the real disk: if it increases again, trim is working correctly.
This is what JorgeB pointed out in his reply.
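The free-space check above can be scripted on the host; a sketch (DISK_PATH defaults to the current directory here; on your box it would be the share holding vdisk1.img, e.g. /mnt/user/domains):

```shell
#!/bin/bash
# Report available space on the filesystem holding the vdisk, in KiB.
DISK_PATH="${DISK_PATH:-.}"
free_kb() { df --output=avail "$DISK_PATH" | tail -n 1 | tr -d ' '; }

before=$(free_kb)
# ...copy a big file inside the VM here, then delete it and let the
# guest trim (Windows: Optimize Drives; Linux guest: fstrim)...
after=$(free_kb)
echo "before=${before} KiB, after=${after} KiB"
```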
-
59 minutes ago, Mr.Will said:
I tried adding rotation_rate='1' as you suggest but it errors saying it only works with SATA, SCSI or IDE
True, sorry, virtio is not compatible with rotation_rate.
-
My guess is that this shouldn't be related to VMs... what about disabling bonding (bond0) and using only the br0 bridge?
-
15 minutes ago, robd said:
edit - I used the kombustor benchmark app within MSI afterburner, and it's able to max out the GPU. So I guess that rules out VM config/setting that is limiting the GPU usage, and it's all pointing at the golf sim app.
That makes things more difficult, because apparently the GPU works well. Try the advice given above, and read it all again because I edited my posts.
-
13 minutes ago, robd said:
Can you explain further item #2?
This is my gpu, audio and video parts passed:
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
  <rom file='/opt/gpu-bios/6900xt.rom'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
</hostdev>
Real hardware addresses (seen by the host) are:
video at 06:00.0
audio at 06:00.1
--> add multifunction='on' to the address line, so that I can set the guest addresses to:
video: 03:00.0
audio: 03:00.1
Same bus (03), same slot (00), different functions (0 and 1)
13 minutes ago, robd said:
HWiNFO app shows GPU Performance Limiters.... Utilization is yes
From what I'm reading, this is normal if the GPU isn't under load. Are all the cables connected to the power port(s) of the GPU, and does your PSU have enough power for everything?
13 minutes ago, robd said:
Any workaround if there are application limitations if it doesn't like to be run in a VM, or am I just SOL?
It should just exit or crash, not limit performance; I was talking about bad programming, if this is the case...
-
What about other software? Like benchmark software or other games?
Could it simply be that the simulator is not programmed well enough to run in a VM?
As far as the VM settings go, I can only suggest to:
1. use the q35 machine type instead of i440fx for better PCIe compatibility
2. put the GPU in a multifunction device: video and audio parts on the same bus, same slot, different functions
3. check for IRQ conflicts inside the guest and use the MSI fix to switch from IRQs to MSI(-X) if there are conflicts
(4). use OVMF instead of SeaBIOS (but you'd need to convert the disk or reinstall Windows), so that the GPU uses the UEFI vBIOS instead of the legacy one
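For point 1, the machine type is the machine attribute of the <type> element in the domain XML; a hedged sketch (the exact pc-q35-x.y version string depends on the QEMU build installed):

```xml
<os>
  <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
</os>
```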
-
I read that the Fenvi T919, or rather its BCM94360CD chipset, can be problematic with some SMBIOS models, like the Mac Pro 7,1 (which you are using), the iMac Pro 1,1, or anything newer than the iMac 15,1.
The ideal SMBIOS for this chipset should be that of the iMac 15,1 (Monterey not officially supported for this model).
Someone reported that the SMBIOS of the iMac 17,1, officially supported by Monterey, still works with that chipset.
-
mmm... if the TV was on before starting the VM and it didn't make any difference, I'm not confident the dummy plug will change things.
Did you try to force dgpu acceleration for that app/game in windows settings?
For example:
-
18 hours ago, Kilrah said:
it likely won't be something made by Apple
I would say this for sure!
How many developers abandoned their projects because of continuous Apple changes, not for profitable advantages for the end user, but just to lock down their systems more... You have no idea how many wireless dongles I changed in 5 years, just because they stopped working in minor OS revisions.
Want to buy Apple? Just use it with its software, and take into account that Apple software support will end very fast; Apple's business is selling hardware, remember...
-
Try to change to this:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/mnt/user/domains/Windows 10/vdisk1.img' index='2'/>
  <backingStore/>
  <target dev='hdc' bus='virtio' rotation_rate='1'/>
  <boot order='1'/>
  <alias name='virtio-disk2'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</disk>
to force it to be seen as an SSD.
-
-
10 minutes ago, robd said:
GPU usage NEVER goes above 15%.
10 minutes ago, robd said:To add, this is a headless setup and I’m using Moonlight to connect and stream video
This could be the issue: when the OS runs headless, I think graphics acceleration is disabled and everything is loaded onto the CPU side.
If you have a monitor around, just plug it into the GPU output and turn it on, then stream via Moonlight and see if there's any difference.
If it makes a difference, you can think about buying a dummy HDMI plug (or whatever connection your GPU has) to simulate an attached monitor.
-
On 10/1/2022 at 7:18 AM, chocorem said:
that is the issue, as soon as I pass the GPU though, I get a black screen .... os is not loading
Enable remote access to the macOS VM, just to be sure you only have a black screen while the guest OS is actually booting; if it is booting, you need to modify OpenCore's config.plist and add a boot-arg so the Nvidia drivers are used.
Quote:
The WebDrivers in Sierra and High Sierra also support another boot argument called nvda_drv_vrl=1
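In OpenCore's config.plist, boot-args live under NVRAM → Add → 7C436110-AB2A-4BBB-A880-FE41995C9F82; a sketch (keepsyms=1 is just an example of an existing boot-arg to keep, with nvda_drv_vrl=1 appended):

```xml
<key>NVRAM</key>
<dict>
    <key>Add</key>
    <dict>
        <key>7C436110-AB2A-4BBB-A880-FE41995C9F82</key>
        <dict>
            <key>boot-args</key>
            <string>keepsyms=1 nvda_drv_vrl=1</string>
        </dict>
    </dict>
</dict>
```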
Unraid OS version 6.11.1 available
in Announcements
Posted · Edited by ghost82
I'm wondering if it is a good idea to use only one kernel, and in particular one of the 5.19.x series. I'm seeing lots of kernel panics in users' logs, especially related to AMD GPUs passed through, caused by the kernel, for example:
I was also experiencing some minor issues on other boxes with kernels 5.18.x and 5.19.x and decided to skip these kernels. All good with 5.15.x or 5.10.x; 5.15.x will be supported till the end of 2023, 5.10.x till the end of 2026.
Is it too much effort to let users choose between 2 kernel versions? Maybe support the latest one and the 5.10.x LTS?