Ancalagon

Community Answers

  1. This appears to only be a problem with VNC. If I switch to SPICE, it scales correctly.
  2. I have an Ubuntu Linux VM that I've been using at 1920x1080, but I'd like to run it at 3840x2160 for full resolution on my 4K monitors. The default QXL video driver doesn't support resolutions that high, but if I change the video model type to virtio (in either the XML or virt-manager), I can select resolutions up to 5120x2160. The only problem is that selecting 3840x2160 or higher triggers a bug where the output scales wrong: the screen renders too large and the right third is cut off. VNC will scroll up and down to reach the bottom of the cut-off screen, but it's impossible to reach the portion cut off horizontally. It behaves the same no matter which VNC client I use. Also, the mouse cursor position is not scaled the same way as the display, so clicks don't line up with where the cursor appears; they land on the part of the screen that would be under the cursor if it were scaled properly. This makes it difficult to confirm the resolution, or to switch it back once confirmed, except with the keyboard. Has anyone else experienced this? Which part of the virtualization stack would be responsible for this bug, the virtio video driver? And could that component be upgraded, hopefully with a fix? (Screenshots: 1920x1080; 3840x2160 with the right side cut off.) Note: the "Scale" option doesn't make a difference; 100% and 200% both behave the same way. I'd like 4K resolution with 200% scaling when using my smaller 4K laptop monitor and lower scaling on my larger 4K external display.
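For reference, the video model change described above is a one-element edit in the libvirt domain XML — a minimal sketch, assuming libvirt's standard domain XML layout (the heads value is an illustrative default; libvirt assigns the PCI address automatically):

```xml
<!-- Replace the default <model type='qxl' .../> with virtio -->
<video>
  <model type='virtio' heads='1' primary='yes'/>
</video>
```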
  3. After reinstalling Windows 11 on a new VM recently, I realized that the display timeout settings no longer lock the console as well. It used to lock the console 1 minute after the display timed out. The power setting for my VM is set to never sleep (since the Unraid host doesn't sleep anyway), and no matter what I change the display timeout to, the screen is not locked after waking the display. In Settings -> Accounts -> Sign-in options there's the setting If you've been away, when should Windows require you to sign in again?, but the only options are Never or When PC wakes up from sleep. Neither of these options covers the display timing out! On my laptop this setting is set to Every Time, although it's grayed out with the message Windows Hello is preventing some options from being shown. From searching around, it seems this may have something to do with the VM not supporting the S0 low power idle sleep state; you can see the supported states with the powercfg /a command. It seems Microsoft has simply broken this functionality for a VM's hardware configuration. The only workaround I've been able to find is to use the old Screen Saver settings: select On resume, display logon screen and set the Wait timeout to 1 minute after the display timeout (to replicate the old lock behavior), with the screen saver itself set to (None). Has anyone else run into this issue and found a better workaround?
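The screen-saver workaround above can also be applied via the registry — a sketch assuming the classic per-user screen-saver values under HKCU (the timeout value is illustrative; with no SCRNSAVE.EXE value set, the saver stays at "(None)"):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Control Panel\Desktop]
; Enable the screen saver mechanism and require sign-in on resume
"ScreenSaveActive"="1"
"ScreenSaverIsSecure"="1"
; Wait timeout in seconds; pick display timeout + 1 minute
; (e.g. a 5-minute display timeout -> 360)
"ScreenSaveTimeOut"="360"
```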
  4. @celborn sorry for the late reply. The bottom 2 of those 4 devices are correct, but the top 2 are not. You need the other 2 below them, on 04:00.2 and 04:00.3. This is mine:
  5. I've been attempting to do the same for the 7950X iGPU. I've been able to pass through the Radeon RX 6800 XT and RX 7900 XT GPUs, but I haven't had any luck, or found any specific info, on the Ryzen 7000 iGPUs. I've added the GPU and audio device to a Windows 11 VM and adjusted the XML so they're on the same bus and slot as a multifunction device. If I remote into the VM, I can see the GPU with a code 43 error. I was able to work around this same error on the 7900 XT by using a vBIOS ROM. I tried exporting the 7950X iGPU vBIOS with GPU-Z, but just got an error message: "BIOS reading not supported on this device". I'm not sure if this is an issue with the iGPU or with GPU-Z, and I couldn't find a vBIOS online anywhere. My other thought was that besides the VGA and audio devices, it may require the other three devices on the same bus and slot:
     Encryption controller: Advanced Micro Devices, Inc. [AMD] VanGogh PSP/CCP
     USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b6
     USB controller: Advanced Micro Devices, Inc. [AMD] Device 15b7
     I tried adding the encryption controller, but it just shows up without a driver (and Windows won't let me install the AMD chipset drivers in a VM). I haven't tried the USB controllers yet; the second controller actually has my Unraid flash drive on it, so I'd have to find another port on an unused controller to be able to add it.
  6. I find that this app always requires a "force update" in order to update. Is this because the Docker repository seems to delete older images and only keeps a single latest? Is there any way to avoid this?
  7. I've found that if the NIC reconnects to the network for whatever reason (network outage, Ethernet unplugged, etc.) and receives a different IP address than when Unraid first booted, the mDNS hostname and TLS certificate are not rebound to the new IP address. This is not usually a problem, as I have static DHCP assignments set up on my router, but it can occur if, for example, I'm changing that assignment and reboot the router so it hands out the new IP. I've also seen it when the NIC failed to get an IP on first boot, resulting in a 169.254.x.x address. If I unplug and reconnect the Ethernet cable, when the NIC gets the expected IP, the mDNS hostname (e.g. tower.local) is still bound to the 169.254.x.x address (which fails to load the web interface). If I go to the new IP address manually, I also get a certificate warning, as it's using a self-signed certificate rather than the myunraid.net certificate for the fresh IP address. Rebooting the server is required to resolve this.
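A quick way to script-check for that failure mode from the Unraid console — a small sketch (the interface name eth0 is an assumption) that flags the 169.254.x.x link-local fallback address:

```shell
#!/bin/sh
# Returns success (0) if the given IPv4 address is in the 169.254.0.0/16
# link-local (APIPA) range that hosts fall back to when DHCP fails.
is_link_local() {
    case "$1" in
        169.254.*) return 0 ;;
        *)         return 1 ;;
    esac
}

# Example: check the current address on eth0 (interface name is an assumption)
addr=$(ip -4 -o addr show eth0 2>/dev/null | awk '{print $4}' | cut -d/ -f1)
if is_link_local "${addr:-}"; then
    echo "eth0 has a link-local fallback address: $addr"
fi
```

This could run from cron or the go file to alert before the web interface becomes unreachable under the stale mDNS binding.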
  8. The size of their cards is bonkers now too. For one of my machines, Nvidia cards don't come close to fitting in the case. I also prefer AMD to be able to run macOS VMs (hopefully eventually supported for RX 7000 series).
  9. Come to think of it, I don't think I have. I have seen the progress circle with the Windows 11 logo. That may have been after booting in CSM mode. I think it usually goes straight to the Windows login screen like you said though. I've been using Q35-7.1 myself.
  10. Great to hear. After I got this working, I messed around with it more to see whether there were other ways to get it working without needing the vBIOS, but never found one. And after reverting the changes, it wasn't consistent about working again, even following the same steps (booting into CSM mode first, etc.). But once it works, it seems to keep working; I've restarted the VM multiple times and never had it stop. When it wasn't working (always code 43), I found that booting the VM with only the GPU passed through, and no other passed-through hardware, worked. Whether that was a coincidence or not, I can't say. It's definitely more finicky than the RX 6800 XT was.
  11. I'm using the original vBIOS dump, no edits. Yes, I am binding all 4 devices to vfio. After adding them to the VM, I'm manually editing the XML to put them on the same bus and slot as a multifunction device. Here's the GPU portion of my XML:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/user/domains/GPU ROM/AMD.RX7900XT.022.001.002.008.000001.rom'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0c' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0c' slot='0x00' function='0x2'/>
  </source>
  <alias name='hostdev2'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/>
  </source>
  <alias name='hostdev3'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
</hostdev>
```

Have you been able to boot the VM after booting the host in legacy CSM mode?
  12. GPU Resizable BAR support is coming in Linux kernel 6.1. It'd be great to support this natively in Unraid, whether in the VM GUI, as a setting, or at least via a documented script or how-to.
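Once on a 6.1 kernel, this should at least be scriptable via the new PCI sysfs interface — a hedged sketch, assuming the resourceN_resize attribute added in 6.1, which (per the kernel ABI docs) takes the desired BAR size as a log2 exponent in MB; the PCI address and BAR index below are placeholders:

```shell
#!/bin/sh
# Compute the log2 exponent the sysfs resize attribute expects:
# size in MB must be a power of two (e.g. 8192 MB -> exponent 13).
bar_exp() {
    mb=$1; e=0
    while [ "$mb" -gt 1 ]; do
        mb=$((mb / 2))
        e=$((e + 1))
    done
    echo "$e"
}

exp=$(bar_exp 8192)   # request an 8 GB BAR
echo "exponent for 8192 MB: $exp"

# On a 6.1+ kernel, with the GPU unbound from its driver
# (device address and BAR index are placeholders):
#   echo "$exp" > /sys/bus/pci/devices/0000:0c:00.0/resource0_resize
```

This would need to run before the VM starts (e.g. from the go file or a user script), since the BAR can only be resized while nothing is using the device.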
  13. I got this working! First, I was able to successfully pass through the GPU after switching to legacy CSM boot. I was then able to save my vbios rom. Then after adding the vbios rom to the VM XML and switching back to EFI boot, the GPU passthrough works! I can also confirm the GPU is properly reset when rebooting the VM as well.