
Duglim

Members
  • Posts

    4
  • Joined

  • Last visited


  1. Hi, I made the switch from the old plugin to the new ich777 version. Worked perfectly. I'm still on Unraid 6.12.5 and would like to update to the new 6.12.8. With the old plugin, we had to wait until the plugin was adapted to new Unraid versions. Is it safe to update Unraid, and will the new plugin still work on the Unraid 6.12.8 Linux kernel 6.1.74? Thx, Duglim
  2. Thx, I was able to fix it. I had to restart each container once from the "advanced view" edit screen; that fixed it. Strange. Now autostart/restart is working again.
  3. I recently updated from 6.11.5 to 6.12.2 and noticed an issue/bug with Docker networking:

     - I have several containers set up that do not use their own network but use the network of another container (in my case, they all use the Deluge-VPN container). They are set to "network type = none" with "--net=container:binhex-delugevpn". This setup has been running flawlessly for months.

     - Since the 6.12.2 update, there seems to be an issue: all containers without their own network are shown WITH network/ports in Unraid. They show the same network/ports as the containers they are attached to. Very weird...

     At first, I assumed a simple visualization error in the interface. But if you shut down one of the containers with the duplicated network/ports, the ports are also closed on the containers that actually own them. In my case, if I shut down binhex-radarr, the Unraid UI is not reachable anymore. So it MUST be more than a UI issue.

     I created a screenshot to show the duplication of the ports/networks:

     Please help, Duglim
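     The setup described above can be sketched on the command line roughly as follows. This is a hedged sketch, not the poster's exact configuration: the image names and port numbers are assumptions, only the container names come from the post.

     ```shell
     # The VPN container owns the network namespace; every port that any
     # attached container needs must be published here, on the owner.
     docker run -d --name binhex-delugevpn \
       -p 8112:8112 \
       -p 7878:7878 \
       binhex/arch-delugevpn

     # Dependent containers get "network type = none" in Unraid and join
     # the VPN container's network namespace via an extra parameter:
     docker run -d --name binhex-radarr \
       --net=container:binhex-delugevpn \
       binhex/arch-radarr
     ```

     Because the joining container has no network namespace of its own, its ports only exist through the owner, which is consistent with the observed behavior that stopping one container affects ports that "belong" to another.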
  4. I want to thank you for providing this solution and information in this thread! I had been reading along for a while and fully managed to get iGPU passthrough working yesterday.

     My hardware:
     - Intel Core i5-13600 CPU with Alder Lake-S GT1 iGPU
     - Gigabyte Aorus B760i DDR4 mainboard
     - NO dedicated GPU
     - Unraid 6.11.5

     Here is my journey and the steps I took:

     I started with no SR-IOV option in my BIOS, so I contacted Gigabyte Support and asked them about an SR-IOV-enabled BIOS. Honestly, I assumed my journey would end there, but after 7 days Gigabyte Support sent me a custom BIOS with an SR-IOV option. I'm still impressed by such incredible support, simply wow!

     After flashing the BIOS and enabling SR-IOV, I installed this SR-IOV plugin and 2 VF iGPUs came up: device 2.0 for the host, 2.1 and 2.2 for VMs. I was able to bind the 2 VFs to VFIO without any issue.

     The GPU-Statistics plugin could not show any usage data anymore. I figured out this was due to it using logical iGPU addresses instead of physical ones; the plugin was also only looking for one iGPU. But there is a fork of the plugin by SimonF in the Unraid Apps which does the trick. After installing SimonF's GPU-Statistics plugin, usage data for the host iGPU was available again. Please note that this only worked after binding all VF iGPUs to VFIO: as soon as more than one iGPU is visible to the host, the usage data in GPU-Statistics breaks and only Core Frequency works.

     I edited my existing Windows 11 VM template to add the VFIO iGPU 2.1 as a 2nd VGA device. I did NOT add a VGA BIOS. I booted the VM and installed the Intel GPU drivers; the GPU was detected correctly and was already working. I also set up Remote Desktop and Parsec (host mode with virtual display driver). Since I don't need the VNC driver and virtual adapter, I removed both from the VM XML (I needed to do this in XML view, since editing the VM in simple view mode messed up some PCI devices).

     Last but not least, I also bound the onboard sound card to VFIO and passed it to the VM (again in XML mode), and finally had sound in Parsec.

     So, please keep up this good work and discussion here, it was very helpful and gave me fresh knowledge.

     Here are both XML snippets I added to the VM template; the first hostdev is the iGPU, the second the onboard sound card:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
     </hostdev>

     Greetz, Duglim
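     For reference, the VF creation that the SR-IOV plugin handles can be sketched by hand via sysfs. This is a hedged sketch under assumptions: the PCI address 0000:00:02.0 is the usual location of the Intel iGPU but is not stated in the post, and it requires a kernel/i915 build with SR-IOV support for Alder Lake.

     ```shell
     # Create 2 virtual functions on the iGPU (assumed at 0000:00:02.0).
     # The standard SR-IOV sysfs knob; requires root and SR-IOV support.
     echo 2 > /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs

     # The physical function plus the two VFs (e.g. 00:02.1 and 00:02.2,
     # matching the devices described above) should now be listed:
     lspci -s 00:02
     ```

     The plugin automates this at boot; binding the resulting VFs to VFIO is what makes them assignable to VMs via the hostdev XML above.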