scorcho99 Posted July 23, 2022

My server has an Nvidia card in the primary slot and an old R7 250 in the secondary slot. Both cards pass through and work fine for VMs on 6.9.2. When I updated to 6.10.3, I noticed the Unraid console unexpectedly switched to the secondary R7 250 (confusing, since boot appeared frozen), which didn't happen on 6.9.2. Binding the AMD card to vfio on boot kept the Unraid host console on the primary Nvidia card, but either way, passthrough of the AMD card no longer works: the VM starts, but there is no video output and the CPU used by the VM stays pegged, so I have to force power it off. Rolling back fixes it.

I don't use ROM files on either card, and I boot in CSM mode. All I did to get passthrough working was add video=efifb:off,video=vesafb:off to syslinux, though that was actually for passthrough of the Nvidia GPU in the primary slot. This particular AMD card has never had reset problems for me, even on older versions of Unraid. I'm not sure what changed or how to fix this. I know GPU drivers were integrated into Unraid at some point, probably with 6.10. Are there drivers I can blacklist to get things behaving as they used to?
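For context, the relevant part of my syslinux append line looks roughly like this. The vendor:device ID shown is just an example for an R7 250-class card; check your own with lspci -nn before using it:

```shell
# /boot/syslinux/syslinux.cfg -- the "append" line (one line in the real file).
# video=efifb:off,video=vesafb:off keeps the host framebuffer off the GPU;
# vfio-pci.ids=... is one way to bind a card to vfio-pci at boot.
# 1002:6610 is an EXAMPLE vendor:device ID -- find yours with:
#   lspci -nn | grep -i vga
append vfio-pci.ids=1002:6610 video=efifb:off,video=vesafb:off initrd=/bzroot
```

On 6.9+ you can also bind via the System Devices page (which writes config/vfio-pci.cfg) instead of kernel parameters; I've tried both.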
PeteyBoPetey Posted July 23, 2022

I've had the exact same problem for two months now. Unraid 6.10 wouldn't boot at all; a BIOS update fixed that, but that's when all my GPU problems started: the second GPU became the primary GPU. I was able to get a VM to start and give video output on the primary GPU, but it would stop after a minute and the VM log would say "could not power on, device stuck in D3".

I think I have fixed it. In my BIOS menu, APM (Advanced Power Management) was set to disabled. I suspected the GPU was turning off for some reason, but with APM disabled I assumed it wasn't APM powering it down. After trying everything else I randomly tried enabling APM, and now my primary GPU outputs the BIOS and boot screen. So I think setting APM to enabled was the fix. Just gotta redo my cooling loop to be 100% sure. It's counterintuitive, because you'd think the default state of APM would be to leave everything powered on?
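If anyone else wants to check whether their GPU is actually being powered down, the PCI power state is visible in sysfs. The slot address below is an example; substitute your card's:

```shell
# Example PCI slot address -- find your GPU's with: lspci | grep -i vga
SLOT=0000:04:00.0

# Prints D0 when the device is fully powered,
# D3hot/D3cold when it has been put to sleep.
cat /sys/bus/pci/devices/$SLOT/power_state
```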
scorcho99 Posted July 23, 2022

Unfortunately, my BIOS doesn't seem to have that setting, and my issue presents a little differently: I don't get the "could not power on, device stuck in D3" error in the log. I vaguely recall that when you enable ASPM, it basically has the motherboard handle it; when it's disabled, it isn't necessarily off, it's just handed off to the booted OS to handle instead. I could be wrong on that, I haven't thought much about those settings in a while.

It looks like an OVMF/UEFI VM does actually get video out. From experience, this means the R7 250 was 'touched' by the OS during boot, which always screws up the vbios and breaks passthrough. Since I'm booting in CSM mode, that screws up SeaBIOS VMs but not UEFI ones. I'm not sure why the card is getting touched at all, though, since it's bound to vfio. But I think I have a couple ideas.
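One quick way to check whether the vfio bind actually took, or whether a host driver grabbed the card anyway, is to ask which kernel driver claims the device. The slot address is an example; substitute your own:

```shell
# Example slot address -- find yours with: lspci -nn | grep -i vga
SLOT=0000:04:00.0

# "Kernel driver in use:" should say vfio-pci if the bind worked;
# amdgpu or radeon means the host driver claimed the card at boot.
lspci -nnk -s "${SLOT#0000:}"

# The same information via sysfs:
readlink /sys/bus/pci/devices/$SLOT/driver
```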
scorcho99 Posted July 25, 2022

Well, I'm stumped on this. I tried blacklisting the amdgpu and radeon drivers and binding to vfio, and neither helped. Then I rolled back to 6.9.2, used SpaceInvaderOne's guide to dump the vbios, confirmed it worked in 6.9.2, and updated again. No difference. So it seems like I'm stuck with only OVMF VMs if I want to pass this card through on 6.10.3, but that breaks some other things with the VM, so I think I'm stuck on 6.9.2.
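For anyone following along, the generic sysfs method behind that guide looks roughly like this. The slot address and output filename are examples, and the card should not be in use by a driver or VM when you dump:

```shell
# Example PCI slot for the passthrough GPU -- substitute your own.
SLOT=0000:04:00.0

# Enable reads of the ROM, copy it out to the flash drive, then disable again.
echo 1 > /sys/bus/pci/devices/$SLOT/rom
cat /sys/bus/pci/devices/$SLOT/rom > /boot/vbios-r7250.rom
echo 0 > /sys/bus/pci/devices/$SLOT/rom
```

Nvidia ROMs dumped this way usually also need the header trimmed before use; the raw AMD dump worked as-is for me on 6.9.2.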
SoleInvictus Posted August 7, 2022

That makes three of us! I'm having the exact same issue and ran through the same sequence of troubleshooting steps. 6.9.2 for me until it's rectified.
scorcho99 Posted August 7, 2022

I have since tried 6.10.0 and 6.10.2 (based on reading the release notes, there was a change in the default passthrough method in 6.10.3). Unfortunately, there was no difference. I guess I'm going to remain on 6.9.2.
techhit Posted August 9, 2022

Same experience for me.
sausuke Posted August 9, 2022

I thought upgrading to 6.10.2 had broken my GPU passthrough too, so I just want to comment here that it turned out fine on my Unraid, using the same vbios from 6.9.2 with 2x 3070s. I thought it wasn't booting, but I have dual monitors, and it was actually booting on my second monitor, which has a Raspberry Pi 400 attached (the one I configure Unraid from), so I didn't see it boot on my other monitor. I got to this thread because I want to test Windows 11 22H2, and 6.10 has a profile for that VM. My old Windows 10 VM works fine in 6.10, so I think my problem is specific to the Windows 11 installation and isn't a GPU passthrough issue.
constellation Posted July 30, 2023

Any updates on this? I made a bigger jump, from 6.9.2 to 6.12.3, and I'm getting the exact same issues. GPU passthrough is effectively dead, making my VMs absolutely useless for their purpose.
scorcho99 Posted August 2, 2023

No updates for me, still on 6.9.2. I may test 6.12, but I doubt it will be an improvement. I will also try an Nvidia card in that slot, as I now have a spare, though that's mostly academic. I can probably sort of make the VM work with OVMF, I guess, but I'll lose easy snapshots.
Zeze21 Posted August 4, 2023

I am on this as well, but with the slight advantage that some GPUs work. I have a setup with four GPUs: two GT 1030s, one RTX 2070 Super, and one RTX 3060. The thinking behind this is that one of the 1030s is for my wife's VM, the other 1030 is for Unraid itself (whatever Unraid needs a GPU for), the 2070 is for my VM, and the 3060 (because of its 12 GB) is for a Stable Diffusion Docker.

After quite an odyssey of upgrading and downgrading, I've stuck with Unraid 6.12.3, but the downside is that the 2070 no longer works with my VM, and neither does the 3060. BUT(!) the 1030 does. I am stumped. I mean, it's better than nothing, but it is certainly NOT what I'd prefer.
sublimejackman Posted August 4, 2023

Completely dead in the water here. I didn't notice that the GPU wasn't passing through until after I started adding a new drive, so I'm unable to roll back. My only use case for Unraid requires GPU support within VMs; my server is effectively useless.
Zeze21 Posted August 6, 2023

I have added a bug report for this issue. If others who are also affected want to comment there, I'm sure this issue will get more attention and hopefully a solution!