sturmstar Posted August 6, 2019 (edited)

36 minutes ago, belliash said: "This is strange. I also had this problem, but BIOS F42a with AGESA 1.0.0.3ab resolved it for me."

Did you alter any settings in the BIOS or inside Unraid, such as the VM settings? Do you boot Unraid in UEFI or legacy mode?
belliash (Author) Posted August 6, 2019

After flashing the BIOS, all settings are restored to their default values, so I had to reconfigure everything from scratch. I use legacy mode to boot Linux and UEFI mode with OVMF to boot the Windows VM. Works here as expected.
sturmstar Posted August 6, 2019

1 minute ago, belliash said: "After flashing the BIOS, all settings are restored to their default values, so I had to reconfigure everything from scratch. I use legacy mode to boot Linux and UEFI mode with OVMF to boot the Windows VM. Works here as expected."

Very strange... that is exactly my setup too. Which GPU do you use for booting Unraid, before it gets assigned to a VM? I have to change my primary GPU in the UEFI to the second PCIe slot, because that holds my smaller card (a 1050 Ti). Unraid then starts up with this GPU in legacy mode and auto-starts the first VM with this GPU assigned (a dumped GPU BIOS was necessary). The second GPU, in the first PCIe slot, is a 1070, which I can assign freely without needing a GPU BIOS. This setup works perfectly with BIOS F6. Hmm... I don't get it.
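For readers following along: the setup sturmstar describes (booting from one GPU, then handing it to a VM with a dumped vBIOS) usually requires binding that card to vfio-pci at boot. On Unraid this is commonly done with the `vfio-pci.ids` kernel parameter in `syslinux.cfg`. A minimal sketch, with placeholder device IDs (`10de:1c82`/`10de:0fb9` correspond to a 1050 Ti's GPU and audio functions, but substitute the IDs `lspci -nn` reports on your own system):

```
label Unraid OS
  kernel /bzimage
  append vfio-pci.ids=10de:1c82,10de:0fb9 initrd=/bzroot
```

Both functions of the card must be listed, since devices sharing an IOMMU group have to be passed through together.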
belliash (Author) Posted August 7, 2019 (edited)

I don't use the same GPU for POST and Windows. I have separate cards for Linux and Windows. Have you read the Reddit post? Everything is described there.
helin0x Posted August 8, 2019

The BIOS has now been released for my board; I will give it a bash and report back. I use a single GPU, which is the only difference.
sturmstar Posted August 8, 2019

1 minute ago, helin0x said: "The BIOS has now been released for my board; I will give it a bash and report back. I use a single GPU, which is the only difference."

OK, I'm curious.
guerlando Posted February 18, 2020

On 7/8/2019 at 1:49 AM, Caduceus said: "I recently built an unRaid server with almost this exact setup. Same motherboard, same CPU; I have the 1660 Ti (a Gigabyte 1660 Ti Mini-ITX) in the primary PCIe x16 slot, but an old Radeon 5770 in the second slot. The only caveat is that I need to run the F5 version of the BIOS. I had upgraded to F32 and F40; both gave me serious issues that would not allow Windows VMs to start on those BIOS revisions. I did a full post in the VM Troubleshooting section. I hope this helps. You can see my IOMMU groups here: Post with IOMMU groups"

Hi, I've read your posts about the IOMMU groups, but I need help understanding them. In group 12 you have your Radeon, one USB controller, and Ethernet. In group 13 you have your NVIDIA card and one USB controller. I believe the Radeon is in the x8 slot and the NVIDIA is in the x16, right? In which group is the x1 slot that sits between them? I just need to know whether it is in the same group as the NVIDIA (group 13), because I want to pass the strong GPU to a VM but keep my Wi-Fi card and weak GPU on the host. Is that possible? Also, can you tell me which USB ports are on each controller? If you didn't notice, then no problem. The main thing here is finding out the x1 PCIe slot's group. If you could also tell me what a PCIe GPP Bridge, a PCIe Dummy Host Bridge, a Data Fabric device, etc. are, that would be very helpful. Thank you so much!
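To answer questions like guerlando's on your own hardware, you can enumerate the IOMMU groups directly from sysfs. A minimal sketch, assuming a Linux host with the IOMMU enabled (`amd_iommu=on` or `intel_iommu=on`); `lspci` is used only to make the output readable and is optional:

```shell
#!/bin/bash
# Print every IOMMU group and the PCI devices it contains, so you can
# see which devices must be passed through to a VM together.
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    local group dev desc
    for group in "$base"/*/; do
        [ -d "$group" ] || continue
        group="${group%/}"
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            # lspci gives a human-readable name; fall back to the bare PCI address.
            desc=$(lspci -nns "${dev##*/}" 2>/dev/null)
            echo "  ${desc:-${dev##*/}}"
        done
    done
}

list_iommu_groups "$@"
```

If the x1 slot's devices show up in the same group as the 1070, they would have to move to the VM with it (or stay unused), so checking this output before planning the passthrough layout saves a lot of trial and error.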