[SOLVED] Ryzen - Primary GPU passthrough


gelmi


Hi,

I have a problem with GPU passthrough. I am using Unraid 6.3.5 (trial version).

I have only one GPU, an RX 550. I am trying to pass it through to an Ubuntu VM, but no luck so far. I do not have an additional GPU at home to test with. I have tried the 1st and 2nd PCIe slots (with and without the ACS patch). I am on the newest BIOS, 0902 (ASUS X370-Pro with a Ryzen 1600, 16 GB RAM, RX 550).

The GPU displays the Unraid text console on the monitor when I turn on the PC.

When I create the VM with VNC graphics, I can see it booting from the ISO, but when I choose the RX 550 GPU and start the VM, nothing happens - I can still see the login prompt on the monitor.

Any ideas on how to pass through my only GPU in this case?

 

Info:

Model: Custom
M/B: ASUSTeK COMPUTER INC. - PRIME X370-PRO
CPU: AMD Ryzen 5 1600 Six-Core @ 3200
HVM: Enabled
IOMMU: Enabled
Cache: 576 kB, 3072 kB, 16384 kB
Memory: 16 GB (max. installable capacity 64 GB)
Network: bond0: fault-tolerance (active-backup), mtu 1500 
 eth0: 100 Mb/s, full duplex, mtu 1500
Kernel: Linux 4.9.30-unRAID x86_64

IOMMU:

IOMMU group 0
	[1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
IOMMU group 1
	[1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
IOMMU group 2
	[1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
IOMMU group 3
	[1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
IOMMU group 4
	[1022:1453] 00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
IOMMU group 5
	[1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
IOMMU group 6
	[1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
	[1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1454
	[1022:145a] 29:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
	[1022:1456] 29:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 1456
	[1022:145c] 29:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 145c
IOMMU group 7
	[1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
	[1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1454
	[1022:1455] 2a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
	[1022:7901] 2a:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
	[1022:1457] 2a:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Device 1457
IOMMU group 8
	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 9
	[1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1460
	[1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1461
	[1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1462
	[1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1463
	[1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1464
	[1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1465
	[1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1466
	[1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1467
IOMMU group 10
	[1022:43b9] 03:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43b9 (rev 02)
	[1022:43b5] 03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43b5 (rev 02)
	[1022:43b0] 03:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b0 (rev 02)
	[1022:43b4] 1d:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
	[1022:43b4] 1d:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
	[1022:43b4] 1d:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
	[1022:43b4] 1d:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
	[1022:43b4] 1d:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
	[1022:43b4] 1d:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
	[1b21:1343] 25:00.0 USB controller: ASMedia Technology Inc. Device 1343
	[8086:1539] 26:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 11
	[1002:699f] 28:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Lexa PRO [Radeon RX 550] (rev c7)
	[1002:aae0] 28:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device aae0

 

 

 


@gelmi How were you able to solve it?

 

I use a Ryzen and the Asus X370-Pro as well. The only difference is my GPU: an NVIDIA GTX 1080 Ti.

When I start the VM, I get a black screen on my primary GPU (unRAID 6.4.0-rc).

 

I'd appreciate it if you could let me know how you solved it.


Wow, that was a fast answer, thanks a lot. Really appreciate it! 

 

I'm trying to use Windows 10.  

I tried installing Ubuntu and openSUSE as well, but ran into the same issue.

 

Did it work for you without any special changes in the settings, syslinux configuration, or the VM's XML?


Thanks a lot for your answer. 

 

The differences I had were:

* BIOS was set to: OVMF

* Machine was set to: i440fx-2.7

 

I created a new machine with your settings but got the same problem. As soon as I start the VM, my screen goes black and nothing comes up.

 

You only have one GPU, right? 

Do you also see the Linux terminal on the screen, and when you boot up the VM, does the same screen go black and then show the VM content after that?

 

I wonder why it's working for you with almost the same setup as I have. Something must be different on my side which prevents the graphics card from functioning correctly.

 

When I connect to the Windows VM using a remote connection, I see my graphics card in the Device Manager, but with a warning sign: "The device doesn't work properly."

 

So it is somehow recognized, but it doesn't work as it should.


I have 2 GPUs: the first PCIe slot holds an RX 560 and the second a GTS 450. I can use either of them for Windows and Ubuntu VMs.

Try rebooting the PC and starting the VM one more time. Maybe your GPU suffers from the reset bug, so it can only be initialized in a VM once per boot.

If this does not work, share some more information about your configuration: BIOS version, which PCIe slot you use, and the exact RC version. Did you try to dump the GPU ROM and feed it to the VM configuration?
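
For anyone following along, the usual way to dump a GPU ROM from the Unraid console is via sysfs. A rough sketch, using the RX 550's address 28:00.0 from the IOMMU list above (note: a dump taken while the host console is driving the card can come out corrupted, so a fresh boot helps):

cd /sys/bus/pci/devices/0000:28:00.0
echo 1 > rom          # make the ROM file readable
cat rom > /boot/vbios.rom
echo 0 > rom          # lock it again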


On my Asus Prime X370-Pro, I have updated the BIOS to the latest 3203, and in this BIOS I enable SVM in order to allow virtual machines to run. In the Unraid syslinux.cfg I specify two options for my Ryzen 1600X: rcu_nocbs=0-11 processor.max_cstate=1. With the latest beta of Unraid 6.4, it runs with my NVIDIA 1050 Ti passed through for any OS. For Windows 10, I use OVMF and set Hyper-V to No.
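
For reference, the boot entry in /boot/syslinux/syslinux.cfg would then look roughly like this (a sketch; rcu_nocbs=0-11 covers the 12 threads of a Ryzen 1600X, so adjust the range to your CPU):

label Unraid OS
  menu default
  kernel /bzimage
  append rcu_nocbs=0-11 processor.max_cstate=1 initrd=/bzroot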


@pederm

Thanks a lot for sharing your setup.

Really appreciate it and will try updating my BIOS and the settings right now.

 

Some questions while I'm trying it out:

- Do you have a second GPU?

- Is the NVIDIA 1050 Ti your primary GPU?

- If so, does your screen with the terminal on it go black and then show the Windows 10 VM?

 

@gelmi:

My PC specs:

- Mainboard: Asus Prime x370 PRO (BIOS Version 0902 <= most stable on my system)

- CPU: Ryzen 7 1700X

- GPU 1: NVIDIA GeForce GTX 1080 Ti (PCIe 1) <= Two monitors connected

- GPU 2: NVIDIA GeForce GTX 1080 TI (PCIe 2) <= One monitor connected

- RAM: Corsair Vengeance LPX 64GB (4 x 16GB) DDR4-3000

 

Yes, I already dumped my GPU ROM, using the commands on unRAID as well as GPU-Z in Windows, and wrote it into the XML file. No luck yet.
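
For reference, a dumped ROM is wired in with a rom element inside the GPU's hostdev block in the VM XML, roughly as below (the file path is illustrative; bus 0x0a matches the 0a:00.0 address mentioned later in the thread):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/gtx1080ti.rom'/>
</hostdev>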

 

My second graphics card, on PCIe 2, works. But the primary one, on PCIe 1, won't work at all.

I see the terminal on my screen. As soon as I start the VM, the screen with the terminal goes black and that's it...

 

Really appreciate your help and all your hints!


@pederm That's why it's working for you, I think. You're not using your primary GPU for the VM. And that's exactly what I'm trying to achieve - using my primary GPU for the VM.

 

Do you know if it's possible to use three GPUs with the X370-PRO? 


OK, that is strange. So, let me get this straight:

+ You have two identical GPUs, the first in slot 1 and the second in slot 2?

+ The GPU in slot 1 suffers from the black-screen problem, but the second GPU (identical NVIDIA) in the second slot works?

+ Also, when you switch the cards, the card from slot 2, when you put it into slot 1, does not work, right?

 

If that is correct, maybe try putting the card from slot 1 into slot 3 (only x8) just for testing, so the cards are in slots 2 and 3. Check which of them will not work with the VM. Maybe it is not an issue with PCIe slot 1, but rather that the card cannot be passed to a VM after it was initialized for the host Unraid console?

Do you run Unraid UEFI or EFI boot?

 

Just now, gelmi said:

+ You have two identical GPUs, the first in slot 1 and the second in slot 2?

 

You're exactly right. Both are exactly the same GPU model. 

 

Just now, gelmi said:

+ The GPU in slot 1 suffers from the black-screen problem, but the second GPU (identical NVIDIA) in the second slot works?

+ Also, when you switch the cards, the card from slot 2, when you put it into slot 1, does not work, right?

 

 

Yes, I tried switching them already. The primary GPU (the one in PCIe 1) doesn't work - it doesn't matter which of the two is in there.

The one in the second slot works fine.

 

Just now, gelmi said:

If that is correct, maybe try putting the card from slot 1 into slot 3 (only x8) just for testing, [...]

Maybe it is not an issue with PCIe slot 1, but rather that the card cannot be passed to a VM after it was initialized for the host Unraid console?

 

Sadly I can't use slot 3 because there's not enough space for the GPU. There are cables right underneath the slot and the GPU is too big.

 

That's what I think as well. The card gets initialized by the Unraid console and then can't be passed to the VM.

I wonder if it's possible to run Unraid without it initializing the primary GPU.
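
One approach worth noting here: the host never loads a driver for the card if it is bound to the vfio-pci stub at boot, via the append line in syslinux.cfg. A sketch, using the GTX 1080 Ti's vendor:device IDs as an assumption (verify yours with lspci -nn; the second ID is the card's HDMI audio function):

append vfio-pci.ids=10de:1b06,10de:10ef initrd=/bzroot

With UEFI boot you may also need video=efifb:off on the same line so the console framebuffer does not grab the card.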

 

Just now, gelmi said:

Do you boot Unraid in UEFI or legacy mode?

 

 

UEFI is disabled (by default) in the BIOS.

I looked into the settings and it's still deactivated - so no UEFI. Though I think UEFI might still be used even when disabled; to be honest, I'm not quite sure about this.


I already tried the second solution - that's how I dumped my GPU ROM and edited the XML manually.

 

About the binding/unbinding solution - well, I tried it, but it failed.

echo "0000:0a:00.0" > /sys/bus/pci/devices/0000:0a:00.0/driver/unbind

shows "No such file or directory".

I also tried it under /sys/bus/pci/drivers, but there it says "No such device".

(0a:00.0 is my primary GPU in PCIe slot 1.)

 

I slowly begin to give up. :(

 

I'm on the latest unRAID, 6.4.0-rc14. Set it up today from scratch.
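
For reference, the unbind file only exists while a driver is actually bound to the device, which is why the first command above fails when nothing claims the card. A rough sketch of the usual detach-and-rebind sequence (assuming the vfio-pci module is loaded; the efi-framebuffer line applies to UEFI boots only):

echo 0 > /sys/class/vtconsole/vtcon0/bind                                  # release the text console
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind  # UEFI boots only
echo vfio-pci > /sys/bus/pci/devices/0000:0a:00.0/driver_override          # reserve the GPU for vfio-pci
echo 0000:0a:00.0 > /sys/bus/pci/drivers_probe                             # rebind using the override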


You are a genius!

You, my dear friend, are my hero.

 

I owe you so much! Send me your email address by PM and I'll send you an Amazon voucher, or anything else you use to buy online.

 

Starting only one VM with both GPUs worked. 
Then I stopped the VM, assigned one GPU to another VM, and started it again. And BAM, it worked!

I'll write a start script which does that with an "empty VM" automatically, so I don't have to do it myself - something like the sketch below.
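
A minimal sketch of such a script, assuming libvirt's virsh (which Unraid uses under the hood) and hypothetical VM names "gpu-init" and "main-vm":

#!/bin/bash
# boot a throwaway VM that claims both GPUs, then shut it down so the
# cards come back initialized and free for the real VM
virsh start gpu-init
sleep 30                                             # give the guest time to bring up the GPUs
virsh shutdown gpu-init
until [ "$(virsh domstate gpu-init)" = "shut off" ]; do sleep 2; done
virsh start main-vm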


Glad I could help. A script is a good way to go. The problem is with the (re)initialization of NVIDIA cards: NVIDIA does not like it when consumer GPUs are used for virtualization; they would rather you use their enterprise cards. I still have to restart or sleep/wake the Unraid system in order to reboot the Linux VM that uses my old GTS 450.
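
For anyone hitting NVIDIA's consumer-GPU check in a Windows guest (Code 43 in Device Manager), the common workaround is to hide the hypervisor in the VM's XML, roughly like this (the vendor_id value is an arbitrary string of up to 12 characters):

<features>
  <hyperv>
    <vendor_id state='on' value='none'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>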


Yes, you can run Unraid headless. I have two discrete GPUs, and sometimes one is for the 1st VM and the other is for the 2nd VM; both VMs are running and Unraid still works. The only thing is that after I shut down a VM, I cannot reassign my primary GPU to the Unraid console, but this is due to my GPU - some GPUs can be switched back via script. I need to restart my tower when I want the Unraid console to use the GPU.

