B550? Maybe? Maybe not?


Recommended Posts

I was looking at upgrading my rig to an ASUS ROG Strix B550-F (non-WiFi) with an AMD Ryzen 5 3600X (maybe a 3700X).

I run Plex, Nextcloud and some other containers in Docker, and use an Ubuntu VM as my daily driver.

 

But I've heard people recommend against the B550 for such uses.

 

If I use both NVMe slots, the SATA ports are reduced. But in my case I have two PCIe x1 expansion cards with 8 SATA ports, so that doesn't matter to me.

What does matter is people saying there are problems with passthrough, and since I use it as a daily driver, this might be an issue.

 

Can anyone elaborate on that? Should I buy another motherboard? I've seen people recommend the X570, so maybe that's a possibility?

 

 

Edited by Michelle Bausager
Edit: (maybe a 3700x)
Link to comment
  • 1 month later...

I would also like to know this. Rocking a B550 with a 3600.


Update:

I have now set up an Unraid server with an Asus B550-F Gaming (non-WiFi),

Ryzen 3600 (non-X),

MSI GTX 1080 Ti,

and 16 GB of RAM.

Running beautifully. Got a Windows VM up and running. There are two USB controllers, so it is possible to use VR as well.

 

I followed the remote gaming on Unraid guide, and everything just worked.

 

 

Edited by Dodisbeaver
Answer for question
Link to comment
  • 4 weeks later...
13 hours ago, Big Dan T said:

Just curious, what are your CPU & MB temps on the 3600?

 

D.

Hey!

 

With some stress and VMs running I am at around 70 °C on the CPU. I don't have the temps for the MB yet; I have not found out how to get them to show correctly. Idle is around 30-35 °C.

Edited by Dodisbeaver
update to answer
Link to comment
19 hours ago, Dodisbeaver said:

Hey!

 

With some stress and VMs running I am at around 70 °C on the CPU. I don't have the temps for the MB yet; I have not found out how to get them to show correctly. Idle is around 30-35 °C.

Cheers, mine is idling around 50 °C with a load of Dockers running and a single Windows 10 VM. The motherboard is about the same. I wasn't sure if that was high or not, but most people say that's normal and Ryzen runs hot.

Edited by Big Dan T
  • Like 1
Link to comment
On 2/9/2021 at 11:04 AM, Dodisbeaver said:

I would also like to know this. Rocking a B550 with a 3600.


Update:

I have now set up an Unraid server with an Asus B550-F Gaming (non-WiFi),

Ryzen 3600 (non-X),

MSI GTX 1080 Ti,

and 16 GB of RAM.

Running beautifully. Got a Windows VM up and running. There are two USB controllers, so it is possible to use VR as well.

 

I followed the remote gaming on Unraid guide, and everything just worked.

 

 

When you say it's running beautifully... have you been able to separate the IOMMU groups out so you can individually pass through just what you want?

I.e. what are the chances of running a VM with USB, GPU 1, a SATA SSD off the main board, and an M.2 off the main board all passed through, while Unraid uses GPU 2 and SATA HDDs off the main board as well as off an HBA card in a PCIe slot?

  • Like 1
Link to comment

I am running a Windows VM with a cache pool of two SSDs. I am passing through an extra SSD with games on it via Unassigned Devices. The GPU is passed through as well, but I have yet to test whether the other GPU works when running two VMs. I am also passing through one of the two USB controllers on the MB, and I have an extra USB PCI card passed through as well. I have not tested passing through the NVMe yet, but I don't think that will be a problem. Unraid is running headless with efifb=off in the syslinux config. IOMMU is enabled in the BIOS, and the ACS override is enabled in Unraid. I am actually surprised this worked flawlessly. But it was not without problems: I had a hard time getting the PCI USB card working in the VM.
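For anyone tracing the same setup: the efifb=off tweak mentioned above is a kernel parameter on the append line in /boot/syslinux/syslinux.cfg (on current kernels usually written video=efifb:off), which stops the host console framebuffer from grabbing the GPU you want to pass through. A rough sketch of the file, not a verbatim copy of any particular setup:

```
label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot video=efifb:off
```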

Link to comment

I'm not running a B550 but an X570 board. It's my second X570 board. I've also had a few conversations with B550 users.

From my experience: take a look at different boards and look for the ones that separate the devices into IOMMU groups in a sensible way.

But keep in mind that a BIOS update can change everything in that regard.

 

Also, I don't recommend binding USB controllers to vfio at boot. Just leave them as they are and attach your USB devices as you want. Then take a look at the System Devices page and make sure that all devices needed for VMs are separated from the controller handling the Unraid stick. You may need to switch certain devices to different ports. Finally, add the controller(s) to your VMs and enjoy.

There should be no problem, as the controllers support FLR. :)
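If anyone wants to check their own board's grouping from the terminal rather than the web UI: the System Devices page reads straight from sysfs, so a small script can print the same information. A minimal sketch, assuming only standard Linux sysfs paths (nothing Unraid-specific):

```shell
#!/bin/sh
# List every PCI device grouped by IOMMU group, similar to the
# System Devices page in the Unraid web UI.

# Turn one sysfs device path into an "IOMMU group N: address" line.
format_iommu_entry() {
    path="$1"                        # e.g. /sys/kernel/iommu_groups/22/devices/0000:09:00.3
    group="${path#/sys/kernel/iommu_groups/}"
    group="${group%%/*}"             # keep only the group number
    dev="${path##*/}"                # keep only the PCI address
    printf 'IOMMU group %s: %s\n' "$group" "$dev"
}

# On a live system, walk the real sysfs tree (prints nothing if the
# IOMMU is disabled in the BIOS or on the kernel command line):
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue
    format_iommu_entry "$d"
done | sort -V
```

Run it on the server itself; devices sharing a group number are the ones that travel together for passthrough.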

 

 

Edited by giganode
  • Like 1
Link to comment
23 minutes ago, giganode said:

I'm not running a B550 but an X570 board. It's my second X570 board. I've also had a few conversations with B550 users.

From my experience: take a look at different boards and look for the ones that separate the devices into IOMMU groups in a sensible way.

But keep in mind that a BIOS update can change everything in that regard.

 

Also, I don't recommend binding USB controllers to vfio at boot. Just leave them as they are and attach your USB devices as you want. Then take a look at the System Devices page and make sure that all devices needed for VMs are separated from the controller handling the Unraid stick. You may need to switch certain devices to different ports. Finally, add the controller(s) to your VMs and enjoy.

There should be no problem, as the controllers support FLR. :)

 

 

Do you know why binding USB controllers to vfio is a bad idea? I am very new to Unraid, so I may be too impulsive with doing things.

Link to comment

A recent B550 user I spoke with had problems booting a VM with a USB controller that was bound to vfio at boot. After removing the binding it started right away.

Another aspect: if you bind a PCI device at boot, it becomes completely unavailable to the system. So whenever a situation occurs where you need it, you have to unbind it and reboot the server.
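To make "bound to vfio on boot" concrete: it means handing the device to the vfio-pci driver before anything else can claim it, e.g. via the System Devices checkboxes or by adding its vendor:device ID to the kernel append line. A sketch, using the Fresco Logic controller ID that appears later in this thread purely as an example:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1b73:1100 initrd=/bzroot
```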

 

Edited by giganode
  • Like 1
Link to comment

 

 

 

On 3/11/2021 at 7:05 AM, Dodisbeaver said:

I am running a Windows VM with a cache pool of two SSDs. I am passing through an extra SSD with games on it via Unassigned Devices.

 

When you say you are passing it through... how? If it appears in Unassigned Devices, does that not mean it is NOT passed through? What happens to the drive in the Unassigned Devices list when you start the VM? Does it disappear at the point it gets passed through?

When I checked the IOMMU groups on my B550 Asus ROG Strix E board, the IOMMU groups do not look good. Will explain more below.

 

On 3/11/2021 at 7:05 AM, Dodisbeaver said:

I am also passing through one of the two USB controllers on the MB.

 

I checked my IOMMU groups, and the two controllers I think you speak of are listed like so for me. This is with the latest BIOS from February and with the ACS patch under the VM manager settings set to disabled:

 

IOMMU group 22:[1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

 

You can see that they both fall under the same IOMMU group. I have seen countless threads/posts/videos all saying that if things appear in the same IOMMU group, you have to pass the whole group through. I don't fully understand what people mean by this, but I think it's this: under VM settings, when you select something to pass through from that group, if the one thing you want is, say, a sound card, but the group also contains two GPUs, then you must also select those GPUs under the GPU selection part. I saw one of spaceinvader's videos where he was passing through a GPU, and he said something like "you must pass through the HDMI sound for this GPU as well, as you can see it is in the same IOMMU group". So he went to the sound card part of the VM settings and passed through the sound from the GPU as well, then just disabled it inside Device Manager in the Windows VM and left only the actual sound card he wanted enabled.

Are you suggesting that the above is not required and that you can selectively pick just what you want? Have I misunderstood?
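For what it's worth, this is what that looks like underneath: Unraid's VM manager generates one libvirt <hostdev> entry per PCI function, so a GPU and its HDMI audio end up as two entries with the same bus/slot and different function numbers. A sketch using example addresses, not taken from any particular template in this thread:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- GPU video function, e.g. 06:00.0 -->
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- GPU HDMI audio function, e.g. 06:00.1 -->
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```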

 

On 3/11/2021 at 7:05 AM, Dodisbeaver said:

I have not tested passing through the NVMe yet, but I don't think that will be a problem.

 

The top NVMe slot in my Asus ROG Strix E board is in its own IOMMU group, so that should not be an issue.

 

On 3/11/2021 at 7:05 AM, Dodisbeaver said:

Unraid is running headless with efifb=off in the syslinux config. IOMMU is enabled in the BIOS, and the ACS override is enabled in Unraid. I am actually surprised this worked flawlessly. But it was not without problems: I had a hard time getting the PCI USB card working in the VM.

 

I am still reading up on how to pass a GPU through. It seems complex, with many different settings and tweaks required. When you say you had ACS set to enabled... which setting? There are four:

PCIe ACS override: disabled / downstream / multifunction / both

 

When I tried each one it did separate things a little more, but even on "both" I think it was limited, because some IOMMU groups are still shared. I will post below with all the groupings.
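For reference on those four options: the dropdown just adds the pcie_acs_override kernel parameter to the syslinux append line; "both" combines the downstream and multifunction overrides. A sketch of the resulting append line (your other flags will differ):

```
append pcie_acs_override=downstream,multifunction initrd=/bzroot
```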

 

On 3/11/2021 at 5:07 PM, giganode said:

Also, I don't recommend binding USB controllers to vfio at boot. Just leave them as they are and attach your USB devices as you want.

 

As above, do you not need to pass through the entire IOMMU group?

 

On 3/11/2021 at 5:07 PM, giganode said:

Then take a look at the System Devices page and make sure that all devices needed for VMs are separated from the controller handling the Unraid stick. You may need to switch certain devices to different ports. Finally, add the controller(s) to your VMs and enjoy.

There should be no problem, as the controllers support FLR. :)

 

 

Did you not just contradict yourself? You say not to bind them, but then to bind the entire controller? Which is it? And what is FLR?

 

 

On 3/12/2021 at 9:27 AM, giganode said:

Another aspect: if you bind a PCI device at boot, it becomes completely unavailable to the system. So whenever a situation occurs where you need it, you have to unbind it and reboot the server.

 

On boot of what? The VM or the physical bare-metal Unraid server? I'm slightly confused by that statement. I think you mean that if you bind it to the VM as a passthrough device, it becomes unavailable to Unraid once the VM has booted up. Well... yes, but why would you pass through anything to the VM that you might later want back?

 

On 3/12/2021 at 9:27 AM, giganode said:

Biggest difference:

 

Binding a device at boot results in binding whatever is in the IOMMU group.

 

A PCIe passthrough only passes through the chosen device, not the whole IOMMU group.

 

 

How does one choose to do either one? I see no method to differentiate in the VM manager settings or the VM template settings. I just get a drop-down box with a list of devices I can select, so does that mean I am only doing PCIe passthrough?

 

 

 

Edited by jaybee
Link to comment

The below shows the IOMMU groupings for the Asus B550-E ROG Strix Gaming motherboard.

IOMMU is set to enabled in the BIOS, and the ACS override setting in the Unraid VM settings is set to disabled. Blank lines separate each group. Out of the box, this presents the following possible issues:

 

1: The disks (bolded below) all come under the same IOMMU group. I was expecting the onboard SATA disks (in my case the Samsung SSDs below) to be separate from the HBA card's mechanical spinners. I assumed this meant the VM could not have an individual SSD passed through to it as an entire drive.

 

2: The USB 3.0 and 2.0 controllers (bolded below) both appear under the same IOMMU group. I assumed this meant that I could not separate the Unraid flash drive from the USB ports I would want to pass through to the VM.

 

 

IOMMU group 0:[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 1:[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

 

IOMMU group 2:[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

 

IOMMU group 3:[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 4:[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 5:[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

 

IOMMU group 6:[1022:1483] 00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

 

IOMMU group 7:[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 8:[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 9:[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 10:[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

 

IOMMU group 11:[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

 

IOMMU group 12:[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

 

IOMMU group 13:[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)

[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)

 

IOMMU group 14:[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0

[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1

[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2

[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3

[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4

[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5

[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6

[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7

 

IOMMU group 15:[1987:5012] 01:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

[N:0:1:1] disk Sabrent__1 /dev/nvme0n1 1.02TB

 

IOMMU group 16:[1022:43ee] 02:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43ee

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 001 Device 002: ID 8087:0029 Intel Corp.

Bus 001 Device 003: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller

Bus 001 Device 004: ID 05e3:0610 Genesys Logic, Inc. 4-port hub

Bus 001 Device 005: ID 05e3:0610 Genesys Logic, Inc. 4-port hub

Bus 001 Device 006: ID 0781:5580 SanDisk Corp. SDCZ80 Flash Drive

Bus 001 Device 007: ID 0409:005a NEC Corp. HighSpeed Hub

Bus 001 Device 008: ID 051d:0002 American Power Conversion Uninterruptible Power Supply

Bus 001 Device 009: ID 1a40:0101 Terminus Technology Inc. Hub

Bus 001 Device 010: ID 0557:8021 ATEN International Co., Ltd Hub

Bus 001 Device 011: ID 04b4:0101 Cypress Semiconductor Corp. Keyboard/Hub

Bus 001 Device 013: ID 045e:0040 Microsoft Corp. Wheel Mouse Optical

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

[1022:43eb] 02:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43eb

[1:0:0:0] disk ATA SAMSUNG SSD 830 3B1Q /dev/sdb 128GB

[2:0:0:0] disk ATA SAMSUNG SSD 830 3B1Q /dev/sdc 128GB

[1022:43e9] 02:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43e9

[1022:43ea] 03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

[1022:43ea] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

[1000:0072] 04:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

[7:0:0:0] disk ATA SAMSUNG HD203WI 0002 /dev/sdd 2.00TB

[7:0:1:0] disk ATA Hitachi HDS5C302 A580 /dev/sde 2.00TB

[7:0:2:0] disk ATA Hitachi HDS5C302 A580 /dev/sdf 2.00TB

[7:0:3:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdg 2.00TB

[7:0:4:0] disk ATA ST4000DM000-1F21 CC54 /dev/sdh 4.00TB

[7:0:5:0] disk ATA WDC WD100EZAZ-11 0A83 /dev/sdi 10.0TB

[7:0:6:0] disk ATA WDC WD100EZAZ-11 0A83 /dev/sdj 10.0TB

[8086:15f3] 05:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 02)

 

IOMMU group 17:[10de:2484] 06:00.0 VGA compatible controller: NVIDIA Corporation GA104 [GeForce RTX 3070] (rev a1)

[10de:228b] 06:00.1 Audio device: NVIDIA Corporation GA104 High Definition Audio Controller (rev a1)

 

IOMMU group 18:[10de:1c82] 07:00.0 VGA compatible controller: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] (rev a1)

[10de:0fb9] 07:00.1 Audio device: NVIDIA Corporation GP107GL High Definition Audio Controller (rev a1)

 

IOMMU group 19:[1022:148a] 08:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function

 

IOMMU group 20:[1022:1485] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

 

IOMMU group 21:[1022:1486] 09:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP

 

IOMMU group 22:[1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

 

IOMMU group 23:[1022:1487] 09:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

Edited by jaybee
Link to comment
43 minutes ago, jaybee said:

 

 

 

 

When you say you are passing it through... how? If it appears in Unassigned Devices, does that not mean it is NOT passed through? What happens to the drive in the Unassigned Devices list when you start the VM? Does it disappear at the point it gets passed through?

When I checked the IOMMU groups on my B550 Asus ROG Strix E board, the IOMMU groups do not look good. Will explain more below.

 

 

I checked my IOMMU groups, and the two controllers I think you speak of are listed like so for me. This is with the latest BIOS from February and with the ACS patch under the VM manager settings set to disabled:

 

IOMMU group 22:[1022:149c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

 

You can see that they both fall under the same IOMMU group. I have seen countless threads/posts/videos all saying that if things appear in the same IOMMU group, you have to pass the whole group through. I don't fully understand what people mean by this, but I think it's this: under VM settings, when you select something to pass through from that group, if the one thing you want is, say, a sound card, but the group also contains two GPUs, then you must also select those GPUs under the GPU selection part. I saw one of spaceinvader's videos where he was passing through a GPU, and he said something like "you must pass through the HDMI sound for this GPU as well, as you can see it is in the same IOMMU group". So he went to the sound card part of the VM settings and passed through the sound from the GPU as well, then just disabled it inside Device Manager in the Windows VM and left only the actual sound card he wanted enabled.

Are you suggesting that the above is not required and that you can selectively pick just what you want? Have I misunderstood?

 

 

The top NVMe slot in my Asus ROG Strix E board is in its own IOMMU group, so that should not be an issue.

 

 

I am still reading up on how to pass a GPU through. It seems complex, with many different settings and tweaks required. When you say you had ACS set to enabled... which setting? There are four:

PCIe ACS override: disabled / downstream / multifunction / both

 

When I tried each one it did separate things a little more, but even on "both" I think it was limited, because some IOMMU groups are still shared. I will post below with all the groupings.

 

 

As above, do you not need to pass through the entire IOMMU group?

 

 

Did you not just contradict yourself? You say not to bind them, but then to bind the entire controller? Which is it? And what is FLR?

 

 

 

On boot of what? The VM or the physical bare-metal Unraid server? I'm slightly confused by that statement. I think you mean that if you bind it to the VM as a passthrough device, it becomes unavailable to Unraid once the VM has booted up. Well... yes, but why would you pass through anything to the VM that you might later want back?

 

 

How does one choose to do either one? I see no method to differentiate in the VM manager settings or the VM template settings. I just get a drop-down box with a list of devices I can select, so does that mean I am only doing PCIe passthrough?

 

 

 

Okay, where to start.

 

I am not doing much more than using my server for SMB and a gaming VM.

The SSD that I am "passing" through is an unassigned device, and it does not disappear when using the VM.

It is probably set up the wrong way, but it works. The error logs are mostly clear of errors, and I do not want to touch anything that works.

A recent problem I had (and solved) was with the Fresco USB controller. I think I somehow need to have a display connected to the GPU for it to work.

I do not have a dummy plug, so I use a cheap Raspberry Pi-esque monitor.

 

Below is my IOMMU setup. I am not passing through the SSD in the sense of passing through the SATA controller; I just use the SSD as a secondary drive in the settings of the VM.

 

IOMMU group 0:[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 1:[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

IOMMU group 2:[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

IOMMU group 3:[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 4:[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 5:[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

IOMMU group 6:[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 7:[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 8:[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 9:[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

IOMMU group 10:[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

IOMMU group 11:[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

IOMMU group 12:[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)

[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)

IOMMU group 13:[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0

[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1

[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2

[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3

[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4

[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5

[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6

[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7

IOMMU group 14:[144d:a804] 01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961/SM963

[N:0:2:1] disk Samsung SSD 960 EVO 250GB__1 /dev/nvme0n1 250GB

IOMMU group 15:[1022:43ee] 02:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43ee

Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 001 Device 002: ID 2109:2820 VIA Labs, Inc. USB2.0 Hub

Bus 001 Device 003: ID 0b05:1939 ASUSTek Computer, Inc. AURA LED Controller

Bus 001 Device 004: ID 05e3:0610 Genesys Logic, Inc. 4-port hub

Bus 001 Device 005: ID 0951:1607 Kingston Technology DataTraveler 100

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

Bus 002 Device 002: ID 2109:8820 VIA Labs, Inc. USB3.1 Hub

IOMMU group 16:[1022:43eb] 02:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43eb

[1:0:0:0] disk ATA KINGSTON SA400S3 B1H5 /dev/sdb 480GB

[2:0:0:0] disk ATA ST4000DM004-2CV1 0001 /dev/sdc 4.00TB

[3:0:0:0] disk ATA Samsung SSD 840 DB6Q /dev/sdd 250GB

[4:0:0:0] disk ATA ST2000DM008-2FR1 0001 /dev/sde 2.00TB

[5:0:0:0] disk ATA SAMSUNG HD154UI 1118 /dev/sdf 1.50TB

[6:0:0:0] disk ATA ST1000DM010-2EP1 CC43 /dev/sdg 1.00TB

IOMMU group 17:[1022:43e9] 02:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43e9

IOMMU group 18:[1022:43ea] 03:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

IOMMU group 19:[1022:43ea] 03:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

IOMMU group 20:[1022:43ea] 03:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

IOMMU group 21:[1022:43ea] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

IOMMU group 22:[1022:43ea] 03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

IOMMU group 23:[1022:43ea] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43ea

IOMMU group 24:[10de:13c2] 04:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)

IOMMU group 25:[10de:0fbb] 04:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)

IOMMU group 26:[1b73:1100] 07:00.0 USB controller: Fresco Logic FL1100 USB 3.0 Host Controller (rev 10)

This controller is bound to vfio, connected USB devices are not visible.

IOMMU group 27:[8086:15f3] 09:00.0 Ethernet controller: Intel Corporation Ethernet Controller I225-V (rev 02)

IOMMU group 28:[10de:1b06] 0a:00.0 VGA compatible controller: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] (rev a1)

IOMMU group 29:[10de:10ef] 0a:00.1 Audio device: NVIDIA Corporation GP102 HDMI Audio Controller (rev a1)

IOMMU group 30:[1022:148a] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function

IOMMU group 31:[1022:1485] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

IOMMU group 32:[1022:1486] 0c:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP

IOMMU group 33:[1022:149c] 0c:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

This controller is bound to vfio, connected USB devices are not visible.

IOMMU group 34:[1022:1487] 0c:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
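To illustrate the "secondary drive in the settings of the vm" approach described above: pointing a vdisk slot at the whole device by path makes libvirt attach it as a block-backed disk rather than passing through the SATA controller. The device ID below is made up for the example:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source dev='/dev/disk/by-id/ata-ExampleSSD_SERIAL1234'/>
  <target dev='hdc' bus='sata'/>
</disk>
```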

Link to comment
On 3/17/2021 at 1:41 PM, jaybee said:

As above, do you not need to pass through the entire IOMMU group?

 

Did you not just contradict yourself? You say not to bind them, but then to bind the entire controller? Which is it? And what is FLR?

 

On boot of what? The VM or the physical bare-metal Unraid server? I'm slightly confused by that statement. I think you mean that if you bind it to the VM as a passthrough device, it becomes unavailable to Unraid once the VM has booted up. Well... yes, but why would you pass through anything to the VM that you might later want back?

 

How does one choose to do either one? I see no method to differentiate in the VM manager settings or the VM template settings. I just get a drop-down box with a list of devices I can select, so does that mean I am only doing PCIe passthrough?

 

First of all, sorry for the late answer... I've had a lot to do lately. That also seems to be the reason why I screwed up here.

 

I need to clarify this:

 

I completely mixed something up there, so let me try to sort it out. I will also remove the wrong information from my post above.

 

1. Binding a PCIe device to vfio at boot makes the whole IOMMU group unavailable to the server.

 

2. A passthrough to a VM does not change anything in that regard; the whole group stays unavailable while the VM is running.

 

3. I recommend not binding USB controllers to vfio at boot. Whenever you need something connected to that controller, the server can reach it while the VM is not started.

An example: if I need to access the terminal but don't have a VM running (maybe because of an issue I have to fix to get it working again), I need keyboard input, and in my case that's the one keyboard that is normally permanently attached to my VM. I can imagine other situations as well.

In addition, a B550 user had issues with his controller in a VM when it had been bound to vfio at boot beforehand. That's the main reason I don't recommend it, although it might have been fixed in the meantime. I cannot replicate the issue myself, because I only own X570 boards.

Furthermore, I don't have any device bound to vfio at boot. If a device fails, the BIOS gets updated, or whatever else happens, a different device may end up bound to vfio at boot.

 

4. FLR is Function Level Reset: a reset that affects only a single function of a PCIe device, without resetting the entire device. Its implementation is not required by the PCIe specification, but it is very helpful and often makes PCIe devices in VM instances just work right out of the box.

The problems we had with the Navi series were related to the GPU's sound card, which lacks that feature. Gladly, AMD fixed that with Big Navi. :)

In System Devices you can check which devices do or don't have this feature.
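You can also check for FLR from the terminal: `lspci -vv` prints "FLReset+" in a function's capability listing when FLR is supported and "FLReset-" when it is not. A small sketch that just scans that output (the device address is an example from this thread):

```shell
#!/bin/sh
# Detect Function Level Reset support in `lspci -vv` output.
has_flr() {
    # Reads `lspci -vv` output on stdin; succeeds if FLReset+ appears.
    grep -q 'FLReset+'
}

# Live usage (0c:00.3 is just an example address):
#   lspci -vv -s 0c:00.3 | has_flr && echo "FLR supported"
```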

Edited by giganode
  • Like 1
Link to comment
