Storage Configuration Help



Hi guys, looking for some advice on how to set up my new server. Currently I am running 3x 12TB + 1x 4TB IronWolf drives for storage, 1x 12TB IronWolf for parity, and 1x Samsung 970 Evo NVMe for cache. I have some new parts on the way and I'm not too sure how I should be setting things up.

So the new hardware I've ordered is:

X399 Aorus Xtreme

1900x Threadripper

3x Corsair MP510 480GB

 

I also have space in my case for 4x 2.5" SSDs or 2.5" HDDs.

 

Would I be best setting up all 3 NVMe drives as cache, or can I set them up to pass through the raw drive to a VM to install Windows/Ubuntu/OSX on?

And then there's the space I have for the 2.5" drives: is there any point putting SSDs in there if I'm already using the NVMe as cache? Are 2.5" mechanical drives going to slow down my array?

 

I will be running my parity drive(s) and the 2.5" drives from the motherboard's SATA, with the main array being run from an HBA, if that makes any difference.

 

 

4 minutes ago, Squid said:

Cheers buddy, I had read about the RAM issues so opted for relatively slow RAM ("VENGEANCE® RGB PRO 16GB (2 x 8GB) DDR4 DRAM 3000MHz C15"), although this isn't on the list of compatible RAM so I may have to return it if I have stability problems. Didn't know about the "Power Supply Idle Control" though, good looking out buddy.

2 minutes ago, testdasi said:

Immediate problem: 3x Corsair + 970 Evo = 4 NVMe M.2. There are only 3 M.2 slots on the mobo.

Yeah, that would be a problem lol. I am repurposing the old Unraid server for my daughter to use for school work etc., so she gets to keep the 970 Evo with Windoze (yuk).

2 hours ago, TeCH-Guruz said:

Yeah, that would be a problem lol. I am repurposing the old Unraid server for my daughter to use for school work etc., so she gets to keep the 970 Evo with Windoze (yuk).

First and foremost there's a bug with multi-drive btrfs cache pool that causes excessive CPU usage under load. Not sure if the Corsair is affected but that's something to keep in mind.

 

Even without the bug, you don't need to have all 3 NVMe in the cache pool. Most users don't really need a RAID cache pool (which is what a multi-drive cache pool gives you). 480GB is more than enough for cache (e.g. docker image, libvirt and a few vdisks), so you can run a single-drive cache pool.

 

Then if you have heavy write activity (e.g. download temp), you can mount the other NVMe unassigned and leave it empty most of the time, except when it's being written to. That would increase its lifespan. Some may say it's a waste to have an NVMe for this, but the flip side is it saves you SATA ports (which you would presumably need for the slow HDDs in the array), and NVMe is less likely to hang your system under heavy IO compared to SATA (due to NVMe being designed from the ground up to support parallelism).
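For anyone wanting to try this, a minimal sketch of what the scratch-disk idea looks like from the command line (the device name and mount point below are placeholders, and on Unraid you would normally do this through the Unassigned Devices plugin rather than by hand):

# Format the spare NVMe with XFS - this DESTROYS anything already on it,
# so triple-check the device name first
mkfs.xfs -f /dev/nvme1n1

# Mount it outside the array as a scratch / download-temp disk
mkdir -p /mnt/disks/scratch
mount /dev/nvme1n1 /mnt/disks/scratch

Then point the download client's incomplete/temp folder at /mnt/disks/scratch and move finished files onto the array from there.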

 

The remaining 480GB can be passed through as PCIe device to your main VM for maximum performance.

Note that passing through means exclusive use by the VM, i.e. you won't be able to share it among multiple VMs at the same time (and certainly not use it as cache).

The caveat is I'm not sure if the Corsair Phison controller would be happy with passing through as PCIe device. The only issues I know of are with SM2263 controller and Intel 660p. I think the Phison problem has been resolved but don't have 1 to test.

 

How big is the 970 Evo? Have you considered using it in the Unraid server e.g. pass it through to the VM (because I know for sure 970 Evo can be passed through).

 

With regard to 2.5" HDDs, they will be slower, but not because the 2.5" form factor is inherently slower. There are fast 2.5" HDDs, but most are 5400rpm, and the highest-capacity ones (e.g. the 5TB Seagate BarraCuda) are SMR, so it's a double whammy.

Note that you need to check your case support for thick (10mm and 15mm) 2.5" drives. Many cases are designed to support only 7mm thickness as that's the common SSD thickness.

39 minutes ago, testdasi said:

First and foremost there's a bug with multi-drive btrfs cache pool that causes excessive CPU usage under load. Not sure if the Corsair is affected but that's something to keep in mind.

 

I will have to have a read about this and see what I can find out; good to know though.

39 minutes ago, testdasi said:

Even without the bug, you don't need to have all 3 NVMe in the cache pool. Most users don't really need a RAID cache pool (which is what a multi-drive cache pool gives you). 480GB is more than enough for cache (e.g. docker image, libvirt and a few vdisks), so you can run a single-drive cache pool.

 

This is how I am currently using the 970 Evo

39 minutes ago, testdasi said:

Then if you have heavy write activity (e.g. download temp), you can mount the other NVMe unassigned and leave it empty most of the time, except when it's being written to. That would increase its lifespan. Some may say it's a waste to have an NVMe for this, but the flip side is it saves you SATA ports (which you would presumably need for the slow HDDs in the array), and NVMe is less likely to hang your system under heavy IO compared to SATA (due to NVMe being designed from the ground up to support parallelism).

 

 

I'm not entirely sure I follow what you're suggesting here.

 

39 minutes ago, testdasi said:

The remaining 480GB can be passed through as PCIe device to your main VM for maximum performance.

Note that passing through means exclusive use by the VM, i.e. you won't be able to share it among multiple VMs at the same time (and certainly not use it as cache).

The caveat is I'm not sure if the Corsair Phison controller would be happy with passing through as PCIe device. The only issues I know of are with SM2263 controller and Intel 660p. I think the Phison problem has been resolved but don't have 1 to test.

 

From what I've seen, the board I'm getting has pretty good IOMMU groups, so I should be able to pass 2 of the NVMes through without issue. I mean, I've got no clue if the Corsair drives are compatible, but it's worth a shot. Worst case, there's no real issue keeping the 970 Evo and putting one of the Corsair NVMes in the old setup. The 970 Evo is 500GB. The only reason I went for the Corsair drives is because they are seriously cheap, like £38 each; worst case I'm sure I can find some use for them.

 

39 minutes ago, testdasi said:

With regard to 2.5" HDDs, they will be slower, but not because the 2.5" form factor is inherently slower. There are fast 2.5" HDDs, but most are 5400rpm, and the highest-capacity ones (e.g. the 5TB Seagate BarraCuda) are SMR, so it's a double whammy.

Note that you need to check your case support for thick (10mm and 15mm) 2.5" drives. Many cases are designed to support only 7mm thickness as that's the common SSD thickness.

 

Plenty of space as far as thickness goes. The case I'm using is the Thermaltake Tower 900, which is huge and really meant for water-cooling, but I'm filling the rear with hard drives instead of radiators. I will have space for 20x 3.5" drives in the rear + 2x 3.5" parity drives in the front + 4x 2.5" in the front, so I'm not really worried about storage capacity, I just couldn't think of a use case for 4x SSDs. From what I've seen of the IOMMU groups I might struggle with passing one of the NVMe drives through to a VM, so at least one of them will have to be cache.

 

Another idea: could I not use 1 NVMe for a Windows VM, 1 NVMe for an Ubuntu VM, and 1 NVMe in my main array with appdata/isos/system on it, then set up a cache array of 4x SSDs?

14 hours ago, TeCH-Guruz said:

I'm not entirely sure I follow what you're suggesting here.

From what I've seen, the board I'm getting has pretty good IOMMU groups, so I should be able to pass 2 of the NVMes through without issue. I mean, I've got no clue if the Corsair drives are compatible, but it's worth a shot. Worst case, there's no real issue keeping the 970 Evo and putting one of the Corsair NVMes in the old setup. The 970 Evo is 500GB. The only reason I went for the Corsair drives is because they are seriously cheap, like £38 each; worst case I'm sure I can find some use for them.

Plenty of space as far as thickness goes. The case I'm using is the Thermaltake Tower 900, which is huge and really meant for water-cooling, but I'm filling the rear with hard drives instead of radiators. I will have space for 20x 3.5" drives in the rear + 2x 3.5" parity drives in the front + 4x 2.5" in the front, so I'm not really worried about storage capacity, I just couldn't think of a use case for 4x SSDs. From what I've seen of the IOMMU groups I might struggle with passing one of the NVMe drives through to a VM, so at least one of them will have to be cache.

Another idea: could I not use 1 NVMe for a Windows VM, 1 NVMe for an Ubuntu VM, and 1 NVMe in my main array with appdata/isos/system on it, then set up a cache array of 4x SSDs?

If your mobo IOMMU group is like mine (likely) then IIRC the short M.2 slot, bottom (4th) PCIe slot, x1 slot and the various LAN adapters are in the same group.

The other 2 M.2 slots and the 2nd PCIe slot are in the same group.

Each of the remaining PCIe slots is in its own group.

 

In short, without ACS Override, you can only pass through 2 M.2 to the same VM.

If you want to pass each M.2 to a separate VM then you need ACS Override multifunction, which isn't a bad thing - all the ACS Override security concerns are irrelevant to home users.
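For reference, a sketch of what that looks like under the hood: Unraid's PCIe ACS override setting (Settings -> VM Manager) adds a kernel option to the append line in the flash drive's syslinux.cfg, roughly along these lines (the exact line will differ per system):

# /boot/syslinux/syslinux.cfg - default boot entry
# "downstream,multifunction" corresponds to the "Both" option in the GUI
append pcie_acs_override=downstream,multifunction initrd=/bzroot

A reboot is needed for it to take effect, after which Tools -> System Devices should show the split-up groups.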

 

4x 2.5" can be used for plenty of things (especially if they can accommodate 15mm thick drives). For example

  • SATA SSD for SSD in array or even a pseudo array using mergerfs to pool (see the example mount below). The mergerfs option allows trim.
  • 2.5" form factor U.2 SSD (e.g. Optane and mostly enterprise class SSD). They are relatively cheaper on the used market due to a lot lower demand.
  • 5TB Seagate Barracuda mounted as unassigned (and pooled using mergerfs if multiple of them) for slow storage.

I have run all 3 configs at various points. The key is whether you have a need for it or not.
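As a rough illustration of the mergerfs option (the paths and options here are just an example, not a recommendation):

# Pool two SSDs mounted at /mnt/disks/ssd1 and /mnt/disks/ssd2 into one mount point.
# category.create=mfs sends new files to whichever branch has the most free space.
mkdir -p /mnt/ssdpool
mergerfs -o defaults,allow_other,category.create=mfs /mnt/disks/ssd1:/mnt/disks/ssd2 /mnt/ssdpool

# Each SSD keeps its own filesystem, which is why TRIM still works per branch:
fstrim -v /mnt/disks/ssd1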

 

A cache pool of 4x SSDs will have to run RAID-0, which I generally don't recommend, or RAID-10, which wastes 50% of the space.
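For what it's worth, the profile of a multi-drive btrfs pool can be checked and converted with the standard btrfs tools, e.g. (sketch only, assuming the pool is mounted at /mnt/cache):

# Show the current data/metadata profiles of the pool
btrfs filesystem df /mnt/cache

# Convert data and metadata to RAID10 (rebalances the pool, can take a while)
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache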

4 minutes ago, testdasi said:

If your mobo IOMMU group is like mine (likely) then IIRC the short M.2 slot, bottom (4th) PCIe slot, x1 slot and the various LAN adapters are in the same group.

The other 2 M.2 slots and the 2nd PCIe slot are in the same group.

Each of the remaining PCIe slots is in its own group.

 

In short, without ACS Override, you can only pass through 2 M.2 to the same VM.

If you want to pass each M.2 to a separate VM then you need ACS Override multifunction, which isn't a bad thing - all the ACS Override security concerns are irrelevant to home users.

Looking at info I've found online it does seem our motherboards are very similar in IOMMU groupings.

"The only issues are the NICs, Front Panel USB, 1 nvme and PCIe x1 are all grouped with Group 0.

2 other nvmes are grouped together but separate from everything else."

 

4 minutes ago, testdasi said:

4x 2.5" can be used for plenty of things (especially if they can accommodate 15mm thick drives). For example

  • SATA SSD for SSD in array or even a pseudo array using mergerfs to pool. The mergerfs option allows trim.
  • 2.5" form factor U.2 SSD (e.g. Optane and mostly enterprise class SSD). They are relatively cheaper on the used market due to a lot lower demand.
  • 5TB Seagate Barracuda mounted as unassigned (and pooled using mergerfs if multiple of them) for slow storage.

I have run all 3 configs at various points. The key is whether you have a need for it or not.

 

A cache pool of 4x SSDs will have to run RAID-0, which I generally don't recommend, or RAID-10, which wastes 50% of the space.

I will have to look into mergerfs; looks like I have some more reading to do. I don't remember seeing any U.2 connectors on the motherboard, although I didn't really look into that at all, so I could have missed it.

 

To be honest I probably don't "need" any of them, since I don't really do much with the server. I have a couple of VMs that I fire up on occasion (Windoze/Ubuntu/OSX). Other than that it's pretty much just used for media: Plex/Transmission/Sonarr/Radarr/Lidarr, the usual suspects.

 

On 4/10/2020 at 9:43 AM, testdasi said:

If your mobo IOMMU group is like mine (likely) then IIRC the short M.2 slot, bottom (4th) PCIe slot, x1 slot and the various LAN adapters are in the same group.

The other 2 M.2 slots and the 2nd PCIe slot are in the same group.

Each of the remaining PCIe slots is in its own group.

 

In short, without ACS Override, you can only pass through 2 M.2 to the same VM.

If you want to pass each M.2 to a separate VM then you need ACS Override multifunction, which isn't a bad thing - all the ACS Override security concerns are irrelevant to home users.

 

4x 2.5" can be used for plenty of things (especially if they can accommodate 15mm thick drives). For example

  • SATA SSD for SSD in array or even a pseudo array using mergerfs to pool. The mergerfs option allows trim.
  • 2.5" form factor U.2 SSD (e.g. Optane and mostly enterprise class SSD). They are relatively cheaper on the used market due to a lot lower demand.
  • 5TB Seagate Barracuda mounted as unassigned (and pooled using mergerfs if multiple of them) for slow storage.

I have run all 3 configs at various points. The key is whether you have a need for it or not.

 

A cache pool of 4x SSDs will have to run RAID-0, which I generally don't recommend, or RAID-10, which wastes 50% of the space.

 

Do you happen to have a link to more info about ACS Overrides? I can't seem to find it anywhere on here, but I'm sure I've seen it in the past since I had to use overrides on my old motherboard. On another note, my IOMMU groups look great but I have run into a problem with the NVMe passthrough. Any ideas how I can work out which one is which?

 

IOMMU group 15:[1987:5012] 09:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

IOMMU group 34:[1987:5012] 41:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

IOMMU group 35:[1987:5012] 42:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

 

They all look exactly the same. I currently have one set as the cache drive, and there is a bare metal Windows install on one of the others that I would like to pass through to a VM. Not sure if I should use the 3rd drive for cache also, or if I should use it for just VMs, thinking Ubuntu + OSX.
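One way to tell the identical-looking controllers apart is via sysfs, something along these lines (the nvme0/nvme1/nvme2 numbering is just an example and may differ on your box):

# Each /sys/class/nvme entry is a symlink into the PCI tree, so this shows
# which PCI address (09:00.0 / 41:00.0 / 42:00.0) each nvmeX controller sits on
ls -l /sys/class/nvme

# The serial number then ties a controller to a physical drive / its contents
cat /sys/class/nvme/nvme0/serial
cat /sys/class/nvme/nvme0/model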

 

 

19 minutes ago, testdasi said:

The ACS Override setting is under Settings -> VM Manager.

 

If I have to guess, 09:00.0 is on the 2280 slot, 41:00.0 is on the top 22110 slot and 42:00.0 is on the bottom 22110 slot.

Cheers buddy, that makes sense. I've currently got VMs turned off while I repopulate my cache drive. I'll give it a go at stubbing them one by one, starting with your suggestion.
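Since all three controllers share the same vendor:device ID (1987:5012), stubbing by ID would grab every one of them at once, so the binding has to be done per PCI address. As a generic Linux sketch of the mechanism (not Unraid's own boot-time method, and assuming 41:00.0 is the drive you want to hand to the VM):

# Detach the chosen controller from the nvme driver and give it to vfio-pci.
# Only do this to a drive Unraid itself is not using (i.e. not the cache drive).
modprobe vfio-pci
echo vfio-pci > /sys/bus/pci/devices/0000:41:00.0/driver_override
echo 0000:41:00.0 > /sys/bus/pci/drivers/nvme/unbind
echo 0000:41:00.0 > /sys/bus/pci/drivers_probe

On Unraid the persistent way to do this is through the boot configuration rather than by hand, so treat the above purely as an illustration of what "stubbing" a single device means.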

 

Would be great if I could pass through the RGB for my RAM; any ideas which devices could be the RGB controllers? I've currently got 8x 8GB installed. Here's a look at my IOMMU groupings, currently with no ACS overrides on.

 

IOMMU group 0:[1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 1:[1022:1453] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

IOMMU group 2:[1022:1453] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

IOMMU group 3:[1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

IOMMU group 4:[1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 5:[1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 6:[1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 7:[1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 8:[1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B

IOMMU group 9:[1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 10:[1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B

IOMMU group 11:[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)

[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)

IOMMU group 12:[1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0

[1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1

[1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2

[1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3

[1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4

[1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5

[1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6

[1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7

IOMMU group 13:[1022:1460] 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0

[1022:1461] 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1

[1022:1462] 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2

[1022:1463] 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3

[1022:1464] 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4

[1022:1465] 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5

[1022:1466] 00:19.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6

[1022:1467] 00:19.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7

IOMMU group 14:[1022:43ba] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02)

[1022:43b6] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset SATA Controller (rev 02)

[1022:43b1] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset PCIe Bridge (rev 02)

[1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)

[1022:43b4] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)

[1022:43b4] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)

[1022:43b4] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)

[1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)

[1022:43b4] 02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)

[8086:1533] 03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

[8086:24fd] 04:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)

[8086:1533] 05:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

[1d6a:d107] 07:00.0 Ethernet controller: Aquantia Corp. AQC107 NBase-T/IEEE 802.3bz Ethernet Controller [AQtion] (rev 02)

[1b21:2142] 08:00.0 USB controller: ASMedia Technology Inc. ASM2142 USB 3.1 Host Controller

IOMMU group 15:[1987:5012] 09:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

IOMMU group 16:[1000:0087] 0a:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)

IOMMU group 17:[1022:145a] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function

IOMMU group 18:[1022:1456] 0b:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor

IOMMU group 19:[1022:145c] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller

IOMMU group 20:[1022:1455] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function

IOMMU group 21:[1022:7901] 0c:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

IOMMU group 22:[1022:1457] 0c:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller

IOMMU group 23:[1022:1452] 40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 24:[1022:1453] 40:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

IOMMU group 25:[1022:1453] 40:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

IOMMU group 26:[1022:1453] 40:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge

IOMMU group 27:[1022:1452] 40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 28:[1022:1452] 40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 29:[1022:1452] 40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 30:[1022:1452] 40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 31:[1022:1454] 40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B

IOMMU group 32:[1022:1452] 40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge

IOMMU group 33:[1022:1454] 40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B

IOMMU group 34:[1987:5012] 41:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

IOMMU group 35:[1987:5012] 42:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)

IOMMU group 36:[1002:67ff] 43:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Baffin [Radeon RX 550 640SP / RX 560/560X] (rev ff)

[1002:aae0] 43:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Baffin HDMI/DP Audio [Radeon RX 550 640SP / RX 560/560X]

IOMMU group 37:[1022:145a] 44:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function

IOMMU group 38:[1022:1456] 44:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor

IOMMU group 39:[1022:145c] 44:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller

IOMMU group 40:[1022:1455] 45:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function

IOMMU group 41:[1022:7901] 45:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

  • 2 weeks later...
On 4/9/2020 at 6:22 PM, testdasi said:

The caveat is I'm not sure if the Corsair Phison controller would be happy with passing through as PCIe device. The only issues I know of are with SM2263 controller and Intel 660p. I think the Phison problem has been resolved but don't have 1 to test.

Just want to confirm for anyone reading this: the Corsair MP510 can be passed through to a VM. I currently have a bare metal Windows install on one of mine passed through to a Windows VM.
 

At the minute I have 1x MP510 as cache, 1x with the Windows bare metal install passed through, and I'm not sure what to do with the other one.
