12-Core Ryzen / Asrock X570D4U / 35W TDP / Low Noise



After starting to play around with UnRaid a couple of weeks ago I decided to build a proper system. I want to share build progress and key learnings here.

 

 

[Build photo]

Key requirements have been:

  • AMD system
  • Plenty of CPU cores
  • Low Wattage
  • ECC Memory
  • IPMI
  • Good cooling since the system sits in a warm closet
  • Prosumer build quality

 

Config: Runs 24/7 and has been rock stable since day 1.

  • UnRaid OS: 6.10 RC1
  • Case: Fractal Design Define 7
  • PSU: Be Quiet! Straight Power 550W
  • Board: ASRock Rack X570D4U w/ BIOS 1.20 (latest version as of 2021/10)
  • CPU: Ryzen 9 3900 (65W, PN: 100-000000070) locked to 35W TDP through a BIOS setting; the CPU was difficult to source since it is meant for OEMs only.
  • Cooler: Noctua NH-L12S
  • Case Fans: 5x Arctic P14 PWM - noise level is close to zero / not noticeable
  • Memory: 64 GB ECC (2x 32 GB) Kingston KSM32ED8/32ME @ 3200 MHz (per memory QVL)
  • Data disks: 3x 4TB WD40EFRX + 1x 4TB WD40EFRX for parity (all same disks, same size)
  • Cache 0: 2x 512GB Transcend MTE220S NVMe SSDs in RAID 1
  • Cache 1: 4x 960GB Corsair MP510 NVMe SSDs in RAID 10, set up with an ASUS Hyper M.2 card in the PCIe x16 slot (BIOS PCIe bifurcation config: 4x4x4x4x)

 

Todos:

  • Replace the 4 SATA cables with Corsair Premium Sleeved 30cm SATA cables
  • Eventually install an AIO water cooler
  • Figure out the dual-channel memory setting (was running single-channel at first). That's done.
  • Configure the memory for 3200 MHz. Done.
  • Eventually install a 40mm PWM cooler for the X570. Update: After a few weeks of 24/7 uptime this seems to be unnecessary since the temps of the X570 settled at 68-70°.
  • Get the IPMI Fan control plugin working

 

Temperatures (in degrees Celsius) / Throughput:

  • CPU @ 35W: 38°-41° basic usage (Docker / VMs) / 51°-60° under load
  • CPU @ 65W: 78-80° under load (this pushes the fans to 1300-1500 RPM, which lowers the X570 temps to 65°)
  • Disks: 28°-34° under load
  • SSDs: 33°-38° under load
  • Mainboard: 50° on average
  • X570: 67°-72° during normal operations, 76° during parity check
  • Fan config: 2x front (air intake), 1x bottom (air intake), 1x rear & 1x top (air out); 800-1000 RPM
  • Network throughput: 1 Gbit LAN - read speed 1 Gbit / write speed 550-600 Mbit max. (limited by the Unraid SMB implementation?). Write tests were done directly to shares. So far this meets expectations. Final config: 2x 1 Gbit bond attached to a TP-Link TL-SG108E.

 

Learnings from the build process:

  • Finding the 65W version of the Ryzen 9 3900 CPU was difficult; finally found a shop in Latvia where I ordered it. Some shops in Japan sell these too.
  • The case / board combination requires an ATX cable with a minimum length of 600mm
  • IPMI takes up to 3 mins after power disconnect to become available
  • The BIOS does not show more than 2 of the M.2 SSDs connected to the ASUS M.2 card in the x16 slot. However, Unraid has no problem seeing all of them.
  • Mounting the CPU before mounting the board was a good decision; I should have also installed the ATX and 8-pin cables on the board before mounting it, since installing the two cables on the mounted board was a bit tricky
  • Decided to go with the Noctua top blower to allow airflow for the components around the CPU socket; seems to work well so far
  • Picked the case primarily because it allows great airflow for the HDDs and a clean cable setup
  • The front fans may require PWM extension cables for a proper cable setup, depending on where on the board the fan connectors are located
  • The X570 runs hot; however, with a closed case the airflow seems to be decent (vs. an open case) and temps settled at 67°-68°
  • Removed the fan from the ASUS M.2 card; learned later that it has a fan switch too. Passive cooling seems to work for the 4 SSDs
  • PCIe bifurcation works well for the x16 slot; so far no trouble with the 4x SSD config
  • Slotting (& testing) the two RAM modules should be done before the board is mounted, since any changes to the RAM slots (or just swapping modules in and out) are a real hassle: the slots can only be opened on one side (looking down at the board, the left side towards the external connectors) and the modules have to be pushed rather hard to click in.
  • IPMI works well but still misses some data in the system inventory. Also, the password can only have a max. length of 16 bytes; I used an online generator to meet that (see the short snippet after this list). Used a 32-char password at first and locked the account; had to unlock it with the second default IPMI user (superuser)
    • ASRock confirmed the missing data in the IPMI system inventory and suggested refreshing the BMC, which I haven't done yet.
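For the 16-byte limit, any generator works; here is a minimal sketch of one way to do it in Python (purely illustrative, not the generator I actually used):

import secrets
import string

# ASCII letters and digits only, so 16 characters == 16 bytes
ALPHABET = string.ascii_letters + string.digits
ipmi_password = "".join(secrets.choice(ALPHABET) for _ in range(16))
print(ipmi_password)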

 

Performance:

  • With the CPU @ 35W the system performs well for day-to-day tasks, though it feels like it could be a bit faster here and there. Nothing serious. VMs are not as fluid as expected. The system is ultra silent.
  • With the CPU @ 65W the system is blazing fast, especially for VMs and Docker tasks such as media encoding. VM performance is awesome, and a Win10 VM over RDP on a MacBook feels 99% like a native desktop. From my view, app performance in the VM is superior to typical laptops, given the speed of the cache drive the VM sits on and the 12-core CPU. Fans are noticeable but not noisy.
  • 45W Eco Mode seems to be the sweet spot when comparing performance vs. wattage vs. cost.

 

Transcoding of a 1.7GB 4K .mov file using a Handbrake container:

  • 65W config - 28 FPS / 3 mins 30 sec - 188W max.
  • 45W config (called Eco Mode in the BIOS) - 25 FPS / 3 mins 45 sec - 125W max.
  • 35W config - 4 FPS / 25 mins - 79W max.

 

Power consumption:

 

  • Off (IPMI on) - 4W
  • Boot - 88W
  • BIOS - 77-87W
  • Unraid running & Eco Mode (can be set in the BIOS) - 48W
  • Unraid running & TDP limited to 35W - 47W
  • Parity check with CPU locked to 35W - 78W

 

Without any power-related adjustments and the CPU running at stock 65W, the system consumes:

 

  • 80W during boot
  • 50 - 60W during normal operations e.g. docker starts / restarts
  • 84 - 88W during parity check and array start up (with all services starting up too)
  • 184 - 188W during full load when transcoding a 4K video

 

CPU temps at full load went up to 86° Celsius.

 

Costs:

 

If I did the math right, the 35W config has lower peak power consumption, but since calculations take longer, the costs (€/$) are higher compared to the 65W config. In this case 0.3 euro cents (188W over 3.5 minutes) vs. 2.3 euro cents (78W over 25 minutes). So one might look for the sweet spot in the middle :)
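For reference, the calculation is just energy (kWh) times the electricity price; a minimal sketch, assuming a tariff of 0.30 €/kWh (the exact cent values depend on the tariff you plug in):

# Energy cost of one transcode run, assuming a tariff of 0.30 EUR/kWh (hypothetical value)
PRICE_EUR_PER_KWH = 0.30

def run_cost_cents(avg_watts, minutes, price=PRICE_EUR_PER_KWH):
    kwh = avg_watts * (minutes / 60) / 1000   # average draw over the run's duration -> kWh
    return kwh * price * 100                  # EUR -> euro cents

print(round(run_cost_cents(188, 3.5), 2))   # 65W config, full speed
print(round(run_cost_cents(78, 25), 2))     # 35W config, throttled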

 

January 2021 - Update after roughly a month of runtime - No issues, freezes etc. so far. The system is rock stable and just does its job.

 

Details regarding IOMMU groupings further below.

 

I will revisit and edit the post while I am progressing with the build.

 

 

 

 

[Build photos]

2 hours ago, doesntaffect said:
  • Low Wattage

2 hours ago, doesntaffect said:
  • CPU: Ryzen 9 3900 (65W PN: 100-000000070) locked to 35W TDP through Bios setting; difficult to source

For best overall power consumption, don't limit TDP. You will indeed limit the max draw, at the expense of keeping the rest of the system at full power longer while it waits on the crippled CPU to finish the task.

16 minutes ago, doesntaffect said:

@jonathanm Which BIOS setting would you use to limit draw?

If you want to keep the peak draw low, for heat dissipation, then keep the processor throttled as you described.

 

If you want to conserve energy overall, then allow the processor to work as hard as it can so it completes the tasks sooner. The CPU is only a portion of the system power draw, and if the processor is throttled back it will take longer to accomplish the work given, thus keeping all the other parts of the system at full power for a longer period.

 

As an example, imagine a 4K transcode of a large file. For the sake of the example, let's assume that forcing the processor to a low TDP makes the transcode take twice as long. That means the RAM, drives, motherboard, etc are all kept at high power feeding the CPU the data it's working on for twice as long, instead of allowing everything to go back to a low power state much sooner.

 

Processor TDP is a parameter for specifying how much heat is allowed to be produced over time, typically a concern for laptops and sealed systems that can only dissipate a small amount of heat compared to a desktop with large fans and plenty of space. It is NOT primarily a measure of total energy efficiency. The amount of work a processor can accomplish with a given amount of power is largely determined by the layout and design of the CPU.

19 minutes ago, doesntaffect said:

My use case is low overall (average) load.

Then the TDP governor wouldn't be in use anyway. It will only kick in when the CPU is running hard for a sustained period, such as...

20 minutes ago, doesntaffect said:

when I transcode 4K footage

So your overall consumption probably wouldn't have a detectable difference no matter whether you set the limit or not. The only thing you would notice is that your transcodes would take noticeably longer if you set a limit.


That is a really nice setup! It looks like the one I want to build. I think I would just go for the X570D4U-2L2T board, as I read that the 'normal' X570D4U has some issues; the newer version would not have these issues?

 

Are you using this with a VM for Lightroom, PS,..? What about audio passthrough and what about the overall speed? Can you edit off of that easily?

What are you using the cache 1 for? I had the ASUS expansion card a year ago for my desktop too, but couldn't get it to work on my old mobo, so I had to send it back.

 

Any ideas would be most welcomed!


So far I have no issues with the board. Even the relatively high temps of the X570 seem to be no issue. The maximum I have seen is 76° during a parity check and while copying data back and forth to the NVMe drives.

 

I am running 2 Linux (server-related) guests and 1 Windows VM atm. Given that the Windows VM is only using a virtual VGA adapter, the performance is OK. I get between 5-7 GB/s read speed (RAID 10) and 2.5-4.2 GB/s write speed (RAID 1 & RAID 10). Still testing the network speed, which should give max. 1 Gbit, given I have a WiFi-only network. The Unraid host is connected to a Fritzbox 1 Gbit LAN port.

 

The main purpose of the large cache is to host Docker containers and a large picture DB. I use the slower disks primarily as a first backup instance. Both cache drives are also being used to store VM disks.

I got the 12-core CPU for encoding purposes and to be able to pin cores more granularly to containers and VMs. Still, I wanted a low-power CPU and not a 105W part.

 

Bottom line, I think the board (and probably the 10G version) is worth the money. 10G version only if the network allows the speed.

 

The BIOS is basically a server-grade BIOS with added desktop (overclocking) features. The 4x ASUS card works flawlessly. Even though the SSDs stay within their thermal tolerance range, I am thinking about adding the fan again, and since it's rather noisy, adding a resistor to lower the fan speed. The SSDs are OK up to 70°, so there is still plenty of headroom atm.

 

If a dedicated GPU would accelerate the desktop VMs I might get one, but only a cheap 2D card like the passive Nvidia GT710. Still need to figure that out.

 

I did a quick disk benchmark with the Linux desktop, which uses a 60G disk on the RAID 10. The 8.7 GB/s is read speed, the 1.2 GB/s is write speed.

 

[Benchmark screenshot]

 


Thanks a lot for your reply!

 

You write about the large picture DB; is that in Lightroom, or are you using different software? May I ask how big the database is, to get an idea? I have two master LR catalogs, both containing around 100k RAW files (.cr2, some .dng and some .psd).

Why did you want a 65W CPU instead of the 105W CPU? Only for power consumption? Is there such a big difference? What about performance between the two?

 

Do you have Adobe-based programs like LR and PS running on the VM? Does it work smoothly on this board?

What about audio? Is it passed through to the client computer (say, when editing movie files in Premiere Pro)?

 

I don't have a 10G network yet, but if there is 10G in the Unraid box and in my PC, I should be able to do 2.5-5G speeds over Cat5e, I read. My gigabit switches between the Unraid box and my desktop are 1G, but high quality, so I'm hoping they will be able to sustain the higher speeds too. If not, I'll have to relocate the Unraid box closer to my PC and test it that way.


The main purpose of this system is media and document storage, with the flexibility to add Docker containers and VMs where I need them.

 

My picture DB is approx. 40K high-res JPEGs & RAW files and >1K videos, and growing :). I am using a PhotoPrism container to structure the albums and am also planning to use the Windows VM to edit the videos & photos. These workflows are still under development.

 

I am also using a Handbrake container for video transcoding, primarily 4K iPhone footage (.mov) which I transcode into H.264 .mp4. This setup works really well: with all the CPU cores I can assign some to e.g. Handbrake and let it render at 100%, assign others to VMs and other services, and basically nothing conflicts with anything else.

 

I don't use any Adobe software at the moment since I don't like their subscription model, and I don't use this system for audio editing. I tested basic sound features.

 

Your 1G switch will limit the speed per port to 1G (unless this rule has changed in the last 5 yrs :D). What you could do is bond the ports; however, your switch needs to support this feature. 1G (max. ~80-90 MByte/second) is enough for my purposes, and if I need more throughput I will bond the 2 NICs to get approx. 150 MByte/second (see the quick calculation below). Simple desktop switches with link aggregation start at approx. 30 EUR.
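A quick back-of-the-envelope conversion showing roughly where the ~80-90 MByte/s and ~150 MByte/s estimates come from (the overhead factors are rough assumptions, not measurements):

# Rough line rate vs. real-world throughput; overhead factors are assumptions
line_rate_mb_s = 1 * 1000 / 8              # 1 Gbit/s ~= 125 MByte/s theoretical
smb_overhead = 0.70                        # assume ~30% lost to protocol / SMB overhead
single_nic = line_rate_mb_s * smb_overhead # ~87 MB/s, in the observed 80-90 range
bonded = 2 * single_nic * 0.85             # bonding only helps parallel streams; assume further losses
print(round(single_nic), round(bonded))    # ~88, ~149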

 

I picked the CPU to achieve a good core count / power consumption ratio since it is running 24/7. And also because I don't like standard hardware :). Since the board is relatively future-proof, I will also be able to add the upcoming 65W Zen 3 CPUs which AMD is preparing atm (Ryzen 9 5900, for example).

The differences in performance between the 3900 and 3900X can be measured, but you will not notice them when working in apps.

 

My goal was also to build a clean and well-built system, as I hate untidy setups and cable mess :).

 

Updated pictures below. The case is really awesome and allows great airflow and a clean setup. If you plan to add a GPU to accelerate your VMs and to use the ASUS card, you would be limited to 2x x8 PCIe lanes. So you could use 2 PCIe 4.0 SSDs in RAID 1 with the ASUS card, which should be fast enough for editing purposes. The x16 slot would then run at x8 speed. See the manual, page 15, for details.

 

 

 

 

[Build photos]


Wow, that is very impressive and tidy work! Well done!

I was actually considering buying the Fractal Meshify 2 XL tower case.

 

I have one managed switch (https://www.tp-link.com/en/business-networking/managed-switch/t2600g-18ts/#specifications), which should be able to handle that, but the other one it needs to go through doesn't have that option, so I'll either buy another one or move the Unraid case to another location in the house..

 


I think the Meshify allows even better cable management, especially for the ATX connector, since it has one more rubber-protected cable grommet on top of the other two upper grommets. With the experience from my Define build, I'd recommend the Meshify. :)

 

And the looks of the Meshify are cooler too. The switch is fine. Not knowing your home / infrastructure, I'd probably throw away the other small switch and get a device which allows link aggregation. The board's full PCIe 4.0 x1 slot allows a 10G upgrade at a later stage.


Thank you for sharing, doesntaffect! With regards to the power consumption stats, do you also have figures to compare against while it's running at 65W?
 

Quote

 

Unraid running & TDP limited to 35W - 47W

Parity check with CPU locked to 35W - 78W

 

 

Power efficiency has become a driving impetus for how I choose to improve upon my current build as well; I'm currently using a 35W TDP Athlon 200GE.


I did a few more tests at 65W, i.e. allowing the motherboard (through the BIOS config) to set the package power limits.

 

Without any adjustments and the CPU running at stock 65W the system consumes:

 

50W during boot

50 - 60W during normal operations e.g. docker starts / restarts

84 - 88W during parity check and array start up (with all services starting up too)

184 - 188W during full load when transcoding a 4K video

 

CPU temps at full load went up to 86° Celsius.

 

I also compared the 35W vs. 45W vs. 65W (unlimited) performance:

 

4K transcoding of a 1.7GB file using a Handbrake container:

 

65W - 28 FPS / 3mins 30sec - 188W max.

45W (Called Eco Mode in Bios) - 25 FPS / 3mins 45sec - 125W max.

35W - 4FPS / 25 mins - 79W max.

 

So, bottom line: the average / idle load does not differ that much; however, the max consumption can be limited quite a lot, at the price of much lower performance.

 

One can also see that if the system has to execute other jobs, e.g. in Nextcloud, the avg. FPS in Handbrake drops to 3.xx. Rendering a movie and using Nextcloud at the same time becomes sluggish.

Without rendering a movie the performance is still good.

 

I edited the original post and added a few cost related comments.

 

On 12/20/2020 at 6:23 AM, doesntaffect said:

Costs:

 

If I did the math right - the 35W config has less peak power consumption, however since calculations take longer the costs (€/$) are higher, compared to the 65W config. In this case 0.3 (188W over 3,5 Minutes) vs. 2.3 (78W over 25 Minutes) Euro Cent. So one might look for the sweet spot in the middle

Your maths may be correct, but I think you need to redo it with a different set of numbers. Subtract the base cost of running from both power equations to see how much it actually costs to transcode that file. So, allowing for a base load of roughly 50W, you are using an extra 135W for 3.5 minutes or 28W for 25 minutes. The system pulls the same base load regardless.
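To put rough numbers on that reasoning, here is a quick sketch using the transcode figures from the post above and an assumed ~50W base load (the base load is an estimate, not a measurement):

# Extra energy used by the transcode itself, after subtracting an assumed ~50W base load
BASE_LOAD_W = 50

def extra_energy_wh(peak_watts, minutes, base=BASE_LOAD_W):
    return (peak_watts - base) * minutes / 60   # extra watts over the run's duration -> Wh

print(round(extra_energy_wh(188, 3.5), 1))   # 65W config: ~8 Wh of extra energy
print(round(extra_energy_wh(79, 25), 1))     # 35W config: ~12 Wh of extra energy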

 

I don't think the throttling operations are intelligent; they likely cause way more work than if you just let the CPU do its job.

 

Power throttling is ONLY for heat control, not energy efficiency.

 

Energy efficiency is gained by reducing the number of resources powered up constantly, as long as the system can properly bring them back online in a timely fashion. There are ways of turning off unused slots and controllers, you can use fewer high capacity drives for massive energy savings, and Unraid can spin down unused drives.

 

You should use the fastest processor available in a specific die class, the highest density RAM chips, and the largest hard drives. That will allow the most work to be done with the least amount of electrical use.

 

However... the amount you spend up front for the parts will likely never be recouped in energy savings over the life of the build, so you have to strike a balance over how much your time is worth vs. initial parts cost vs. long term energy use vs. overall environmental impact from producing new parts.

 

Typically buying higher end means better environmental impact, because it takes roughly the same amount of raw materials and labour to produce each physical unit, so the fewer units you consume over your lifetime the better. It's more money out of your pocket, but that's just the way it is right now. Environmentalism plus consumerism = $$$$.

 

Toys, Environment, Money: pick any 2.

 

Bottom line, TDP is a very poor way to measure efficiency, as it was never meant to be used that way. Its only use is for packaging concerns: heat load over time and heat sink design.

38 minutes ago, doesntaffect said:

I added a CyberPower ValuePRO VP700ELCD Green Power UPS (700VA/390W) which was detected immediately by the integrated APC daemon. Approx. runtime on battery is 40 mins.

With 40 minutes reported, that means roughly 20 minutes before the batteries start to accumulate long-term damage. I'd set it to start shutting down after 10 minutes on battery at most, preferably 5 minutes. That way you leave margin for possible repeat events in one day; for example, if the power goes down, comes back a couple of hours later, then goes down again, you should still have enough battery to safely shut down. The alternative is to allow enough time after the power comes back to fully recharge, which typically is about a 10 to 1 ratio: if you are on batteries for 20 minutes, it's going to be at least 200 minutes of power to get back to full charge. Ideally you never want to drop below 50% battery.
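A quick sketch of that margin math, assuming the reported 40-minute runtime and the rough 10:1 recharge ratio mentioned above:

# Rough UPS margin math: reported runtime, assumed ~10:1 recharge ratio
reported_runtime_min = 40
usable_min = reported_runtime_min / 2       # stay above ~50% charge -> ~20 min usable
on_battery_min = 10                         # shut down after 10 min on battery
recharge_min = on_battery_min * 10          # ~10:1 ratio -> ~100 min to recover that charge
repeat_outage_margin = usable_min - on_battery_min  # ~10 min left for a second outage before recharge completes
print(recharge_min, repeat_outage_margin)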


Fully loaded, 47 mins are being reported. I wonder why the batteries would be damaged when dropping below 50%. My current config is to shut down after 10 mins on battery or when capacity drops below 50%. With an additional switch and an Odroid C2, the UPS has an average load of 70W. The Unraid host is running in Eco Mode, which gives a good compromise re temps/performance/consumption.


Wow, impressive build! I'm considering getting this mobo for my upcoming build as well, pairing it with a 3700X.

 

Just wondering, have you tried GPU passthrough to a VM on this? Are there any challenges with the IOMMU groupings?

On 1/13/2021 at 4:18 PM, doesntaffect said:

I am not using GPU passthrough; however, this should work from what I have seen in the config. Didn't dig deep into the IOMMU settings, but all hardware which is present, even the IPMI virtual CD drive, can be chosen for passthrough to the VMs. If you have specific questions about settings, please ask. :)

It would be helpful if you could share the IOMMU groupings, as I want to pass through a GPU for a gaming VM :D Since this board is pretty new, there isn't a ton of information out there on this topic.


Hope this is what you are looking for:

 

 

IOMMU group 0:				[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 1:				[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 2:				[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 3:				[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 4:				[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 5:				[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 6:				[1022:1483] 00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 7:				[1022:1483] 00:03.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 8:				[1022:1483] 00:03.4 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 9:				[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 10:				[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 11:				[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 12:				[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
IOMMU group 13:				[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 14:				[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
IOMMU group 15:			 	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
			 	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 16:				[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
				[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
				[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
				[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
				[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
				[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
				[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
				[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
IOMMU group 17:			 	[126f:2262] 01:00.0 Non-Volatile memory controller: Silicon Motion, Inc. SM2262/SM2262EN SSD Controller (rev 03)
				[N:0:1:1]    disk    TS512GMTE220S__1                           /dev/nvme0n1   512GB
IOMMU group 18:				[1022:57ad] 20:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse Switch Upstream
IOMMU group 19:				[1022:57a3] 21:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
IOMMU group 20:				[1022:57a3] 21:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
IOMMU group 21:				[1022:57a3] 21:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
IOMMU group 22:				[1022:57a3] 21:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
IOMMU group 23:				[1022:57a4] 21:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
			 	[1022:1485] 2a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
			 	[1022:149c] 2a:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
				Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
				Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
			 	[1022:149c] 2a:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
				Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
				Bus 003 Device 002: ID 0781:5571 SanDisk Corp. Cruzer Fit
				Bus 003 Device 004: ID 046b:ff01 American Megatrends, Inc. Virtual Hub
				Bus 003 Device 005: ID 046b:ff20 American Megatrends, Inc. Virtual Cdrom Device
				Bus 003 Device 006: ID 046b:ffb0 American Megatrends, Inc. Virtual Ethernet
				Bus 003 Device 007: ID 046b:ff10 American Megatrends, Inc. Virtual Keyboard and Mouse
				Bus 003 Device 042: ID 0764:0501 Cyber Power System, Inc. CP1500 AVR UPS
				Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
IOMMU group 24:				[1022:57a4] 21:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
			 	[1022:7901] 2b:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
				[2:0:0:0]    disk    ATA      WDC WD40EFRX-68N 0A82  /dev/sdb   4.00TB
				[5:0:0:0]    disk    ATA      WDC WD40EFRX-68N 0A82  /dev/sdc   4.00TB
IOMMU group 25:				[1022:57a4] 21:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Matisse PCIe GPP Bridge
			 	[1022:7901] 2c:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
				[7:0:0:0]    disk    ATA      WDC WD40EFRX-68N 0A82  /dev/sdd   4.00TB
				[10:0:0:0]   disk    ATA      WDC WD40EFRX-68N 0A82  /dev/sde   4.00TB
IOMMU group 26:			 	[126f:2262] 23:00.0 Non-Volatile memory controller: Silicon Motion, Inc. SM2262/SM2262EN SSD Controller (rev 03)
				[N:1:1:1]    disk    TS512GMTE220S__1                           /dev/nvme1n1   512GB
IOMMU group 27:			 	[8086:1533] 26:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 28:			 	[8086:1533] 27:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
IOMMU group 29:				[1a03:1150] 28:00.0 PCI bridge: ASPEED Technology, Inc. AST1150 PCI-to-PCI Bridge (rev 04)
			 	[1a03:2000] 29:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 41)
IOMMU group 30:			 	[1987:5012] 2d:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)
				[N:2:1:1]    disk    Force MP510__1                             /dev/nvme2n1   960GB
IOMMU group 31:			 	[1987:5012] 2e:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)
				[N:3:1:1]    disk    Force MP510__1                             /dev/nvme3n1   960GB
IOMMU group 32:			 	[1987:5012] 2f:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)
				[N:4:1:1]    disk    Force MP510__1                             /dev/nvme4n1   960GB
IOMMU group 33:			 	[1987:5012] 30:00.0 Non-Volatile memory controller: Phison Electronics Corporation E12 NVMe Controller (rev 01)
				[N:5:1:1]    disk    Force MP510__1                             /dev/nvme5n1   960GB
IOMMU group 34:			 	[1022:148a] 31:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
IOMMU group 35:			 	[1022:1485] 32:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
IOMMU group 36:			 	[1022:1486] 32:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
IOMMU group 37:			 	[1022:149c] 32:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
				Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
				Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
IOMMU group 38:			 	[1022:1487] 32:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

 

