Ryzen 3000 series build / Struggling with PCI-e lanes



@Pducharme I was going through SpaceInvaderOne's videos and found this one; it might help explain how to break up the IOMMU groups to give you some USB ports to pass through.

 

edit: could you plug a few different USB devices into the ports and run the 'for usb_ctrl .....' command again to work out what's plugged in where? IOMMU group 6 USB might be your internal ports; it's all just trial and error on the new frontier.
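For reference, the loop I mean is usually some variant of this (a sketch as passed around from SpaceInvaderOne's videos, not my exact command; it only works on a Linux host with sysfs and lsusb available):

```shell
# Map each USB bus to its PCI controller and that controller's IOMMU group,
# then list the devices currently plugged into that bus.
for usb_ctrl in /sys/bus/usb/devices/usb*; do
  pci_path="$(dirname "$(realpath "$usb_ctrl")")"
  iommu_group="$(basename "$(realpath "$pci_path/iommu_group")")"
  echo "Bus $(cat "$usb_ctrl/busnum") --> $(basename "$pci_path") (IOMMU group $iommu_group)"
  lsusb -s "$(cat "$usb_ctrl/busnum"):"
  echo
done
```

Plug a device in, run it again, and whichever bus the device shows up on tells you which controller (and IOMMU group) that physical port hangs off.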

 

 

Edited by fr05ty
added missing info
Link to post

@fr05ty I tried the ACS override with the USB controller in IOMMU group 9, because the one in IOMMU group 16 has my Unraid USB key.  It failed: my NIC didn't work anymore afterwards.  I had to manually edit the file on my Mac to remove the extra parameter.  I'll try it on IOMMU group 16 and move the Unraid USB key.

 

** EDIT

 

It failed for the USB controller in IOMMU group 16 too.  When I check both group 9 and group 16, the device ID is the same, which is probably why it failed.  I guess that if someone wants a real VM with full passthrough of a GPU and a USB controller, it would be best to use a PCIe USB expansion card, which will get its own IOMMU group.

Edited by Pducharme
Link to post
On 7/11/2019 at 2:27 AM, fr05ty said:

@BLKMGK sorry if this seems a little long but hopefully it will clear a few things up for you

 

The IBM M1015 and PERC H310 are basically 9211-8i cards from what I understand, so for one card the quick math is 8 drives x 600 MB/s = 4800 MB/s max throughput; but as it's a PCIe 2.0 x8 card its max speed is 4096 MB/s, so 8 drives could do up to 512 MB/s each.

 

If you take both outputs from one card into an expander and hang 16 drives off the expander, you'd have a max speed of 256 MB/s per drive. If you're only connecting spinning rust to them, you may only just reach the limits of the card when reading all drives at once, e.g. during a scrub. Of course, some protocol overhead may drop the speed a little.

 

If you had a 9207-8i or similar PCIe 3.0 x8 card, the PCIe 3.0 x8 max throughput is 7.88 GB/s, so the SAS side becomes the limit: 8 x 600 = 4800 MB/s, and 4800 / 16 drives = 300 MB/s each, which is still plenty for HDDs.

 

If you only had one HBA (IBM M1015 or PERC H310) with one cable to the expander and one to 4 drives directly: 4096 / 2 = 2048 MB/s per cable, so 2048 / 20 drives = 102 MB/s on the expander side, and 2048 / 4 drives = 512 MB/s on the direct side.

 

When a scrub is started on my server it tops out around 160 MB/s per drive across all 15 disks, which is 2400 MB/s in total. I only have one cable to the expander, which is 2400 MB/s max speed. As it continues through the 4TB drives the speed drops to match the drives, since the further you get through a disk the more the read speed slows; when those finish scrubbing and only the 8TB drives are left, it picks up in speed again.

 

I have a 9207-8i in my server as I didn't want to fuss around with trying to flash a card; these ones just work out of the box in JBOD/IT mode. I just started a scrub to take some screenshots, which are attached below; for a bit more info, all my drives are connected/detected as SATA3 6Gb/s disks.

 

---

<snip>

 

This is very helpful and I truly appreciate you having done all the maths! If I'm following you, this means that using an expander won't hurt me. I'm a ways away from setting up unRAID on this but am trying to be prepared. I do have an expander card at least, and the cabling for it, so I feel good about that. I'll set up my board in a standard Linux build as soon as my memory arrives; shipping is taking ages from Newegg, ugh. Again, much thanks, and when I can put this together I'll post any lessons learned. I've really wanted to get my drive speeds up and had hoped they were bottlenecked by my controllers; perhaps not! My last parity check finished at 103 MB/s FWIW.
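For anyone wanting to redo the maths for a different drive count, the model in the quoted post boils down to this quick sketch (assumes the same ~600 MB/s SAS2 per-lane figure and the PCIe ceilings quoted above; the function name is mine):

```python
# Per-drive throughput ceiling behind an HBA, per the quoted post's model (a sketch).
def per_drive_ceiling(n_drives, sas_lanes=8, sas_lane_mbps=600, pcie_mbps=4096):
    """The bottleneck is the lower of the PCIe link and the SAS lanes feeding the drives."""
    link_mbps = min(sas_lanes * sas_lane_mbps, pcie_mbps)
    return link_mbps / n_drives

# 8 drives direct on a PCIe 2.0 x8 9211-8i: capped by the PCIe link
print(per_drive_ceiling(8))                     # 512.0
# 16 drives through an expander fed by both ports: same 4096 MB/s PCIe cap
print(per_drive_ceiling(16))                    # 256.0
# Same 16 drives on a PCIe 3.0 card: now the 8 SAS lanes are the cap
print(per_drive_ceiling(16, pcie_mbps=7880))    # 300.0
```

These reproduce the 512, 256 and 300 MB/s figures from the post above.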

One thing I'm seeing that everyone should be aware of is reports of ASUS firmware setting voltages WAAAY too HIGH! Be sure to look into this on a fresh build and update firmware ASAP.

 

 

Link to post
3 minutes ago, BLKMGK said:

One thing I'm seeing that everyone should be aware of is reports of ASUS firmware setting voltages WAAAY too HIGH! Be sure to look into this on a fresh build and update firmware ASAP.


I have the ASUS; what is a good voltage? I haven't changed anything. Also, ASUS has only released one BIOS (0702, on July 9th). Maybe they will release a newer one later?

Link to post
16 minutes ago, Pducharme said:


I have the ASUS; what is a good voltage? I haven't changed anything. Also, ASUS has only released one BIOS (0702, on July 9th). Maybe they will release a newer one later?

This video seemed to explain it best and I've saved it off to use for mine for sure. Some of the voltages seemed awfully high, and I know a few reviewers have had CPUs fry on them, so this perked my ears up pretty quickly!
 

Skip to about the 3min mark!

 

Edited by BLKMGK
Link to post
On 7/13/2019 at 3:44 PM, BLKMGK said:

This video seemed to explain it best and I've saved it off to use for mine for sure. Some of the voltages seemed awfully high, and I know a few reviewers have had CPUs fry on them, so this perked my ears up pretty quickly!

 

Thanks! I changed my values to manual in my BIOS.  I had the same voltage that Jay was having at "stock" settings.  

 

On another topic, I contacted the person behind the asus-wmi-sensors project on GitHub.  It's a project so Linux can read the sensors.  I wonder if that could be integrated into the CA System Temp plugin once I have it working!

Link to post

@johnnie.black How would you proceed (in what order) to change the server hardware?  I will eventually power down my current production server, disassemble it, and then take the parts from my test bench and put them inside my Lian Li full tower.  I'll reconnect everything, but when I power up, will it see my drives as they were before? Also, I want to replace my cache drive, a Samsung 840 EVO, with the new Corsair MP600 I have now.  How should I proceed to keep downtime to a minimum?

Link to post
3 minutes ago, Pducharme said:

I'll reconnect my stuff, but when i'll power up, will it see my drives as they were before?

Yes, as long as you weren't using any RAID controllers or USB enclosures.

 

4 minutes ago, Pducharme said:

Also, I want to replace my cache drive from a Samsung 840 Evo to the new Corsair MP600 I have now.  How to proceed to have as minimal downtime as possible ??

If your current cache is btrfs you can do an online replacement, with no downtime:
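At the raw btrfs level the replacement is roughly the following (a sketch; the device paths are made-up examples, and on Unraid you should follow the FAQ procedure rather than typing this by hand):

```shell
# /dev/sdb1 = old cache SSD, /dev/nvme0n1p1 = new NVMe (hypothetical paths),
# /mnt/cache = the mounted btrfs cache filesystem.
btrfs replace start /dev/sdb1 /dev/nvme0n1p1 /mnt/cache
btrfs replace status /mnt/cache         # poll until it reports finished
btrfs filesystem resize max /mnt/cache  # grow to use the larger new device
```

The filesystem stays mounted and usable while the replace runs, which is where the "no downtime" comes from.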

 

Link to post

@johnnie.black So if my current cache is btrfs, I can do it after the server is booted?  I don't plan to keep using the slower 840 EVO afterwards; does that make any difference? I don't have a cache pool now, just the single 840 EVO.  If I understand what you said, I reassemble everything on the new motherboard/CPU/RAM combo, put back my 2 current HBAs and my GPU, then reconnect all drives, including the current cache disk (840 EVO), then online-migrate the content to the Corsair MP600 NVMe? Afterwards, I power down and remove the 840 EVO, or maybe use it as an Unassigned dedicated download disk?

Link to post
1 minute ago, Pducharme said:

If I understand what you said, I reassemble everything on the new motherboard/CPU/RAM combo, put back my 2 current HBAs and my GPU, then reconnect all drives, including the current cache disk (840 EVO), then online-migrate the content to the Corsair MP600 NVMe? Afterwards, I power down and remove the 840 EVO, or maybe use it as an Unassigned dedicated download disk?

Correct, just note the bold parts of the instructions that apply to single cache:

https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480419

 

Link to post

OK, it's near 4 am here. I got my parts together, got Windows installed on an SSD, fiddled around for a bit, and got the Gigabyte Master's BIOS updated to F5g (slower chipset fan). It came with F3, which isn't even listed on the website, and boy was it unstable: lots of boot loops.

 

For anyone interested in what the IOMMU groups look like, before I kill this thing playing around with it:

Bus 1 --> 0000:07:00.1 (IOMMU group 17)
Bus 001 Device 003: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 001 Device 002: ID 048d:8297 Integrated Technology Express, Inc. 
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 2 --> 0000:07:00.1 (IOMMU group 17)
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

Bus 3 --> 0000:07:00.3 (IOMMU group 17)
Bus 003 Device 005: ID 046d:c52b Logitech, Inc. Unifying Receiver
Bus 003 Device 004: ID 046d:c537 Logitech, Inc. 
Bus 003 Device 003: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 003 Device 002: ID 8087:0029 Intel Corp. 
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 4 --> 0000:07:00.3 (IOMMU group 17)
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

Bus 5 --> 0000:0c:00.3 (IOMMU group 9)
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Bus 6 --> 0000:0c:00.3 (IOMMU group 9)
Bus 006 Device 002: ID 0781:5591 SanDisk Corp. Ultra Flair
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
IOMMU group 17
[RESET] 03:08.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57a4]
[RESET] 07:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
        07:00.1 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
[RESET] 07:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU group 7
        00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 15
[RESET] 03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57a3]
IOMMU group 5
[RESET] 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 23
[RESET] 0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
        0a:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
IOMMU group 13
[RESET] 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57ad]
IOMMU group 3
        00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 21
[RESET] 05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU group 11
        00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
        00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
        00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
        00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
        00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
        00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
        00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
        00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU group 1
[RESET] 00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 18
[RESET] 03:09.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57a4]
[RESET] 08:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU group 8
        00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
[RESET] 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
[RESET] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU group 16
[RESET] 03:05.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57a3]
IOMMU group 6
        00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 14
[RESET] 03:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57a3]
IOMMU group 4
        00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 22
[RESET] 06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. Device [10ec:8125] (rev 01)
IOMMU group 12
[RESET] 01:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981 [144d:a808]
IOMMU group 2
[RESET] 00:01.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 20
[RESET] 04:00.0 Network controller [0280]: Intel Corporation Device [8086:2723] (rev 1a)
IOMMU group 10
        00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
        00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU group 0
        00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 19
[RESET] 03:0a.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:57a4]
[RESET] 09:00.0 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU group 9
        00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
[RESET] 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
[RESET] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485]
[RESET] 0c:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
[RESET] 0c:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
[RESET] 0c:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU group 0:	[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 1:	[1022:1483] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 2:	[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 3:	[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 4:	[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 5:	[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
IOMMU group 6:	[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 7:	[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
IOMMU group 8:	[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
	[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
	[1022:148a] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
IOMMU group 9:	[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
	[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
	[1022:1485] 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
	[1022:1486] 0c:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
	[1022:149c] 0c:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
	[1022:1487] 0c:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
IOMMU group 10:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
	[1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 11:	[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
	[1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
	[1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
	[1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
	[1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
	[1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
	[1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
	[1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
IOMMU group 12:	[144d:a808] 01:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981
IOMMU group 13:	[1022:57ad] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad
IOMMU group 14:	[1022:57a3] 03:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
IOMMU group 15:	[1022:57a3] 03:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
IOMMU group 16:	[1022:57a3] 03:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
IOMMU group 17:	[1022:57a4] 03:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
	[1022:1485] 07:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
	[1022:149c] 07:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
	[1022:149c] 07:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
IOMMU group 18:	[1022:57a4] 03:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
	[1022:7901] 08:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 19:	[1022:57a4] 03:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
	[1022:7901] 09:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 20:	[8086:2723] 04:00.0 Network controller: Intel Corporation Device 2723 (rev 1a)
IOMMU group 21:	[8086:1539] 05:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 22:	[10ec:8125] 06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. Device 8125 (rev 01)
IOMMU group 23:	[10de:1b80] 0a:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
	[10de:10f0] 0a:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)

 

Link to post

Finally, ASUS removed all the WMI interfaces for the sensors on the X570 boards.  That means it can work with the System Temp plugin if @bonienl updates his plugins.  Lots of new AMD boards are using the Nuvoton nct6775 driver.  The « DETECT » button is currently broken because it uses an old version of the « sensors-detect » script.  I tried manually putting the latest version in, but the GUI doesn't seem to care and doesn't find the sensors, or even which driver it should load.  Running the latest « sensors-detect » returns the driver name (Nuvoton), and the driver seems to be available in the modprobe folder (I don't know if the version is high enough, since I'm not able to make it work).
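For the record, the manual equivalent of what the DETECT button should do boils down to roughly this (a sketch; it assumes lm-sensors is installed, and will load but report nothing on a board whose Nuvoton chip is too new for the driver):

```shell
# Try loading the Nuvoton Super I/O driver, then see whether a chip registered.
modprobe nct6775
sensors | grep -i -A4 "nct67"
```

If the grep comes back empty even though modprobe succeeded, the driver is present but doesn't recognize the board's chip yet, which matches what I'm seeing.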

Link to post

Just installed Plex and the Nvidia Unraid plugin. I couldn't use the command "watch nvidia-smi"; I kept getting the error "nvidia-smi: command not found", so I had to add

 mem_encrypt=off 

to the flash syslinux configuration, e.g.   append isolcpus=1-3,5,7,9-11,13,15 mem_encrypt=off initrd=/bzroot

Now that's all passing through nicely.

Link to post

I'm also putting together an unRAID Ryzen 3000 build (Gigabyte B450 Aorus Pro, Ryzen 5 3600X, Colorful GTX 1060 6GB) and having trouble with passthrough.

I've got the GPU passing through fine, but front-panel audio isn't working. I'm assuming I have to pass through the onboard audio device:

 

[1022:1487] 0b:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

but even with SpaceInvaderOne's instructions for syslinux.cfg mods (vfio-pci.ids or the ACS PCIe override=downstream), I still can't pass through the audio controller.

 

I don't recall having to pass through a separate onboard audio controller for front-panel audio with my old build, though (ASRock H97 Pro4, i5-4460).

 

Can anyone confirm that on Ryzen builds you do have to pass through the audio controller in addition to the GPU for the front-panel audio jacks to work? If so, any ideas what else I can try to pass it through?
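For reference, the kind of syslinux.cfg line SpaceInvaderOne's method produces looks roughly like this (a sketch, not my exact config; the device ID is the one from the lspci output above, and the rest of the append line is an example):

```
append vfio-pci.ids=1022:1487 pcie_acs_override=downstream initrd=/bzroot
```

Worth noting that vfio-pci.ids matches by vendor:device ID, so it grabs every device with that ID; that can bite when two controllers share an ID, as happened earlier in this thread.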

Link to post

I think I have found a small workaround for me and USB with a VM: I can assign a mouse and keyboard to the VM so they are always there when it starts up, then using a plugin called "Libvirt Hotplug USB" I can plug anything into the box and add it to the VM as needed from the VM page. Not the best solution, but still better than nothing.

edit: I also have to make sure the USB port(s) I use can be reset if I want to reuse/remount them
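One way to check whether a given controller supports a kernel-driven reset (a sketch; the PCI address is an example taken from the listings earlier in the thread, substitute your own) is to look for the sysfs reset node:

```shell
# If this file exists, the kernel can reset the device (FLR or similar),
# which passthrough needs for clean re-binding between host and VM.
ls /sys/bus/pci/devices/0000:0c:00.3/reset && echo "resettable"
```

Controllers without that node tend to be the ones that need a full host reboot before they can be handed back to a VM.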

Edited by fr05ty
more info added
Link to post

OK, today I switched my hardware over to the new Ryzen 3000 board/CPU/RAM.  I have a major issue with networking.  None of the Docker containers start unless I set them to "Host" or "Custom: br0".  When I leave them on "bridge", this is what I get:

 

Quote

Execution Error: Server Error, with the big red X.

 

If I check the docker logs, I see that Docker fails to bind to 'bridge', saying something about binding.   I was testing with a different USB key earlier and networking was working fine.  Now it seems I can't make it work when using the USB key I was running with my Pro license...

 

Any idea @johnnie.black?  How can I reset the network?  I want 1 NIC, or 2 NICs if possible, and to still be able to use the bridge.
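For reference, the reset procedure I've seen suggested elsewhere on the forums (hedged; I haven't confirmed it fixes this particular X570 issue) is to delete the network config from the flash drive and reboot:

```shell
# With the Unraid flash drive mounted at /boot (the default):
rm -f /boot/config/network.cfg /boot/config/network-rules.cfg
# then reboot; Unraid regenerates default network settings on boot
```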

Link to post

Got mine fired up today and flashed the BIOS to 0702, after first seeing it boot just fine into the Linux disk in my system. After playing in the new BIOS for a bit I attempted to get back into Linux; no go. The system doesn't appear to see any of my PERC cards, and thus no boot device :( What firmware is everyone running? It had a 200 firmware when I first booted, and this 0702 is the only one I see on the ASUS site right now. Can't do much with it if it won't see my SAS cards!

Link to post

Loaded defaults (again), swapped cards, no change. Removed the second card and it now sees the drives, ugh! Mind you, it was booting fine previously with two cards! Way more finicky than it should be, for sure. Oh, these are M1015 cards, but pretty much the same thing either way. I cannot recall if these cards displayed their BIOS in the past, but they sure don't now. I'll get a Windows OS loaded in there for some tweaking and testing ASAP; it will run Linux most of the time when done, though, I think. Not sure this is the board that will get into my unRAID chassis, but this CPU will when the 3950X comes out and this machine gets another upgrade. IMO the X470 boards might be more economical for unRAID; anyone seen any with more slots and real onboard video?

 

Edit: and now I'm finding that the IBM M1015 cards aren't supported by Win10, the SAS 9220-8i likewise (same card). This upgrade is turning out to be pretty frustrating lol

Edited by BLKMGK
Link to post

If someone with an ASUS ACE workstation board could take a look at their board during boot-up, I'd appreciate it. There's a set of lights near the memory that count down as it goes through the boot process. Mine is stopping at the white LED after hanging while I tried to adjust memory speeds. Does your board progress past the white LED? The next one looks to be labeled "boot". <sigh> Mine was running well till I tried to get the memory near spec; I no longer get keyboard lights or anything. I don't have a chassis speaker, so no idea if it's beeping, and no docs for beep codes exist that I've found for this board anyway, certainly not in the user manual. Pulled the CMOS battery, jumped the PITA pins, and am going to let it sit sans battery while I travel a few days and hope like heck it comes back to life. Am NOT happy!

 

Problem "solved" thanks to Microcenter doing an exchange! The new board boots and runs fine so far. Now to find a heatsink that will fit in a 4U case; the 120mm Noctua AM4 cooler sits about 5cm too high, so I'm stuck with the stock cooler for now. This thing flies, cannot wait for the 3950X!

Edited by BLKMGK
Link to post
  • 2 weeks later...

FYI, ASUS released a new UEFI BIOS for the ASUS WS X570-ACE motherboard.  I will install it, but I first need to find a long HDMI cable for connecting to the TV in the other room...

Version 0803
2019/08/06, 15.39 MB
PRO WS X570-ACE BIOS 0803
1. Updated AM4 combo PI 1.0.0.3 patch ABB
2. Optimized the chipset fan profile. The new profile allows the chipset fan to stop during idle or when temperature is low.
3. Supports Ubuntu 19.04 and other Linux distros
4. Improved system performance, stability and storage device compatibility

 

Link to post
On 7/18/2019 at 8:17 AM, Pducharme said:

Finally, ASUS removed all the WMI interfaces for the sensors on the X570 boards.  That means it can work with the System Temp plugin if @bonienl updates his plugins.  Lots of new AMD boards are using the Nuvoton nct6775 driver.  The « DETECT » button is currently broken because it uses an old version of the « sensors-detect » script.  I tried manually putting the latest version in, but the GUI doesn't seem to care and doesn't find the sensors, or even which driver it should load.  Running the latest « sensors-detect » returns the driver name (Nuvoton), and the driver seems to be available in the modprobe folder (I don't know if the version is high enough, since I'm not able to make it work).

It's not a problem with the sensors-detect script; the board is too new to be supported by the nct6775 driver we currently use in Unraid 6.7.0.

I compiled the newest kernel 5.2.7 manually and the sensors all work properly with the System Temp plugin. You should try it.

Link to post
