StarsLight

Best motherboard for AMD 3900x


Does anyone have experience with a motherboard that meets the requirements below?

 

1. Supports the 3900X

2. Supports 128GB of RAM

3. Has at least 5-6 PCIe slots, if possible, for 24+ hard disks

4. Works with the latest unRAID

 

From my research, most X570 motherboards support 128GB, but I'm wary of X570 because of its heat/power consumption. Hence, I think X470 should be a much better fit.


Getting 5-6 PCIe card slots with a Ryzen-series configuration, while not impossible, is going to be really, really rare, if such a board is even manufactured at all.  The Ryzen 3000 architecture provides 24 lanes (4 of which are the uplink to the chipset), and the X570 chipset adds 16 PCIe lanes, for about 40 lanes total; and that's the best you can do!  This is for everything ... PCIe cards, NVMe, SATA buses, USB, LAN, etc.  While it's 'possible' to have all of this dedicated to physical PCIe slots, you'd be sacrificing a lot of other computing functionality to get it.  I don't think you're going to find a mainstream configuration that will do that.  Possibly years from now, when people take old hardware, build new boards with strange, unsupported configurations, and sell them on AliExpress.

 

https://www.techpowerup.com/255729/amd-x570-unofficial-platform-diagram-revealed-chipset-puts-out-pcie-gen-4

 

Compare that to Threadripper, which accommodates 64 lanes (with the X399 chipset).  You will usually only see lots of PCIe lanes on server-grade, or near-server-grade, hardware combinations.

 

You said you wanted to accommodate lots of hard disks with these lanes.  You don't need lots of lanes for lots of hard disks ... especially spinners that max out at ~150MB/s on a good day.  Look into Host Bus Adapters (HBAs) and SAS expanders.
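A quick back-of-envelope check makes the point (the ~150MB/s spinner figure and the per-lane PCIe throughput are approximations, not measured values):

```python
# Rough check: how much PCIe bandwidth do 24 spinning disks actually need?
# Assumptions (approximate): a spinner peaks around 150 MB/s sequential;
# effective PCIe 3.0 per-lane throughput is ~985 MB/s.

DISKS = 24
DISK_MBPS = 150          # optimistic sequential speed of one spinner
GEN3_LANE_MBPS = 985     # approximate effective PCIe 3.0 per-lane rate

total_needed = DISKS * DISK_MBPS    # all 24 disks flat out at once
gen3_x8 = 8 * GEN3_LANE_MBPS        # bandwidth of a single x8 HBA slot

print(f"24 spinners need ~{total_needed} MB/s")
print(f"One PCIe 3.0 x8 slot offers ~{gen3_x8} MB/s")
print("Fits in one slot:", total_needed <= gen3_x8)
```

Even with every disk running full tilt, the whole array fits comfortably behind a single x8 slot.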

 

For example, something like the combination of the LSI 9211-8i and the Intel RES2SV240 SAS2 expander.  Wouldn't cost a lot to get a lot.

 

 

-JesterEE


^ That's all about right, but it's basically a trade-off between cost and performance. For the most part, using up more slots costs less and gives the best performance. For example:

3 x IBM M1015 (LSI 2008 chipset): max available speed per disk is ~320MB/s. Cost is about $90 ($30 each), but it uses 3 slots.

2 x IBM M1015 (LSI 2008 chipset) + 2 x Intel RES2SV240 SAS expanders: max available speed per disk is ~205MB/s. Cost is about $300 ($60 for the two M1015s and $240 for the two RES2SV240s). Uses 2 slots; the RES2SV240 can be powered without occupying a slot.

2 x LSI 9207-8i + 2 x Intel RES2SV240 SAS expanders: max available speed per disk is ~275MB/s. Cost is about $340 ($100 for the two LSI 9207-8is and $240 for the two RES2SV240s). Uses 2 slots; the RES2SV240 can be powered without occupying a slot.

1 x LSI 9300-16i + 2 x RES2SV240 (not sure if this is possible): max available speed per disk is ~275MB/s. Cost is about $540 ($300 for the LSI 9300-16i and $240 for the two RES2SV240s). Uses 1 slot.
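The per-disk figures above can be sanity-checked with a simple bottleneck model: take whichever is smaller, the card's host-side PCIe bandwidth or its controller throughput, and divide by the number of disks hanging off that card. (The controller ceiling used here is a rough community figure for the LSI 2008, not a datasheet value, so this is a sketch of the reasoning rather than an exact reproduction of the numbers quoted.)

```python
def per_disk_mbps(pcie_mbps, controller_mbps, disks_on_card):
    """Per-disk ceiling: the card's tightest bottleneck split across its disks."""
    return min(pcie_mbps, controller_mbps) / disks_on_card

# 3 x M1015 direct-attach: PCIe 2.0 x8 is ~4000 MB/s, but the SAS2008
# controller tops out around ~2560 MB/s in practice (assumed figure).
# With 8 disks per card that's ~320 MB/s each, matching the list above.
print(per_disk_mbps(4000, 2560, 8))

# 2 x M1015 + expanders: same card, but now 12 disks per card,
# landing near the ~205 MB/s quoted above.
print(per_disk_mbps(4000, 2560, 12))
```

Swapping in a faster controller (e.g. the PCIe 3.0 LSI 9207-8i) raises the ceiling, which is exactly why its per-disk number comes out higher.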

 

Or, if you have a case with a SAS expander built in, that can save a lot of headache and trouble, and it makes for the quickest and nicest wiring job, too. For example, take a 24-bay case with a built-in expander: in single link, with the SAS2 expander connected to a single IBM M1015 (LSI 2008 chipset), you have 125MB/s max per drive (when all drives are being used simultaneously).
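That 125MB/s figure falls straight out of the single uplink: one SFF-8087 cable carries 4 SAS2 lanes at 6 Gb/s each, and dividing that raw rate across 24 drives gives the quoted number (a raw-rate sketch; 8b/10b encoding overhead would shave it a bit further in practice):

```python
# Single-link SAS2 uplink: one SFF-8087 cable = 4 lanes x 6 Gb/s raw.
LANES = 4
GBPS_PER_LANE = 6
DRIVES = 24

uplink_mbps = LANES * GBPS_PER_LANE * 1000 / 8   # raw Gb/s -> MB/s
per_drive = uplink_mbps / DRIVES                  # split across all 24 bays

print(f"Uplink: ~{uplink_mbps:.0f} MB/s, per drive: {per_drive:.0f} MB/s")
```

Dual-linking the expander (two SFF-8087 cables to the HBA) would double the uplink and the per-drive ceiling.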

 

You could also use 1 x IBM M1015 (LSI 2008 chipset) with the HP 6Gb (3Gb SATA) SAS expander. Max available speed per disk is ~95MB/s, and the cost is only about $60 total.

 

As far as lanes go, this is the best post for explaining speeds on the PCIe bus. The IBM M1015 (LSI 2008 chipset) uses PCIe gen2 x8, which is equal in bandwidth to PCIe gen4 x2. So on X570 you'd only need 6 lanes to max out the cards'/drives' speed with 24 disks. And realistically, you only need x1 per card on PCIe gen4, which still leaves ~185MB/s available per disk (if using 24 disks), and that takes only 3 lanes. So unless I'm missing something about how lanes are allocated, lanes aren't really going to be an issue at all. It's more likely going to be a PCIe slot issue, which is related to lanes in some ways, but not really.
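The gen2-x8-equals-gen4-x2 claim checks out if you compare per-lane rates; each PCIe generation roughly doubles per-lane throughput (the per-lane figures below are approximate effective rates, and the exact values depend on encoding overhead):

```python
# Approximate effective per-lane PCIe throughput in MB/s by generation.
# Gen 2 uses 8b/10b encoding; gen 3/4 use the more efficient 128b/130b.
LANE_MBPS = {2: 500, 3: 985, 4: 1969}

gen2_x8 = 8 * LANE_MBPS[2]   # what an M1015 can pull from the host
gen4_x2 = 2 * LANE_MBPS[4]   # two gen4 lanes carry essentially the same

print(f"PCIe 2.0 x8: {gen2_x8} MB/s")
print(f"PCIe 4.0 x2: {gen4_x2} MB/s")
```

So three HBAs that each saturate gen2 x8 need only about six gen4 lanes between them, which is the point being made above.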

 

The issue you kind of run into is that consumer boards tend to give you x16 slots and x1 slots. Sadly, the x1 slots aren't really useful in a physical sense, especially when it comes to HBA cards, since those are for the most part x8 cards. In the server-board world, x8 is a much more common slot.  You can easily use an x8 card in an x16 slot, but you'll run out of those fast, and you might need them. I think you could use x1-to-x8 adapters without a performance hit, but I'm not 100% sure; it really just depends on what the motherboard manufacturer decides. It would be easier if they just used x16 for every slot, but I doubt that will happen.

 

It makes way more sense to get the right board for what you need, but this was interesting: https://linustechtips.com/main/topic/1040947-pcie-bifurcation-4x4x4x4-from-an-x16-slot/

