PCIe Bifurcation for an All-NVMe Build



So, my eventual goal is to build a mini-ITX NAS with 6x 8TB NVMe drives for storage (once the Rocket 8TB TLC drives release). I am well on my way to collecting the parts for this build, but I am starting to worry about PCIe bifurcation wonkiness that might limit the number of drives I can get to run simultaneously. As I understand it, NVMe drives require 4 PCIe lanes each.

 

My plan was to build this on the ROG Strix Z690-I mobo with the 12900K processor (major overkill, I know). The main idea was to use a processor with onboard graphics to leave the x16 slot open for a 4x M.2 splitter card (ASUS Hyper M.2 Gen 4). I was expecting that, with ASUS pushing bifurcation support so much, their two devices would almost surely be fully compatible (unable to tell from the existing documentation). But with a little back-of-the-napkin math, the CPU supports 20 lanes and I am asking for 24 lanes worth of drives. Are the two M.2 slots built into the mobo sharing 4 lanes or something? It'll be another week or so before I can tell whether this mobo supports 4x4x4x4 bifurcation, and even if it does, I won't immediately have the drives to verify functionality. As I understand it, some of those bifurcated branches can be disabled at a hardware level if the lanes are being shared elsewhere.
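Writing the napkin math down as a quick sketch (the 20 CPU lanes are Alder Lake's published 16 + 4 split; how the Z690-I actually routes its two onboard M.2 slots is still an assumption on my part):

```python
# Back-of-the-napkin PCIe lane budget. The 16 + 4 split is Alder Lake's
# published CPU lane layout; the board-specific M.2 routing noted below
# is an assumption until the manual confirms it.

CPU_LANES = 16 + 4          # 16 Gen5 lanes (x16 slot) + 4 Gen4 lanes (one M.2)
LANES_PER_NVME = 4

drives_wanted = 6
lanes_wanted = drives_wanted * LANES_PER_NVME      # 24

print(f"CPU lanes available: {CPU_LANES}")
print(f"Lanes needed for {drives_wanted} x4 drives: {lanes_wanted}")
print(f"Shortfall if everything hung off the CPU: {lanes_wanted - CPU_LANES}")

# Best case with a 4x4x4x4 Hyper M.2 card in the x16 slot:
#   4 drives on the x16 slot + 1 drive on the CPU-fed M.2 slot,
#   so the remaining drive(s) would have to come off the chipset (DMI),
#   assuming the second onboard M.2 slot is chipset-attached.
```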

 

It also seems like AMD might be much more on top of bifurcation support than Intel. If that is the case, and an AMD build is going to be a lot less of a headache, I do have a recent X570 build I could easily pivot into this machine (it would need a new CPU with integrated graphics). I'd repurpose the 12900K for my main machine and turn the AMD machine into the NAS.

 

Anyone know if the Intel build will work? If not, will the AMD?


Best for this would be a Xeon platform: usually most slots can be bifurcated, and the CPU has more lanes. For example, one of my servers has a Xeon Silver 4114 with 48 lanes, and the board has 6 CPU-connected slots that can all be bifurcated to x4, so I could use 14 NVMe devices with that combo (2 would go through the PCH).


So you're telling me to build a real server? I guess I could. But, I fear the cost would have me so pot committed that it'd be hard to ever update it.

 

Cramming 6 NVMe drives into a tight ITX space (you know... flexing their advantage over magnetic storage) is a lot more challenging than I thought it was going to be. It looks like all the AMD APUs are limited to PCIe 3.0 for power consumption reasons. Poking around my BIOS, it looks like my 5950X could do this. But, then I have no graphics output whatsoever...

 

I hadn't considered that. Is it possible to build a computer with no graphics output at all? (Maybe swap the graphics card out for the NVMe card after getting Unraid set up, and manage the whole thing remotely.)

On 1/11/2022 at 7:59 AM, severalboxes said:

No bifurcation needed for this card. I've got the 4-slot NVMe version and it works fine. Just pricey...

 

https://www.highpoint-tech.com/ssd/ssd7140a-overview

I had seen these types of cards before. I was trying to get by with direct bifurcation mainly because I had assumed it would be harder to manage the drives if they were obfuscated behind a layer of drivers and software. Would information about the 8 drives be passed along to the machine? Could I keep an eye on individual drive temperatures and pool them however I see fit in Unraid? Before I lay down like a grand for one, I'm curious about the pros and cons of a card that has its own bifurcating logic on it. Obviously, one advantage is that this could take me all the way up to 10 drives on the machine... And of course now I'm getting greedy and wanting a PCIe 5.0 one to double the bandwidth.

3 hours ago, Jlarimore said:

I had seen these types of cards before. I was trying to get by with direct bifurcation mainly because I had assumed it would be harder to manage the drives if they were obfuscated behind a layer of drivers and software. Would information about the 8 drives be passed along to the machine? Could I keep an eye on individual drive temperatures and pool them however I see fit in Unraid? Before I lay down like a grand for one, I'm curious about the pros and cons of a card that has its own bifurcating logic on it. Obviously, one advantage is that this could take me all the way up to 10 drives on the machine... And of course now I'm getting greedy and wanting a PCIe 5.0 one to double the bandwidth.

There is just a PLX chip on the NVMe card to divide the lanes up; it's the reason that card is so expensive. Your machine would see the drives just like normal.
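If you want to check temps yourself, something like this rough sketch should work on any Linux box, Unraid included (it assumes a kernel new enough, roughly 5.5 or later, to expose the NVMe hwmon sensors; nothing here is specific to the HighPoint card):

```python
import glob
import os

# Recent Linux kernels expose each NVMe controller's composite temperature
# through the hwmon interface, whether or not the drive sits behind a PCIe
# switch. Values are reported in millidegrees Celsius.
for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    with open(os.path.join(hwmon, "name")) as f:
        if f.read().strip() != "nvme":
            continue
    # The device symlink points back at the owning NVMe controller.
    device = os.path.realpath(os.path.join(hwmon, "device"))
    with open(os.path.join(hwmon, "temp1_input")) as f:
        temp_c = int(f.read()) / 1000
    print(f"{device}: {temp_c:.1f} C")
```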

 

I've got an ASUS X299 workstation board which has PLX chips onboard to provide more PCIe slots, which is ironically why bifurcation from the CPU doesn't work on that motherboard. So my NVMe drives have to go through 2 PLX chips (one on the motherboard and one on the NVMe card) before getting to the CPU, and they work just fine.
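You can actually see the extra hops in sysfs; a quick sketch (standard Linux sysfs paths, the device names will obviously differ on your box):

```python
import glob
import os

# Each /sys/class/nvme/nvmeN entry is a symlink into the full PCI device
# tree, so the resolved path shows every bridge/switch port the drive sits
# behind (a PLX switch just adds extra hops in the chain).
for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
    path = os.path.realpath(ctrl)
    # Keep only the PCI bus:device.function components of the path.
    hops = [p for p in path.split("/") if ":" in p and "." in p]
    print(os.path.basename(ctrl), "->", " -> ".join(hops))
```

`lspci -t` paints the same picture as a tree, if you'd rather not script it.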

 

There are some lower-cost brands like Syba; the SI-PEX40152 is a quad M.2 board.

18 hours ago, severalboxes said:

There is just a PLX chip on the NVMe card to divide the lanes up; it's the reason that card is so expensive. Your machine would see the drives just like normal.

 

I've got an ASUS X299 workstation board which has PLX chips onboard to provide more PCIe slots, which is ironically why bifurcation from the CPU doesn't work on that motherboard. So my NVMe drives have to go through 2 PLX chips (one on the motherboard and one on the NVMe card) before getting to the CPU, and they work just fine.

 

There are some lower-cost brands like Syba; the SI-PEX40152 is a quad M.2 board.

Sweet. I think that's probably the route I will eventually go. I'll slowly collect these expensive 8TB drives, and once I'm about to exceed 4 of them, I'll switch to a PLX board and slowly go all the way to 10 drives with 1 or 2 being parity. Hopefully by that point a PCIe 5.0 variant of the PLX card will exist. It looks like that 8-drive Gen 4 card would just narrowly fit in my Ghost S1 ITX case. That's a boatload of lightning-fast, well-protected storage in a tiny, tiny space.


Just so you don't get disappointed: don't expect NVMe speeds for array-assigned devices. Unraid isn't built for speed. I did a test some time ago and, even without parity, a read check was much slower than expected. It's not a bandwidth issue, since it was done on an Intel Xeon with plenty of lanes and all devices were using CPU lanes:

 

[screenshot: read check speed results]
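If you want to reproduce the comparison on your own hardware, here's a rough sketch (Linux-only, needs appropriate permissions; the paths in the usage comment are just placeholders, and O_DIRECT is used so the page cache doesn't flatter the numbers):

```python
import mmap
import os
import sys
import time

def read_throughput(path, total=1 << 30, block=1 << 20):
    """Sequentially read `total` bytes from `path` and return MB/s.

    O_DIRECT bypasses the page cache so repeat runs don't report cached
    (i.e. meaningless) numbers; the mmap buffer keeps the I/O page-aligned.
    """
    buf = mmap.mmap(-1, block)
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    done = 0
    start = time.monotonic()
    try:
        while done < total:
            n = os.readv(fd, [buf])
            if n == 0:          # hit end of file/device
                break
            done += n
    finally:
        os.close(fd)
    return done / (time.monotonic() - start) / 1e6

if __name__ == "__main__":
    # e.g. compare the raw device against a big file on the array:
    #   python3 readbench.py /dev/nvme0n1 /mnt/disk1/some_big_file
    for path in sys.argv[1:]:
        print(f"{path}: {read_throughput(path):.0f} MB/s")
```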


Thank you for tempering expectations. I do hate letdowns. I'm assuming those are Gen 3 NVMe drives, so roughly 3.4 GB/s reads would be expected. Yikes. That's only reading at about 35% of the expected speed.

 

Are you telling me it might actually be beneficial to use one drive as a cache drive even if my cache and array devices have identical read/write speeds? Seems silly. But, okay.


I guess if all of the input/output channels out of the computer are slower than 1 GB/s, it doesn't much matter for most use cases. In my case it looks like my wireless transfer rate is the fastest (Wi-Fi 6), which I think would be a little over that rate. I guess I will debate whether it's worth using one drive as a cache disk.
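Quick sanity check on those link speeds (published maximum line rates divided by 8; real-world throughput, Wi-Fi especially, lands well below these):

```python
# Published link rates in Gbit/s; divide by 8 for GB/s. These are line
# rates, not what you'll actually see over the wire (Wi-Fi in particular
# delivers only a fraction of its PHY rate in practice).
links_gbps = {
    "Gigabit Ethernet": 1.0,
    "2.5GbE": 2.5,
    "10GbE": 10.0,
    "Wi-Fi 6 (max PHY rate)": 9.6,
}

for name, gbps in links_gbps.items():
    print(f"{name:>24}: {gbps / 8:.2f} GB/s")
```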

