
So I'm trying to get the maximum performance out of my server, but I'm a bit stuck on how PCIe lanes work.

I know how they work in general, but I don't really know what's best practice.

 

I have:

- 3 SATA SSDs (cache pool)

- 2 NVMe SSDs, one for VMs and one for Docker

- 2 spinning drives, but I want to add 2-3 more

- 1x PCI Express to 4x SATA card

 

So I was thinking of putting the cache pool onto the motherboard SATA ports, and getting a PCIe x4 card for the spinning drives. I found a card with 5 ports

(DeLOCK 5 port SATA PCI Express x4 Card Low Profile interface card).

I also want to add a 10Gb network card to take advantage of my fiber.

 

So my questions are:

- Is this the correct way to do it, or am I completely wrong here? xDD

- Does it matter for the 3 SATA SSDs which ports I use?

- Is the PCIe card a good idea? Does it make a big difference if you use x4 lanes for spinning drives?

- Do I use PCIe slot 1 for the drives and slot 2 for the network card, or the other way around?

 

https://download.msi.com/archive/mnu_exe/mb/E7C75v1.1.pdf

 

Thanks already, and sorry for my English xD

 



M.2_1 is an NVMe slot that goes directly to your processor and is in its own IOMMU group.
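If you want to verify the grouping on your own box, the Linux kernel exposes it under sysfs; here is a minimal sketch that walks the standard /sys/kernel/iommu_groups layout (a kernel convention, nothing Unraid-specific):

```python
import os

def iommu_groups(base: str = "/sys/kernel/iommu_groups") -> dict:
    """Map each IOMMU group number to the PCI addresses inside it."""
    groups = {}
    if not os.path.isdir(base):
        return groups  # IOMMU disabled, or not a Linux host
    for group in sorted(os.listdir(base), key=int):
        devices = os.path.join(base, group, "devices")
        groups[group] = sorted(os.listdir(devices))
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"group {group}: {' '.join(devices)}")
```

Each group is the smallest unit you can pass through to a VM, so a device that shares a group with others can't be handed over on its own.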

PCIe lanes info:

https://www.crystalrugged.com/knowledge/what-is-pcie-slots-cards-lanes/

PCH switching:

 

Quote

So my questions are:

- Is this the correct way to do it, or am I completely wrong here? xDD

- Does it matter for the 3 SATA SSDs which ports I use?

- Is the PCIe card a good idea? Does it make a big difference if you use x4 lanes for spinning drives?

- Do I use PCIe slot 1 for the drives and slot 2 for the network card, or the other way around?


In the end, it's about how you want to interact with it and what it is capable of doing.
The answer depends on the motherboard BIOS and the configuration settings within it.
Some board manufacturers combine multiple bottom x1 slots so they can operate as one x4, but that would require a different mobo.

1. Trial and error, and learn from the mistakes. There are no wrong questions or answers, just wrong outcomes...

 

The 3 SATA SSDs will connect to the onboard SATA ports. Please make sure you are plugged into ports 0, 1, 2 (labeled 1, 2, 3 on some boards), as some SATA ports can be disabled to enable the 2nd NVMe slot. The block diagram above (assuming it's correct for your hardware) means the SATA ports share their PCIe bandwidth with the NVMe drives: that drive will most likely run in x2 mode instead of x4 mode, so you may see a performance hit there.

2. The answer is no, but it depends: do you have enough onboard SATA ports? Unraid is not drive-order specific; it pulls the GUID/UUID of each disk to maintain the drive arrangement.


The board diagram tells me that the second NVMe slot shares PCIe bandwidth with the 2nd x16 slot. If you really look at the soldered pins, that 2nd x16 slot is actually electrically wired as x4 (according to your diagram above) -- it was an x8, but 4 PCIe lanes were taken for the 2nd NVMe slot. This has been common in the motherboard manufacturing space.

I would need more info, such as the processor (to give you your total number of PCIe lanes) and the motherboard model (to potentially help with BIOS PCIe settings and the layout of PCIe devices), to get a true block diagram. (They are manufacturer specific and processor generic.)

My recommendation is putting the x4 device (the DeLOCK HBA?) into your 1st PCIe x16 slot.

Is the DeLOCK card an HBA? If you plan to use a VM and pass the DeLOCK card through over PCIe, make sure your Unraid array drives are separate from the 5-port SATA card.
 

FYI, they make x1 fiber cards. 1Gb fiber: https://www.amazon.com/Gigabit-Network-Single-1000bps-Desktop/dp/B086YJNV11


FYI: PCIe x1 is only capable of 2.5 Gbps at Gen1 speeds, so even if you found a 10-gig card that fit, it wouldn't be able to run at full speed. You'd need a minimum of an x4 slot for 10 gig.
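To put rough numbers on that: usable PCIe throughput is the transfer rate times the encoding efficiency times the lane count. A back-of-envelope sketch using the published per-generation rates (not board-specific measurements):

```python
# Rough usable bandwidth of a PCIe link: rate (GT/s) x encoding efficiency x lanes.
GENS = {
    1: (2.5, 8 / 10),     # 2.5 GT/s, 8b/10b encoding
    2: (5.0, 8 / 10),     # 5.0 GT/s, 8b/10b encoding
    3: (8.0, 128 / 130),  # 8.0 GT/s, 128b/130b encoding
}

def link_gbps(gen: int, lanes: int = 1) -> float:
    rate, efficiency = GENS[gen]
    return rate * efficiency * lanes

for gen in GENS:
    print(f"Gen{gen} x1: {link_gbps(gen):.1f} Gb/s")
# Gen3 x1 comes out to ~7.9 Gb/s -- still short of a 10GbE line rate.
print(f"Gen3 x4: {link_gbps(3, 4):.1f} Gb/s")
```

Even at Gen3, one lane (~7.9 Gb/s) can't feed a 10GbE port at full speed, which is why an x4 slot is the safer home for the NIC.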


So your SFP/fiber NIC will go in the second x16 slot.

Summary:
*So I would recommend having the HBA in the top slot (if at all) and the fiber NIC in the 2nd x16 slot; install both NVMe drives and use the onboard SATA ports for the disk drives.

--Get an x1-to-x16 miner riser if you need a GPU for display-out testing: https://a.co/d/bvvtc1M
A graphics card, while having a full x16 connector, needs at least x8 for full memory and graphics capabilities; an x1 riser will still give you a display out for testing and boot if your motherboard/CPU doesn't have onboard graphics...

 

Edited by bmartino1

IMHO,
I have personally found the AM4 AMD Ryzen 5-9 processors with a B450 or higher chipset to be more than capable of handling my PCIe needs. The trick is finding a good motherboard, and finding the proper PCIe devices to fit your needs can indeed be torture.

Just to let you know, they do make SATA expander HBA cards for x1 PCIe slots. I would recommend one of these for your SATA SSD and HDD expansion.

https://a.co/d/eOPTq2U

This could maybe open up the x16 at the top for a full graphics card for VM/Docker use.

Based on the block diagram posted, the loss of other essential PCIe lanes and devices, including the diagrammed switching from the SATA controller to the NVMe, tells me that this is a terrible/junk board. The PCH switch may be a BIOS setting for how your processor lanes handle the devices plugged in.


Thanks, it's a lot to wrap my head around; I will need to read through it a few times to fully understand xDD

Well, some more information:

- I have a 10400F for now, but I am upgrading to a 10600K for the onboard graphics (Plex transcoding).

- I found an x4 10Gb network card that I wanted to use just for the VM; I don't think I really need a GPU.

https://www.amazon.com.be/-/nl/ASUS-XG-C100C-netwerkadapter-RJ-45-aansluiting/dp/B072N84DG6/ref=asc_df_B072N84DG6/?tag=begogshpadd0d-21&linkCode=df0&hvadid=655894083480&hvpos=&hvnetw=g&hvrand=3421046420356722456&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=1001034&hvtargid=pla-436375922076&mcid=45c1e64488153f0c8ebb08ba51e9b8fa&th=1

And if I have all my drives on an x1, I think it will perform worse than on an x4, no?

I can't really find anything about HBA in the specs, to be honest, but right now I'm using this:

https://www.amazon.com.be/-/nl/uitbreidingskaart-draadloze-Express-kaartadapter-koellichaam-desktop-pc/dp/B07KQ7CWWS/ref=asc_df_B07KQ7CWWS/?tag=begogshpadd0d-21&linkCode=df0&hvadid=633420095888&hvpos=&hvnetw=g&hvrand=17462768249150925860&hvpone=&hvptwo=&hvqmt=&hvdev=c&hvdvcmdl=&hvlocint=&hvlocphy=1001034&hvtargid=pla-1284909858155&psc=1&mcid=acbf2d161796376c9c458b0d31e52940

and this is the one I was looking at to replace it:

 https://www.allekabels.be/delock/6335/3743505/delock-5-poorts-sata-pci-express-x4-kaart-laag-profiel-vormfactor.html?mc=nl-be&gad_source=1&gclid=CjwKCAjw8diwBhAbEiwA7i_sJe5RXFrFzpL0SzLIuk_BTMXoIGN3Trck78Nvfe1sVrF_wc1GBajXFBoCn7cQAvD_BwE 

- I don't have too much running on the system: a share for my home PCs (404notfound), so I don't have data on them; Plex (movies and TV share); qBittorrent (data share); and a Twingate connector for remote access. Most of the grunt goes to an Ubuntu VM where I have 2 dedicated Rust servers running; I found this is the best way for me and my limited knowledge. The vdisks for the VMs are on the NVMe drive.

- Don't mind the home share; it used to be an iSCSI share for Steam, but I will add the drives to my gaming PC, because it was always online except when I needed it xD

- The block diagram is indeed from the motherboard I'm using. Do I need to replace the "junk board", or will it work for what I need by changing the BIOS?

Thing is, the server used to be my old pre-built, where I got ripped off by the store (it should have had a 10400K), so the board being junk does not surprise me. I do have some budget to spend, but not much more before it stops making sense xD

 



OK, we go to the processor manufacturer to get info. Since that is what you have, that is what I will post info on. You may already have and know some of this information.
https://www.intel.com/content/www/us/en/products/sku/199278/intel-core-i510400f-processor-12m-cache-up-to-4-30-ghz/specifications.html
https://www.techpowerup.com/review/intel-core-i5-10400f/3.html

 

This is your processor's generic block diagram:



The board's "PCH switching" - bifurcation...

What this means for your PCIe devices and lanes comes down to the chipset...

The top left of the diagram is what we care about and will talk about. It's one or the other...

You can have a single graphics card in the top slot and get all 16 lanes, with the NVMe in the top slot. (This generation of processor has up to 24 PCIe lanes, meaning there are 20 lanes left available, which is why it has bifurcation / PCH switching.) If a single graphics card sits in the top slot and uses all 16 lanes, only 4 (maybe 8) are left for use by the system and the other parts on the board.

 

So to accommodate and use other devices, you can have the top slot run as x8 and the 2nd slot as x8, pulling from the 20/24 total lanes...

-This is the configuration it goes into when you use both onboard NVMe slots. NVMe can run in x2 mode, meaning only 2 PCIe lanes; most run with x4 lanes. The reason I use 20 as the total is that the top NVMe usually takes the other 4 lanes, which go direct to the processor. So the top slot is the NVMe I recommend for full use in your VM setup.
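The lane arithmetic above can be written out explicitly; the totals below are the numbers used in this post (24 lanes, 4 of them wired straight to the CPU for M.2_1), treated as assumptions rather than datasheet values:

```python
# Lane budget from the discussion above (assumed totals, not a datasheet).
TOTAL = 24
m2_1 = 4  # top NVMe slot, wired direct to the CPU

configs = {
    "single x16": {"slot1": 16, "slot2": 0},       # one card takes all 16 CPU lanes
    "x8/x8 bifurcated": {"slot1": 8, "slot2": 8},  # same 16 lanes, split in two
}
for name, slots in configs.items():
    leftover = TOTAL - m2_1 - sum(slots.values())
    print(f"{name}: slot1=x{slots['slot1']}, slot2=x{slots['slot2']}, "
          f"left for chipset/other: {leftover}")
```

Either way the same 16 CPU lanes are in play; bifurcation only divides them between the two slots, which is why enabling the second NVMe eats into the second slot's width.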

 

As the second slot is taking 4 lanes (an NVMe drive uses four PCIe lanes, resulting in data exchange roughly four times faster than a SATA connection), this means that while there are 4 lanes dedicated to the NVMe in the second slot, that NVMe is sharing those 4 PCIe lanes with the onboard SATA ports. (This is the manufacturer's doing, to put as many features on the board as possible; I'm not sure if there is a generation that fixes this without paying quite a bit for it.) Everything from the USB ports to the onboard display taps into the PCIe lanes; this is handled by your southbridge controller, which on Intel means the DMI link.

 

So in theory there is a bottleneck, since even a single SATA drive needs a share of a PCIe lane; it depends on the constant reads and writes being performed. This is handled by the BIOS and PCIe switching. The more lanes, the better. A single PCIe x1 link has a fixed amount of bandwidth, so 4 SATA ports on a single x1 slot will usually not show a performance hit, but technically there is one.
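As a rough illustration of that "technically there is one" point, here is a sketch with assumed round numbers (~250 MB/s peak sequential per spinning drive, ~500 MB/s usable on a PCIe Gen2 x1 link; real figures vary by drive and generation):

```python
# Back-of-envelope check: can N spinning drives saturate one shared x1 link?
# Assumed figures: modern HDD peaks around 250 MB/s sequential;
# a PCIe Gen2 x1 link carries roughly 500 MB/s of usable bandwidth.
HDD_MBPS = 250
PCIE_GEN2_X1_MBPS = 500

def shared_link_saturated(n_drives: int) -> bool:
    """True if n drives streaming at full speed would exceed the x1 link."""
    return n_drives * HDD_MBPS > PCIE_GEN2_X1_MBPS

for n in (1, 2, 4):
    status = "bottlenecked" if shared_link_saturated(n) else "fine"
    print(f"{n} drive(s) streaming at once: {status}")
```

Day-to-day access touches one data drive at a time, which stays under the cap; it's parity checks and rebuilds, where every drive streams at once, that would be throttled (four drives would average roughly 125 MB/s each).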

This is where the trial and error comes in, as it's the silicon lottery and manufacturer discretion on how they handle the firmware in charge of the PCIe switching.

 

Summary

No, a board replacement is not needed; the above is just to inform you of other possibilities. The more you add to the PCIe lanes, the more lanes the processor needs to have; this is determined at the processor level and by the processor/chipset layout per manufacturer. This reminds me of Jackie Chan's uncle from the show: "You must do more research!"


 

I don't see an issue with your PCIe devices or outline, nor a problem with the processor upgrade path. The processor upgrade will not change the PCIe layout or the total number of lanes.

 

As stated above, in theory there is a performance hit, but I doubt you will ever see or notice it.

Some HBAs have onboard PCIe switching to help with this.

 

 

Some hardware that I have used with Unraid:

Side hardware note: I have used this NIC for router passthrough testing with Unraid and know that it works well.

https://a.co/d/8L07p5x

 

There are also x1 adapters for NVMe drives; these force x2 mode on them:

https://a.co/d/gusHMu3

 

I have also tested and used these HBAs that have an onboard chip.

This one is x8; they make x4 variants with fewer SATA ports: https://a.co/d/fNNL1CW. At x4, this HBA can handle full SATA speeds for 4 disks only. I have also used these in the past: https://a.co/d/4iavr9c

 

 

Edited by bmartino1

So if I get it right, the best practice for my board/CPU is to not use any SATA ports and move all the disks to an HBA controller in the first PCIe x16 slot.

If I get the x8 card I can add 2 more disks, which is good for me; if they get full I'll just get some bigger ones.

Then use the second slot for the NIC.

I'm pretty sure the VMs use the NVMe close to the CPU, so I guess I got that right xD

Is there anything I need to change in the BIOS under the PCIe settings?

Well, I need to get into BIOS stuff anyway, because I noticed my VMs don't seem to boost to max core clock speed. That was going to be my next step, but I think that's better for another topic.

 

 

The HBA you posted will take 4 to 8 weeks to arrive, but I think this one has the same specs and I can get it tomorrow xD

https://www.amazon.com.be/-/en/Kafuty-RAID-Karte-Hauptsteuerdiskette-RAID0/dp/B084SKFBCN/ref=sr_1_17?crid 

 

Already a massive thanks! I learned a lot!


Correct. To get the full use of the NVMe drives without a performance hit, I would recommend the HBA in the top slot handling all disks.

BIOS is tricky to assist with online; it's more that you should be aware that BIOS settings affect PCIe switching/bifurcation. On some AMD BIOSes, for example, you can force-set the modes and generations instead of leaving them on auto-detect.

If going after an x8 PCIe SAS HBA card, make sure it is in "IT mode" and not RAID mode. Also verify when buying that the SAS-to-4x-SATA breakout cables are included.

Edited by bmartino1

Well, I can't find one with cables, but if I buy them separately it's around the same price.

Do all cards have IT mode? I don't see it in the specs.

Is this good?

 

Specification:
Item Type: RAID Card
Ports: 8 x internal SATA + SAS ports
Connectors: 2 x mini-SAS SFF-8087 x4 connectors
Processing capacity: 6 Gb/s per port
Main Chip: LSI SAS2208 ROC, dual-core PowerPC 800 MHz
Mini MD2 form factor: approx. 6.6 x 2.536 inches
Host interface: x8 PCI Express 2.0
Cache: 512 MB DDR3 1333 MHz
Optional Battery Backup Unit: LSIiBBU09
Connects up to 128 SATA and/or SAS devices
RAID Levels: 0, 1

Automatic recovery during rebuild
Automatic recovery during remodeling
Online Capacity Expansion (OCE)
Online RAID Level Migration (RLM)
SSD support with SSD Guard
Dedicated global backup disk with recovery backup drive support
Automatic rebuild
Storage enclosure affinity
Emergency SATA backup disk for SAS arrays
Single-controller multipath (failover)
I/O load balancing
Complete RAID management suite

Package list:
1 x RAID card

