SATA on PCI vs. PCIe


tillkrueger


So on a standard unRAID motherboard (like the P5B-VM DO), with 7 SATA drives hooked up to the on-board connectors and 8 drives hooked up to two 4-port PCI SATA controllers, I get parity sync rates of between 10MB/sec and 22MB/sec...

 

If there were an 8-port PCIe (1x) SATA controller that worked on this board, or maybe even a 16-port PCIe (8x) controller like the RocketRaid, what would the potential increase in parity sync speed be? Are we talking a potential increase of 25%, 50%, or even 100%?
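
As a rough back-of-the-envelope sketch (a hedged estimate, not a measurement: the ~133MB/sec shared-PCI and ~250MB/sec-per-lane PCIe 1.0 figures are theoretical maxima, and the ~60MB/sec per-drive rate is an assumption), the per-drive share during a sync looks roughly like this:

```python
# Rough per-drive share of bus bandwidth during a parity sync, assuming
# 8 drives are read simultaneously and the bus splits its bandwidth evenly.
# Bus figures are theoretical maxima; sustained real-world rates are lower.
BUS_MB_S = {
    "PCI 32-bit/33MHz (shared)": 133,
    "PCIe 1.0 x1": 250,    # per lane, per direction
    "PCIe 1.0 x4": 1000,
    "PCIe 1.0 x8": 2000,
}
DRIVES_ON_CARD = 8
DRIVE_MB_S = 60            # assumed sustained rate of a single drive

for bus, bandwidth in BUS_MB_S.items():
    per_drive = min(bandwidth / DRIVES_ON_CARD, DRIVE_MB_S)
    print(f"{bus}: ~{per_drive:.0f} MB/s per drive with {DRIVES_ON_CARD} drives")
```

By that naive math, a PCIe x4 or x8 link already has enough headroom that the drives themselves become the limit, so the jump from a shared PCI bus could easily be more than 100%.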

 

Just wondering how much brain-share (and money) this topic is actually worth.

 

While somewhat off topic, how much of an increase in network performance would a PCIe (1x) network adapter give over the on-board controller (if any)? I get about 30-35MB/sec writes to the unRAID when not using parity, and quite a bit less when the parity calculations make the throughput drop to 0 every few seconds.
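
For the network side, a minimal sanity check (the ~110MB/sec usable figure is an assumed ballpark for gigabit after TCP/SMB overhead, not a measurement):

```python
# Hypothetical ceiling for writes over gigabit Ethernet vs. the rate above.
gigabit_raw_mb_s = 1000 / 8      # 125 MB/s raw line rate
usable_mb_s = 110                # assumed payload after protocol overhead
observed_mb_s = 35               # write rate reported above
print(f"{observed_mb_s} MB/s is ~{observed_mb_s / usable_mb_s:.0%} "
      f"of an estimated {usable_mb_s} MB/s usable over gigabit")
```

If that's roughly right, the wire itself has plenty of headroom at 35MB/sec, so a PCIe NIC would only help if the on-board controller (or the way it attaches to the chipset) is the actual limit.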

 

Anyone with experience or valid theories in these matters?

Link to comment
Guest Sparkie

The PCIe bus should handle more throughput in most cases. It will basically allow more to come through at once. I've always had better performance using an Intel PCIe NIC than most onboard NICs in Linux.

Link to comment

As mentioned at http://lime-technology.com/forum/index.php?topic=1660.0, I recently installed 2 PCIe cards (1x) with 2 SATA ports each on a P5B-VM DO running 4.3.beta3. This is in addition to a Promise TX4 PCI card. I've found that the PCI card is a real bottleneck when doing parity check. Off the top of my head, I am seeing low 20s MB/s with 4 HDDs on the PCI card, high 30s MB/s with only 2 HDDs, and I think even better without any HDDs on the PCI card. These numbers seemingly do not change whether I have 0 or 4 HDDs on the PCIe cards.

 

My tentative (and largely uninformed) conclusions are i) that PCI has higher latency than PCIe (on the P5B-VM DO) and ii) that the lower bandwidth of PCI becomes a real factor for unRAID performance beyond 2 HDDs. This should mean that more than 4 HDDs on a PCIe 1x card would also become a bottleneck.
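
A minimal sketch of the shared-bus arithmetic that seems to fit these numbers (the ~90MB/sec sustained PCI figure and the per-drive rate are assumptions, not measurements):

```python
# Model: every drive on the shared PCI bus gets an even slice of the bus's
# sustained throughput, and the parity check runs at the pace of the
# slowest drive in the array.
PCI_SUSTAINED_MB_S = 90   # assumed real-world throughput of the PCI bus
DRIVE_MB_S = 47           # assumed sustained rate of the slowest drive

for drives_on_pci in (4, 3, 2, 0):
    share = PCI_SUSTAINED_MB_S / drives_on_pci if drives_on_pci else DRIVE_MB_S
    check_speed = min(share, DRIVE_MB_S)
    print(f"{drives_on_pci} drives on PCI -> parity check ~{check_speed:.0f} MB/s")
```

That crude model lands in the same ballpark as the figures above, which points at bandwidth sharing on the PCI bus at least as much as at latency.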

 

For completeness, I can mention that I have the parity drive on the JMicron port and currently run a total of 12 HDDs, all SATA.

Link to comment

Thanks, y'all.

 

Sparkie, you're talking about an Intel Gbit PCIe LAN card, right? I get about 35MB/sec via my onboard LAN when I copy to the unRAID (with no parity)...do you see much better rates than that with your PCIe controller?

 

Rene, interesting comment/post...I knew that the PCI bus is a bottleneck, just not how *much* of a bottleneck...since I have 15 drives in my unRAID (6 on the Intel bridge, 1 - my parity - on the JMicron, like you, and 8 on the two Promise TX4's), we have very similar systems...cool...my performance also seems in line with what you had before adding the PCIe cards to your system.

 

So now the question is: what's necessary to get a RocketRaid 2340 (16-port) recognized and working in the 16x slot, or two RocketRaid 2320's (8-port, 4x each), or at the very least two 2310's (4-port, 4x each)? Tom, I know that you have plenty of work without people like me asking such questions, but is it a matter of enabling/including drivers in the package, or is it deeper than that?

 

I know that unRAID is built/conceived as a low-cost, high-availability solution, but I am sure that some people (like me) would consider paying the $450 premium for a PCIe 16x 16-port SATA controller...having all drives addressed through the same driver might also have some advantages, but that's just an intuition...a famous male intuition.

 

Also, it would simplify the choice of motherboards for systems that need the full 16-drive capacity, since you wouldn't have to match a high on-board SATA port count with the right set of expansion slots, etc...am I totally off-base?

 

Keep us posted about your quest, Rene...I think you're one of many who are eyeing the PCI vs PCIe SATA issue...count me in.

Link to comment

Just double-checked the parity-checking speed.

* Motherboard is P5B-VM DO; unRAID is 4.3.beta3.

* Parity is a 500GB SATA-II on the JMicron port.

* The onboard Intel ports have 5 500GB SATA-IIs.

* The PCIe cards have 3 500GB SATA-IIs.

* The PCI card has 2 250GB SATA-IIs and 1 36GB Raptor (SATA-I).

 

I'm seeing

- high 20s MB/s with 3 active HDDs on the PCI card, then

- low-mid 40s MB/s with 2 active HDDs on the PCI card, then

- mid-high 40s MB/s with 0 active HDDs on the PCI card.

 

---

 

PS! It's probably worth mentioning that all three numbers trail off towards the end of the drive(s) in question, maybe for the last 10% or so of the space. Other than that, the speeds seem to remain steady.

Link to comment

I'm sure JBOD on these controllers would work if the driver were compiled into the kernel.

 

Another reason I would be inclined to use a RocketRaid or 3Ware card is to do RAID0 for the Parity drive.

 

(Unless there will be the ability to do RAID0 via the unRAID software itself.)

 

 

 

 

Link to comment

Wow, that's quite a difference between 3 drives on the PCI controller and 0 drives on the PCI controller...I have (pretty much) the same setup, with 8 drives on the two PCI controllers (Promise TX4's), and parity sync rates ranging from the low 10s of MB/sec to just barely into the 20s (this is with 15 drives total).

 

So it appears as if compiling the drivers for the most popular RocketRaid/3Ware cards into the kernel (thanks WeeboTech!) could result in potential parity-sync speed-ups of 100-300% over what I/we have right now.
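
A rough projection for the 15-drive setup above, assuming both Promise TX4's share the same PCI bus (typical for boards of this class, but an assumption here) and using the same ballpark figures as before:

```python
# Very rough projection: 8 drives sharing one PCI bus today vs. moving
# them all off PCI (onto a PCIe card or on-board ports).
PCI_SUSTAINED_MB_S = 90   # assumed real-world PCI throughput
DRIVE_MB_S = 45           # assumed sustained rate of the slowest drive

now = min(PCI_SUSTAINED_MB_S / 8, DRIVE_MB_S)   # 8 drives on the PCI bus
after = DRIVE_MB_S                              # PCI no longer in the path
print(f"now ~{now:.0f} MB/s, off PCI ~{after:.0f} MB/s -> "
      f"roughly {after / now:.1f}x, i.e. about {after / now - 1:.0%} faster")
```

The ~11MB/sec "now" estimate is at least in the neighbourhood of the sync rates reported above, and the projected gain tops out right around the upper end of that 100-300% range.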

 

Tom, any chance of compiling those drivers into one of your next builds for testing?

Link to comment

Hi,

 

I just built my unRAID server with a PRO key using the same P5B-VM DO motherboard, and came across this thread to discover that users here seem to be using the JMicron port for the parity drive. I have all 3 SATA II HDs plugged into the on-board SATA ports. Will I gain any performance by using the JMicron port for the parity drive?

 

I am sorry if I have run off topic a bit.

 

Thanks,

---Tom

 

 


Link to comment

I don't think you would...the only reason I am using the JMicron port for parity is that I use all the other 6 for data, and in my mind it just made sense to have one "kind" of drive (data) on the Intel ports and the other "kind" of drive (parity) on the JMicron chipset...if I only had a 6-drive system, I would put the 5 data drives *and* the parity drive on the Intel ports, just to have them all under the same chipset/driver...same reason I would like to have all 16 drives (well, 15, actually) on one PCIe SATA controller instead...it's just my idea of neat, and may do nothing for speed in the end...although it does seem to be true that taking the PCI controllers we are using out of the equation could do great things for speed.

 

As for you, I don't think you're anywhere near the bandwidth bottleneck of the Intel chipset with your 3 drives, so just keep using it as-is until you have 6 data drives, at which point you can hook up the parity drive to your JMicron on-board controller.
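
As a quick sanity check on that claim (the DMI and per-drive figures below are ballpark assumptions for this generation of hardware, not verified specs for this exact board):

```python
# Three drives on the Intel southbridge SATA ports vs. the link feeding
# the southbridge (~1 GB/s each way over DMI is the assumed figure).
DMI_MB_S = 1000      # assumed usable bandwidth to the southbridge
DRIVE_MB_S = 70      # assumed sustained rate of a 500GB SATA-II drive

drives = 3
demand = drives * DRIVE_MB_S
print(f"{drives} drives need ~{demand} MB/s of ~{DMI_MB_S} MB/s available "
      f"({demand / DMI_MB_S:.0%} used)")
```

Even with all 6 on-board ports in use, the demand on those assumptions stays around 420MB/sec, nowhere near the limit.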

Link to comment

I have a different Asus board, but it also has a combination of Intel SATA ports and a JMicron port.  My understanding is that the JMicron port(s) are connected via the PCIe bus, so in theory slower than the Intel ones connected directly to the southbridge?  Anyone know if this is true?

Link to comment

Great! Thanks for the information. It will take me some time to fill up all 6 ports unless HD prices drop a ton soon.

 

--Tom

Link to comment

I have a different Asus board, but it also has a combination of Intel SATA ports and a JMicron port.  My understanding is that the JMicron port(s) are connected via the PCIe bus, so in theory slower than the Intel ones connected directly to the southbridge?  Anyone know if this is true?

 

The pertinent issue would be latency rather than bandwidth, and even that is likely to be much less of an issue. If you search online, you'll find several people who have compared the two for throughput, and the way I read their findings, we won't see any difference in unRAID performance when the two technologies are given a straight comparison for parity. I use the JMicron partly out of neatness, like tillkrueger, and partly to avoid any potential slowdown on the Intel controller when it has 6 drives connected. Of course, no slowdown might happen at all, and at any rate these considerations are only really relevant for parity checking, where all drives are running in sync, as it were, and the process as a whole is unlikely to suffer any more or less whether it is the parity drive or a data drive that sits on that last Intel port.

 

The only worthwhile issue to pursue, at least for the time being, seems to be to make sure that you have no more than 2 drives on the PCI bus.

Link to comment
