
PCI-E slots on X9SCM-F



The X9SCM-F motherboard in my main server has four PCIe 2.0 slots. The two upper slots are x8 and the two lower slots are x4.

 

Will I run into bandwidth problems if an M1015 in an x4 slot is fully loaded with 8 Hitachi 4TB 7200rpm hard disks?

 

Can I place the parity and cache disks on a controller in the lowest slot without bandwidth problems? Which controller would you recommend for that?
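
For a rough sense of scale, here is a back-of-envelope check; the per-lane and per-drive figures are assumptions (a PCIe 2.0 lane carries ~500 MB/s after 8b/10b encoding, call it ~400 MB/s usable after protocol overhead; a 7200rpm 4TB drive peaks at roughly 180 MB/s on its outer tracks):

  x4 slot, usable:        4 lanes x ~400 MB/s ≈ 1600 MB/s
  8 drives, all at peak:  8 x ~180 MB/s ≈ 1440 MB/s

On those numbers the drives just about fit, with little headroom to spare.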

 

 



Parity and cache should go on the motherboard ports.



 

Why, speed? What if I want to run ESXi or KVM? Will I then need to pass through the onboard ports and lose them for data storage use?



 

 

That's a recommendation, and mostly about speed: the motherboard ports usually have a fast interface to the CPU.

If an individual controller and its slot have the bandwidth to handle the two drives, it should work fine.

I had my parity disk on an individual controller for years, and it benefited performance.

 


A while ago I got my hands on 8 fast SSDs for a few days and ran some tests using a script I found on the forum; it reads a group of disks sequentially and then simultaneously to look for bottlenecks.

 

8 x Kingston V300 SSDs on a SAS2LP.

 

Sequentially:

 

sdb = 445.75 MB/sec
sdc = 449.81 MB/sec
sdd = 463.40 MB/sec
sde = 449.08 MB/sec
sdf = 449.01 MB/sec
sdg = 450.68 MB/sec
sdh = 450.35 MB/sec
sdi = 451.60 MB/sec

 

Simultaneously on a PCIe gen2 x8 slot:

 

sdi = 368.40 MB/sec
sde = 368.86 MB/sec
sdb = 369.32 MB/sec
sdd = 368.92 MB/sec
sdf = 368.48 MB/sec
sdg = 368.61 MB/sec
sdh = 368.72 MB/sec
sdc = 369.30 MB/sec

 

Simultaneously on a PCIe gen2 x4 slot:

 

sdb = 187.79 MB/sec
sdd = 188.23 MB/sec
sdf = 193.98 MB/sec
sde = 187.69 MB/sec
sdc = 187.79 MB/sec
sdi = 194.28 MB/sec
sdh = 193.80 MB/sec
sdg = 193.68 MB/sec

 

These results give ballpark "real world" bandwidth, so an x4 slot isn't a limit for 8 current HDDs, but it's close.
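
The script itself isn't reproduced in this thread; for the general idea, here is a minimal sketch of the same kind of test (a hypothetical stand-in, not the actual forum script, assuming the disks under test are /dev/sdb through /dev/sdi):

#!/bin/bash
# Read 4 GB from each disk, first one at a time, then all at once.
DISKS="sdb sdc sdd sde sdf sdg sdh sdi"

echo "Sequential pass (per-disk maximum):"
for d in $DISKS; do
  dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct 2>&1 | tail -1
done

echo "Simultaneous pass (exposes controller/slot bottlenecks):"
for d in $DISKS; do
  dd if=/dev/$d of=/dev/null bs=1M count=4096 iflag=direct 2>&1 | tail -1 &
done
wait

The iflag=direct bypasses the page cache, so the numbers reflect the disks and controller rather than RAM.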



 

That is damn interesting! Thanks for posting.



 

Extremely informative, thank you for posting. The behavior exhibited is pretty much what I theorized.

Were a parity sync and parity check executed? If so, can you post that additional information?


Yes, it's very interesting to see actual throughput with SSDs, which clearly shows the controller's bandwidth limits.

 

This confirms that with a PCIe v2 bus, a slot running at x4 isn't an appreciable bottleneck for rotating-platter drives, although 7200rpm drives with 1TB platters would be very slightly slowed down (~5%) on the outermost tracks.

 

 



 

7200 RPM 6TB drives get 225 MB/s on the outer tracks.



 

Yes, but those have 1.2TB platters ... I was talking about drives with 1TB platters, which is the max density I've seen in 4TB drives like the ones dikkiedirk asked about. In fact, I just looked up the actual Hitachi units he asked about: they use five 800GB platters for their 7200rpm 4TB units, so there's NO slowdown at all with those drives on a controller running at x4 speed.
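
Rough numbers to back that up, using the ~190 MB/s per drive measured in the x4 slot above, and assuming roughly 170 MB/s on the outer tracks of an 800GB-platter 7200rpm drive:

  x4 slot, measured with SSDs:  8 x ~190 MB/s ≈ 1540 MB/s
  8 x 4TB drives, outer tracks: 8 x ~170 MB/s ≈ 1360 MB/s

The drives' combined peak stays below what the slot actually delivered in the SSD test.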

 

 



 

It was extra information to help make an educated decision in the future.

I use this kind of information for each purchase when upgrading drives.

 

I suppose a good way of determining a bottleneck is to do a dd read of each drive at an idle time to get the maximum speed of a single drive.

Then weigh that against how many drives share the slot and their usage patterns.


Certainly not a bad idea to do a dd read test ... especially as drives get even denser.

 

The new 10TB Seagate SMRs are going to use 1.66TB platters ... so even at a slower rpm (I suspect 5900) they'll outperform 1.2TB 7200rpm units in sustained speed on the outer tracks. And who knows what densities we may see in 2-3 years!

 



 

The results are from a script I found on the forum. I did run some parity checks, but I had to use unRAID v5 because of the issue with v6 and the SAS2LP.

 

With the card in the PCIe x4 slot, the parity check runs at around 200 MB/s; the average speed at the end was 202.47 MB/s. In the PCIe x8 slot, speed and CPU load fluctuate a lot (I don't know whether it's my Celeron G1620 or some other component of my test server that can't handle the full speed); in the end the average was 256 MB/s.

[Screenshots: parity check results with the card in the PCIe x4 and x8 slots]


For comparison, here are some example speeds of drive models I have on an X10SL7, connected via motherboard ports.

These tests were executed one at a time on an idle system to show the maximums for this platform and the associated disks.

 

ST3000DM001-1CH166 Seagate Barracuda 7200.14 (AF)

root@unRAIDb:~# dd of=/dev/null bs=4096 count=10240000 if=/dev/disk/by-id/ata-ST3000DM001-1CH166_W1F1GTFJ

10240000+0 records in

10240000+0 records out

41943040000 bytes (42 GB) copied, 216.953 s, 193 MB/s

 

ST4000VN000-1H4168 Seagate 5900 RPM 4TB

root@unRAIDb:~# dd of=/dev/null bs=4096 count=10240000 if=/dev/disk/by-id/ata-ST4000VN000-1H4168_S301HS8H

10240000+0 records in

10240000+0 records out

41943040000 bytes (42 GB) copied, 246.716 s, 170 MB/s

 

ST6000DX000-1H217Z Seagate 7200 RPM 6TB

root@unRAIDb:~# dd of=/dev/null bs=4096 count=10240000 if=/dev/disk/by-id/ata-ST6000DX000-1H217Z_Z4D0EE7M

10240000+0 records in

10240000+0 records out

41943040000 bytes (42 GB) copied, 190.552 s, 220 MB/s

 

HGST HDN726060ALE610 7200 RPM 6TB

root@unRAIDb:~# dd of=/dev/null bs=4096 count=10240000 if=/dev/disk/by-id/ata-HGST_HDN726060ALE610_NAG1D7TP

10240000+0 records in

10240000+0 records out

41943040000 bytes (42 GB) copied, 185.776 s, 226 MB/s

 

Samsung SSD 840 PRO Series 500GB SSD

root@unRAIDb:~# dd of=/dev/null bs=4096 count=10240000 if=/dev/disk/by-id/ata-Samsung_SSD_840_PRO_Series_S1AXNSAF701196M

10240000+0 records in

10240000+0 records out

41943040000 bytes (42 GB) copied, 79.4316 s, 528 MB/s



Can you provide the link to that script?

 

The link is here; you were part of the discussion. :)

 

Great job on capturing this information!

 

I always take screenshots because I know that after a few weeks I won’t be sure of the exact results.

 

I wasn't sure whether I was part of the discussion or it was someone else's cooler script. ;)

