Bandwidth Limit



9 hours ago, johnnie.black said:

Each wide port has a total bandwidth of 4x the linking speed, so in this case, since it links at SATA2 speed (300MB/s), the max theoretical bandwidth is 4 x 300MB/s = 1200MB/s. Of that, 1100MB/s is what I measured as the real-world usable bandwidth, which is pretty good considering that PCIe, for example, usually only gives about 70% usable.

 

OK, to make sure I'm clear and understand my situation: I'm running 8 disks off one 4x port (the external port) via a 4x cable to the MD1000 (which, by the way, hates being connected to unRaid in unified mode using all 15 disks... it locks up unRaid before you can ever start the array). The MD1000 lists the connector as "One x4 3GB SAS (SFF 8470)." The disks are all SATA and show 3Gbps connectivity. Technically all 8 disks should have 137.5MB/s available to them in an ideal world (1100/8), correct? But right now I'm seeing 62.5MB/s on all disks (500/8), basically a 50% reduction. Doesn't it seem odd that I'm nearly splitting the available bandwidth in half? HP intends 4 disks per port when the expander is used in a ProLiant. Could this be the cause of my problem? I ask because I know I can pass more read data over the card when using additional ports, as shown in the earlier testing.

 

 

Updated below as I was typing the first part of this reply:

I've been scouring the internet looking for anyone with similar issues but can't find any. The closest I've come is someone in 2015 using 15 WD Reds (connection to the server unknown) who says he only managed 300MB/s in a RAID configuration... and someone else using a PERC 5/E capped at 350MB/s... so I guess my 500MB/s should be a blessing. I guess the enclosure is just slow... I mean, it is at least 7 years old... ha! So maybe the expander isn't the issue here, for me anyway.

 

 

It would be nice to see a read/write speed test integrated into unRaid that would allow you to select any disks, in the array or unassigned, and test them at the same time to help diagnose hardware issues. But maybe that's wishful thinking...
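In the meantime, something like the sketch below can do a crude version of that from the command line. The device names are only examples, so substitute your own disks, and run it while the array is otherwise idle since each dd adds load:

```bash
#!/bin/bash
# Crude parallel read test: read 1 GiB from each raw device at the same time
# and see whether the combined throughput hits a shared ceiling.
# Device names are placeholders -- replace them with your own disks.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

for d in $DISKS; do
    # direct I/O so the page cache doesn't inflate the numbers
    dd if="$d" of=/dev/null bs=1M count=1024 iflag=direct 2>&1 | tail -1 &
done
wait   # each dd reports its own MB/s; add them up for the aggregate
```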

Link to comment

No, I think you are on the right track. I am running 25 disks over one 4x connection right now (even though 2 physical cables are connected) and I am only getting about half the bandwidth I should be getting over that 4x port. I would be interested in seeing how you have everything cabled. I am also interested in trying some different cables on my system.

 

I'll do some more troubleshooting in a day or so as the parity check should be done by then. I'll make sure to document and report my findings.

Link to comment
38 minutes ago, lonnie776 said:

No, I think you are on the right track. I am running 25 disks over one 4x connection right now (even though 2 physical cables are connected) and I am only getting about half the bandwidth I should be getting over that 4x port. I would be interested in seeing how you have everything cabled. I am also interested in trying some different cables on my system.

 

I'll do some more troubleshooting in a day or so as the parity check should be done by then. I'll make sure to document and report my findings.

 

Mine is pretty straightforward, and thanks to 3rd grade art class, I can show you! (I can't show you an actual picture because the server is rebuilding a disk at the moment, and quite frankly it's harder to see with the cabling as it is in the machine anyway.)

 

[Attachment: Screen Shot 2017-05-01 at 9:12 PM (cabling diagram)]

 

 

 

Link to comment

Yup, that works for me. So just to reiterate: your system reports an 8x connection through the HBA, and you are running 8 disks off one 4x port in the rear and 2 disks in the front cage, which should have a full connection each (no multiplexing)?

 

So I am seeing that you are running through a second, integrated expander in the MD1000. Does that sound correct?

Link to comment
4 hours ago, lonnie776 said:

Yup, that works for me. So just to reiterate: your system reports an 8x connection through the HBA, and you are running 8 disks off one 4x port in the rear and 2 disks in the front cage, which should have a full connection each (no multiplexing)?

 

So I am seeing that you are running through a second, integrated expander in the MD1000. Does that sound correct?

So I had another look at the diags you posted and there are most definitely 2 expanders, if not 3. But is there also a P410i in the MD1000?

 

If there is, I would disconnect it, as I suspect it would allocate some bandwidth to itself.

 

Also, Johnnie, would you mind posting your diags so I know what a properly set up system should look like? Specifically, I am interested in the LSSCSI.txt file under system.
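For reference, the same information can be pulled live on the console. A minimal sketch, assuming lsscsi and the standard Linux sysfs SAS paths are available (they should be, since the diags already include an LSSCSI.txt):

```bash
# List every SCSI device with its transport; SAS addresses and expanders show up here
lsscsi -t

# Just the host adapters (HBAs)
lsscsi -H

# Negotiated link rate of every SAS phy (1.5, 3.0 or 6.0 Gbit per lane)
grep . /sys/class/sas_phy/*/negotiated_linkrate 2>/dev/null
```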

Link to comment
6 hours ago, 1812 said:

Mine is pretty straightforward, and thanks to 3rd grade art class, I can show you! (I can't show you an actual picture because the server is rebuilding a disk at the moment, and quite frankly it's harder to see with the cabling as it is in the machine anyway.)

 

So you have a dual link between the HBA and the HP expander, so the 8x is correctly reported. You then have a single link between the HP expander and the MD1000 (which is also an expander), so naturally the HBA can only talk to the MD1000 through that single link. I don't know if the fact that they are cascaded introduces any other limits, but I don't think so.

 

I don't know if the front cages have another expander built in (they do if a single link can access all the disks).

Link to comment
5 hours ago, lonnie776 said:

Also, Johnnie, would you mind posting your diags so I know what a properly set up system should look like? Specifically, I am interested in the LSSCSI.txt file under system.

 

I don't think you can see anything about link speed in the diags, but I attached them below.

 

So, besides the 8x result I get, I can be sure my expander is using a dual link because that's the only way it could handle the 12 disks connected to it without any limits. The pic shows the start of a parity check: 4.33GB/s for 30 disks is about 145MB/s per disk (the maximum speed of the slowest disks in the server), so the 12 disks on the expander are using ~1750MB/s, well above the single-link total bandwidth.
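The rough arithmetic behind those numbers:

```bash
echo $(( 4330 / 30 ))        # ~144 MB/s per disk during the parity check
echo $(( 4330 / 30 * 12 ))   # ~1730 MB/s for the 12 disks behind the expander
```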

 

tower6-diagnostics-20170502-0754.zip

Screenshot 2017-04-27 13.55.45.png

Link to comment
7 hours ago, lonnie776 said:

Yup, that works for me. So just to reiterate: your system reports an 8x connection through the HBA, and you are running 8 disks off one 4x port in the rear and 2 disks in the front cage, which should have a full connection each (no multiplexing)?

 

There are 7 disks (8 slots total) in the front cage, either cache or unassigned. The MD1000 is array disks only. The pic was just a stock photo.

7 hours ago, lonnie776 said:

So I had another look at the diags you posted and there are most definitely 2 expanders, if not 3. But is there also a P410i in the MD1000?

 

The P410i is the onboard RAID controller and is not in use because it won't do JBOD. It is usually disabled, but every now and then it comes back on through my fiddling around in the BIOS. When I did a reboot yesterday to put in a different video card, I re-disabled it. A quick parity check this morning shows the same read bandwidth as before. unRaid reports both the SAS expander and the MD1000 as SCSI enclosures, so technically 2 "expanders," I guess?

2 hours ago, johnnie.black said:

I don't know if the front cages have another expander built in (they do if a single link can access all the disks).

 

I am not sure of this either, but the front cage disk speed isn't part of the array bandwidth reported previously. Regardless, with the single 4x line from one port to the MD1000, wouldn't that still be 1100MB/s? The SAS expander spec sheet says that a 4x port is "6Gb/s SAS bandwidth = 2400 MB/s" and the enclosure is connected at "One x4 3GB SAS" per Dell documentation.
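Working that through as back-of-envelope arithmetic (accounting only for 8b/10b encoding overhead; this is a sketch, not a spec):

```bash
# One lane negotiated at 3 Gb/s (SATA2 behind the HP expander):
echo $(( 3000 * 8 / 10 / 8 ))       # 300  -> MB/s of payload per lane
# Four lanes in the x4 wide port:
echo $(( 3000 * 8 / 10 / 8 * 4 ))   # 1200 -> MB/s theoretical, ~1100 MB/s real-world
```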

2 hours ago, johnnie.black said:

I don't know if the front cages have another expander built in (they do if a single link can access all the disks).

It is not reported as an "enclosure" like the MD1000 is. It seems transparent to unRaid.

 

 

Nice to bang one's head against the wall so early. Almost more effective than coffee.

Link to comment
47 minutes ago, johnnie.black said:

 

I believe that yes, it should, unless the fact that it's cascaded off another expander makes a difference.

 

I've been doing some digging and I can't find anything that says this setup would provide anything less than what is expected. I even read about people cascading the HP expander into another HP expander with no loss in bandwidth on the ports used.

Link to comment

I was able to find this little nugget, which gives some insight into the theoretical vs. actual throughput of a SAS cable running at 3Gbps per link.

 

Just a thought, but is there a way to force a 6Gbps connection to the drives? All of mine support it, but by default they connect at SATA 2.0 (3Gb/s) for stability/compatibility reasons.

Link to comment
28 minutes ago, lonnie776 said:

Just a thought, but is there a way to force a 6Gbps connection to the drives? All of mine support it, but by default they connect at SATA 2.0 (3Gb/s) for stability/compatibility reasons.

 

That's the problem with these HP expanders: although they link at 6Gbps with SAS2 devices, they can only link at 3Gbps with SATA3 devices (and earlier firmwares will only link at 1.5Gbps with SATA devices). But that still doesn't explain why you guys are only getting ~500MB/s usable from a single link (4 x 300MB/s) when I can get 1100MB/s.
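If you want to confirm what each drive actually negotiated, smartmontools will usually report it even behind a SAS HBA (the device name below is just an example):

```bash
# The "current" value is the negotiated link speed; the other is what the drive supports
smartctl -i /dev/sdb | grep -i 'SATA Version'
# typical output: SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
```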

Link to comment
26 minutes ago, johnnie.black said:

 

That's the problem with these HP expanders: although they link at 6Gbps with SAS2 devices, they can only link at 3Gbps with SATA3 devices (and earlier firmwares will only link at 1.5Gbps with SATA devices). But that still doesn't explain why you guys are only getting ~500MB/s usable from a single link (4 x 300MB/s) when I can get 1100MB/s.

Ahh, that makes sense. I would also imagine that the linking speed carries all the way back to the expander and HBA.

 

Also, Johnnie, I noticed you have 2 HBAs in your system. Is one built in, or are you putting two HBAs into your SAS expander?

Link to comment
13 minutes ago, lonnie776 said:

I noticed you have 2 HBAs in your system. Is one built in, or are you putting two HBAs into your SAS expander?

 

This is how the 30 disks are connected:

 

LSI #1 -> Expander -> 12 HDDs

LSI #2 -> 8 HDDs

Adaptec 1430SA -> 4 HDDs

Onboard Intel -> 6 HDDs

 

Link to comment
7 hours ago, lonnie776 said:

Oh ok, so I assume you have been testing only the disks behind the expander and are getting speeds above the ~500MB/s that we have been getting?

 

I tested all my controllers and expanders with SSDs to check max bandwidth and posted results here:

 

 

Link to comment

My HBA is in a PCIe 2.0 x4 slot, but it supports up to x8. My understanding is that shouldn't be the cause of only getting half speed on one port of the expander, correct? It would currently have 2000MB/s of bandwidth to the board. I "can" move it to an x8 slot, but that means I have to make a few brackets for other cards (mixed half and full height in the servers), rerun wiring, and a few other things... which is going to be a major PITA, but if for some reason it is a possible solution, then I suppose I could try that?

 

And looking at the numbers, why do I feel like the port that connects to the external enclosure is somehow only able to use 1 PCIe lane through the HBA? I know the card can use more, as evidenced by the 700MB/s that occurred when I ran a parity check and an SSD file move at the same time... It did show slightly more than 500MB/s, but is being 13MB/s over within the margin of error for the Dynamix stats plugin, or is it 100% accurate, meaning that the data coming from the expander is using more than 1 PCIe lane through the HBA?

Link to comment
9 minutes ago, 1812 said:

My HBA is in a PCIe 2.0 x4 slot, but it supports up to x8. My understanding is that shouldn't be the cause of only getting half speed on one port of the expander, correct? It would currently have 2000MB/s of bandwidth to the board. I "can" move it to an x8 slot, but that means I have to make a few brackets for other cards (mixed half and full height in the servers), rerun wiring, and a few other things... which is going to be a major PITA, but if for some reason it is a possible solution, then I suppose I could try that?

 

PCIe x4 is good for about 1500-1600MB/s, so I doubt it will help.

 

10 minutes ago, 1812 said:

And looking at the numbers, why do I feel like the port that connects to the external enclosure is somehow only able to use 1 PCIe lane through the HBA? I know the card can use more, as evidenced by the 700MB/s that occurred when I ran a parity check and an SSD file move at the same time... It did show slightly more than 500MB/s, but is being 13MB/s over within the margin of error for the Dynamix stats plugin, or is it 100% accurate, meaning that the data coming from the expander is using more than 1 PCIe lane through the HBA?

 

A single lane maxes out at about 400MB/s usable bandwidth, so it can't be that.
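The rough numbers behind that (8b/10b encoding only; protocol overhead takes it lower still):

```bash
# PCIe 2.0, per lane, per direction: 5 GT/s with 8b/10b encoding
echo $(( 5000 * 8 / 10 / 8 ))       # 500  -> MB/s raw per lane, ~400 MB/s usable
echo $(( 5000 * 8 / 10 / 8 * 4 ))   # 2000 -> MB/s raw for x4, ~1500-1600 MB/s usable
```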

Link to comment

To help rule out things like HBA issues, I wanted to see how much data I could push through just the front cage, which as shown above is connected via the expander. This was done by running a read test on one crappy SSD, a file transfer between 2 Samsung SSDs using Krusader, and a file transfer from another crappy SSD to an SSD on another server over the network.

 

[Attachment: Screen Shot 2017-05-03 at 8:09 AM]

 

 

So the front cages can move more data, but they are also connected via 2 ports on the expander... so I'm not sure that rules out any per-port limitation, but it does show that the card can move more than the 500MB/s previously shown.

 

I then repeated the test (slightly modified, using larger files for the transfers between SSDs) and added in a parity check with the array:

 

 

[Attachment: Screen Shot 2017-05-03 at 8:17:01 AM]

 

 

So, the HBA can move at least that much data through it. I do have my first 6 threads isolated only for unRaid on the main server, but CPU usage during the bigger test showed no bottleneck there.

 

[Attachment: Screen Shot 2017-05-03 at 8:17:18 AM (CPU usage)]

 

 

 

Link to comment
On 5/1/2017 at 6:36 PM, lonnie776 said:

I'll do some more troubleshooting in a day or so as the parity check should be done by then. I'll make sure to document and report my findings.

I figured out my issues.

 

Issue #1: Only 4x Connection

Solution: When I was playing around with the cables trying to figure something else out, I noticed one cable to my HBA had not clicked in properly. Reinserted it, rebooted, and voila, an x8 connection, but still a limit of 425MB/s.

 

Issue #2: Total bandwidth of 425MB/s MAX

Solution: It turns out that on my particular motherboard (Asus X99-A) only 3 of the 6 PCIe slots are 3.0. Also, my CPU (Intel 5820K) only has 28 available PCIe lanes. The combination of these factors meant that, with all the extra cards I have installed, I had poorly chosen a slot for my HBA and it was only getting an x1 connection at PCIe 2.0 speeds. After looking into it, PCIe 2.0 has a bandwidth limit of 500MB/s per lane. I swapped the card with my 10Gb NIC (which was in a 3.0 slot) and now I get over 2GB/s of throughput, at the cost of only having 500MB/s of bandwidth on my NIC, which in my opinion is by far the lesser of two evils.
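For anyone wanting to check the same thing, the negotiated width and speed are visible with lspci; the bus address below is just an example, find your HBA's address first with something like lspci | grep -i lsi:

```bash
# LnkCap = what the card supports, LnkSta = what it actually negotiated right now
# (run as root so lspci can read the full capability registers)
lspci -s 01:00.0 -vv | grep -E 'LnkCap:|LnkSta:'
```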

 

I know this doesn't really help with your issues, 1812, but I did try my HBA connections on different ports and the expander is truly indifferent as to port allocation. Past that, I would suggest looking into the MD1000 for any limitations there, as what you have should supply 1100MB/s.

 

I hope this helps someone.

Link to comment
45 minutes ago, lonnie776 said:

I know this doesn't really help with your issues, 1812, but I did try my HBA connections on different ports and the expander is truly indifferent as to port allocation. Past that, I would suggest looking into the MD1000 for any limitations there, as what you have should supply 1100MB/s.

 

I'm seriously beginning to think the MD1000 is the "issue." Thankfully I'm not as badly off as this guy was, but I wonder if I could incorporate anything from his tweaks into my setup:

 

https://thias.marmotte.net/2008/01/dell-perc5e-and-md1000-performance-tweaks/

 

Link to comment
