Single link vs dual link



This comes out of a recent thread about bandwidth. I didn't want to hijack that thread, so I figured I would start a new one specifically about my situation: I am trying to figure out how I can achieve dual-link performance.

 

When I run this command on my server:

cat /sys/class/sas_host/host1/device/port-1\:0/sas_port/port-1\:0/num_phys

I get 4, which means single link. I am running a single Dell H310 in my 36-bay Supermicro server, and I have connected the cables from my HBA to the backplanes the way Supermicro recommends.
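For reference, a quicker way to check every SAS port at once; this is just a sketch, and the host/port numbering under /sys will differ from system to system:

# List the phy count for every SAS port the kernel knows about.
# 4 phys = a narrow (single-link) x4 port, 8 phys = a wide (dual-link) x8 port.
for p in /sys/class/sas_port/port-*; do
    echo "$p: $(cat "$p/num_phys") phys"
done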

 

I emailed Supermicro and asked about adding a second HBA to increase performance. For reference, my chassis model is 6047R-E1R36N, and attached is a diagram of the backplanes in my server. This is their reply:

 

The backplane supports a two-cable connection from the HBA or RAID controller to the backplane, which would help increase data transfer bandwidth. However, it will depend on whether the RAID controller is able to support it.

 

The front backplane, BPN-SAS2-846EL1, contains a total of 3 miniSAS (SFF-8087) ports. The rear backplane, BPN-SAS2-826EL1, contains a total of 2 miniSAS (SFF-8087) ports.

 

You could make the connection from the HBA to the backplanes in two ways.

 

The way you have done it is one way: one port on the HBA card connects to the front backplane's PRI_J0 (or PRI_J1/PRI_J2) port, and the other port on the HBA card connects to the rear backplane's PRI_J0 (or PRI_J1) port.

 

The other way is to use two cables to connect the 2 ports on the HBA card to the front backplane's PRI_J1 and PRI_J2 ports, then use a single cable to connect the front backplane's PRI_J0 port to the rear backplane's PRI_J0 port.

We prefer the way you have already done the cable connections to the backplanes.


Technical Support
ES

[Attached image: supermicro backplane.png]


I have no experience with those backplanes, but from the support response it looks like at least the front backplane supports dual link. You could try option 2 from their response, or get another HBA so you could run one dual-linked to the front backplane and the other single-linked to the rear backplane (single link is plenty for 6 disks; see the rough math below).
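Rough back-of-envelope, assuming SAS2 lanes (6 Gb/s raw, which after 8b/10b encoding leaves roughly 600 MB/s of payload per lane); the numbers are only an estimate:

# One x4 miniSAS link on SAS2: 4 lanes x ~600 MB/s usable per lane
echo "x4 link total: $(( 4 * 600 )) MB/s"      # ~2400 MB/s shared by the disks behind it
echo "per disk (x6): $(( 4 * 600 / 6 )) MB/s"  # ~400 MB/s, well above a spinner's ~200 MB/s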

 

But IIRC your parity check speed was CPU limited so you may not see any improvements.


The second option in their response intrigues me: using just one HBA, connect both ports on the HBA with two cables to two of the three connectors on the front backplane, then use a third cable to connect the remaining front backplane connector to the rear backplane. Is this what you recommend?

 

I recently upgraded my CPUs to hex-core models, so I have more horsepower :-)


So I got my cable today and configured it the second way tech support suggested: plugging the two cables from the HBA into the front backplane and then cascading the front backplane to the rear with a third cable. When I run:

cat /sys/class/sas_host/host1/device/port-1\:0/sas_port/port-1\:0/num_phys

 

I now get an 8, meaning I am running in dual-link mode.
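If anyone wants to double-check the same thing on their own box, each phy also reports its negotiated rate; the host/phy numbering below is what it looks like on mine and will likely differ elsewhere:

# Every phy in the wide port should report the full SAS2 rate, e.g. "6.0 Gbit"
grep . /sys/class/sas_phy/phy-1:*/negotiated_linkrate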

Edited by ashman70
3 years later...
On 5/2/2017 at 1:25 PM, ashman70 said:

So I got my cable today and configured it the second way tech support suggested: plugging the two cables from the HBA into the front backplane and then cascading the front backplane to the rear with a third cable. When I run:


cat /sys/class/sas_host/host1/device/port-1\:0/sas_port/port-1\:0/num_phys

 

I now get an 8, meaning I am running in dual-link mode.

 

@ashman70 Just wondering if you saw improved disk I/O afterwards? I'm considering the same change for mine. Are there any precautions or issues I should be concerned about?

 

2 hours ago, ashman70 said:

Wow, this is an old thread. I did not notice an increase in I/O performance.

Old-ish, but relevant, as it's the same scenario as mine. I had some new miniSAS cables on hand, so I went ahead and made the change. I am not seeing any real improvement in speed, but I haven't re-run the Diskspeed docker yet to see if I need to change my tunables in unRAID's disk settings.

 

Prior to this I had randomly experienced unusual 'lock-ups' where many of my 24 CPU threads were pegged at 100%. This seemed to happen when I had more than 2 Plex streams going while simultaneously writing new data to the array pool, and it also manifested as 'buffering' for the Plex streams that were in progress.
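If the lock-ups ever come back, I plan to watch whether it's the CPUs or the disks that are actually saturating; something like this should show it (assuming the sysstat tools are installed; the 5-second interval is arbitrary):

# Extended per-disk utilization plus CPU usage, refreshed every 5 seconds
iostat -xm 5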

 

But last night I had 2 local and 1 remote stream going while simultaneously writing 550GB of new data to the array, and everything ran smoothly: no buffering for any of the streams, and my average write speed to the array (moving from cache) was around 85 MB/s. I can live with the slower write speed, especially if this change means no more buffering issues!

 

It will be interesting to see what a new run of Diskspeed recommends for the tunable settings in unRAID.
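Before I re-run Diskspeed I may also do a quick-and-dirty sequential read check per disk; the device names below are just placeholders for whichever array disks you want to sample:

# Buffered sequential read test on a few disks (best run while the array is otherwise idle)
for d in /dev/sdb /dev/sdc /dev/sdd; do
    hdparm -t "$d"
done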

 

