
5.0-RC3 - ASMedia 1061 works now -- but Port Multiplying fails



As of 5.0-RC3, the ASMedia 1061 chipset appears to work fine.  I have upgraded to the latest firmware available for the chipset -- 0.95, with the AHCI ROM image.

 

At boot-time, the ASMedia BIOS POST shows all drives I would expect -- in my case, I have two cards, with two ports each.

 

On each card, one port is dedicated to an internal drive and the other is connected to an external Sans Media SATA II cage (Rosewill RSV-S8 OEM -- believed to be a TowerRAID TRM8-B), which can hold 8 drives in a two-port-multiplier configuration (4 drives per multiplier).

 

For testing, I have four drives in the external cage, two per port-multiplier channel, and specifically a drive in the "0" slot of each.  {On Win boxes, etc., only the "0" drives will show up until the drivers load.}

 

At boot, for each of the two cards, I see *THREE* drives in the BIOS POST -- the one internal drive (on a dedicated link to the ASMedia card) and BOTH external drives (via the port multiplier).

 

However, once Unraid 5.0-RC3 starts, only the "dedicated" drive shows up.  The drop-down for adding a drive to the array does not list either of the external, port-multiplied drives -- for either card.

 

So, in summary: at BIOS POST I see 6 drives -- two dedicated and four port-multiplied.  Unraid only sees the dedicated drives, not the port-multiplied ones.
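
(For what it's worth, here is the kind of quick console check that shows what the kernel actually enumerated versus what the BIOS POST claimed -- just a sketch, and the grep pattern is only my guess at the relevant messages, nothing Unraid-specific:)

cat /proc/partitions                                        # block devices the kernel created
dmesg | grep -E 'ata[0-9]+(\.[0-9]+)?: (SATA link|ATA-)'    # link status and identified drives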

 

(My older SIL 3132 cards did work with Unraid and port-multiplied drives -- but for other reasons, those cards were pulled.)

 

Attached is a SYSLOG from a fresh boot.

 


 

These are the cards installed:  http://www.sybausa.com/productInfo.php?iid=1104 

 

SATA III 2 Internal 6Gbps Ports PCI-e Controller Card -- Part Number: SY-PEX40039

 

ASM1061 Chipset (Asmedia 1061 SATA Host Controller)

Compliant with PCI-Express Specification V2.0 and Backward Compatible with PCI-Express 1.x

Compliant with Serial ATA AHCI Spec. Rev. 1.3 and Serial ATA Rev. 3.0

Supports Hot Plug and Hot Swap

Supports Communication Speeds of 6.0Gbps, 3.0Gbps, and 1.5Gbps

Supports 2 Ports Serial ATA

Supports Native Command Queue (NCQ)

Supports Port Multiplier

 

(For completeness -- I'm using a simple SATA -> eSATA bracket to bring one of the internal SATA connectors from each of the ASMedia chipset cards to the outside, and a high-quality braided and shielded cable to connect to the external cage.)

Syslog-2012-05-25.txt

Link to comment

This is probably going to end up being a Linux driver limitation that I won't be able to do anything about at present.  But it would help to simplify your config so that, say, only 4 drives are attached, all installed in your external port-multiplied enclosure.  Then boot up the server and let's see what drives it discovers.  Also, at some point, open a telnet session and tell me what the output of this command shows:

 

lsmod

 

At one time I was all excited about the prospect of using PM enclosures.  But I have found PM support to be "lacking".  Also, there are serious h/w limitations for disk array applications if the chipset does not support FIS-based switching.  See:

http://en.wikipedia.org/wiki/Port_multiplier
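
(For reference: whether a controller supports FIS-based switching shows up in the ahci capability flags at boot -- 'fbs' in the flags line, alongside 'pmp' for port multiplier support.  A rough check, assuming the card is handled by the stock ahci driver:)

dmesg | grep '^ahci' | grep 'flags:'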

 

 

Link to comment

This is probably going to end up being a Linux driver limitation that I won't be able to do anything about at present.  But it would help to simplify your config so that, say, only 4 drives are attached, all installed in your external port-multiplied enclosure.  Then boot up the server and let's see what drives it discovers.  Also, at some point, open a telnet session and tell me what the output of this command shows:

 

lsmod

 

At one time I was all excited about the prospect of using PM enclosures.  But I have found PM support to be "lacking".  Also, there are serious h/w limitations for disk array applications if the chipset does not support FIS-based switching.  See:

http://en.wikipedia.org/wiki/Port_multiplier

 

No problem doing the test (and simplifying the config as you suggested).  Honestly, for me, the PM feature is a desire, not a need.  I retired the external cabinet for performance reasons.

 

But I "bought" two Pro keys, and have been considering setting up a stand-by / test server with some of my left-over hardware, including the external cage. 

 

Thought I'd report the problem, since ASMedia chipset support is a new addition -- but the PM part of it fails.

 

I'll do the testing -- I've got sufficient ports and cabling to do it.

 

Will advise.

 

And thanks for looking at it.

Link to comment

This is probably going to end up being a Linux driver limitation that I won't be able to do anything about at present.  But it would help to simplify your config so that, say, only 4 drives are attached, all installed in your external port-multiplied enclosure.  Then boot up the server and let's see what drives it discovers.  Also, at some point, open a telnet session and tell me what the output of this command shows:

 

lsmod

 

At one time I was all excited about the prospect of using PM enclosures.  But I have found PM support to be "lacking".  Also, there are serious h/w limitations for disk array applications if the chipset does not support FIS-based switching.  See:

http://en.wikipedia.org/wiki/Port_multiplier

 

OK, steps performed -- I simplified the system down to just one ASMedia card talking to the 4 external drives in the cage.

 

1) Powered down Unraid system.

2) Ejected all trays from my 5x3 cages

3) Ensured that the ONLY connection was from one ASMedia 1061 to the external cage

4) Ensured that all four drives in the external cage were in the same "group"  {e.g., as drives 0,1,2,3 -- and none in 4,5,6,7}

5) Used a new flash drive -- freshly formatted, with only the base 5.0-RC3 on it, syslinux'd; the only mod was to network.cfg to give it a static IP

6) Booted Unraid system

7) Confirmed that during POST -- the ASMedia BIOS showed all four drives

8) Once Unraid was up -- confirmed that the "main" page showed "Parity/Disk1/Disk2" as unassigned, but there were no drives listed in the drop-down.

 

I will note that during bootup I watched the cage and the Unraid boot screen -- there were a bunch of messages about "hard resetting," and when they appeared, the various lights on the external cage would flash, almost like it was "trying" to read the drives....  After a minute or so of this, the "Login:" prompt appeared.
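
(If it helps, those messages can be pulled out of the syslog after the fact with something like this -- a rough grep, assuming the standard /var/log/syslog location:)

grep -E 'hard resetting|SATA link|Port Multiplier|qc timeout' /var/log/syslog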

 

Attached is the Syslog from this 'clean' boot -- and the lsmod you requested.

syslog.txt

lsmod.txt

Link to comment

The RR 622A will work.  It does support port multipliers, but the marketing does not mention it.

 

I think the RR 622A uses the Marvell 88SE9128 chipset.  I may consider that as an option.  Can you confirm that you ARE getting it working in a port-multiplying config with more than one drive per eSATA port?

Link to comment

I think I know why it doesn't work.  The ahci.c driver in the kernel has the ASM1061 PCI IDs added to it, but I don't think the port multiplier functionality is activated.  I will take a look at the kernel source and post back.

 

That's a good guess.  Thank you for looking into this.
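
(In the meantime, a rough way to double-check which kernel driver actually claimed the ASMedia controllers -- 1b21 is ASMedia's PCI vendor ID; I'm leaving the exact device ID out since I haven't verified it:)

lspci -nnk -d 1b21: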

Link to comment

chuck23322,

 

Please do this on your system with the ASMedia card installed and post the output:

 

dmesg | grep ^ahci

 

I need to see if the PMP flag is being reported as a capability of the card.

 

Here ya go.  Do I need to pull everything but the ASMedia cards out?  This is with all the cards, including the ASMedias, installed.

 


root@unraid:/boot# dmesg | grep ^ahci
ahci 0000:00:11.0: version 3.0
ahci 0000:00:11.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
ahci 0000:00:11.0: AHCI 0001.0100 32 slots 6 ports 3 Gbps 0x3f impl SATA mode
ahci 0000:00:11.0: flags: 64bit ncq sntf ilck pm led clo pmp pio slum part ccc
ahci 0000:02:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
ahci 0000:02:00.0: irq 46 for MSI/MSI-X
ahci 0000:02:00.0: controller can do FBS, turning on CAP_FBS
ahci 0000:02:00.0: AHCI 0001.0200 32 slots 8 ports 6 Gbps 0xff impl SATA mode
ahci 0000:02:00.0: flags: 64bit ncq fbs pio
ahci 0000:02:00.0: setting latency timer to 64
ahci 0000:03:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
ahci 0000:03:00.0: irq 47 for MSI/MSI-X
ahci: SSS flag set, parallel bus scan disabled
ahci 0000:03:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ahci 0000:03:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
ahci 0000:03:00.0: setting latency timer to 64
ahci 0000:04:00.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
ahci 0000:04:00.0: irq 48 for MSI/MSI-X
ahci: SSS flag set, parallel bus scan disabled
ahci 0000:04:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ahci 0000:04:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
ahci 0000:04:00.0: setting latency timer to 64
root@unraid:/boot#

Link to comment

 

I guess that these are the two ASMedias ... with PMP available.  I suspect that Elkay is correct -- it just needs to be activated:

 

ahci 0000:03:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
ahci 0000:03:00.0: irq 47 for MSI/MSI-X
ahci: SSS flag set, parallel bus scan disabled
ahci 0000:03:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ahci 0000:03:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
ahci 0000:03:00.0: setting latency timer to 64
ahci 0000:04:00.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
ahci 0000:04:00.0: irq 48 for MSI/MSI-X
ahci: SSS flag set, parallel bus scan disabled
ahci 0000:04:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ahci 0000:04:00.0: flags: 64bit ncq sntf stag led clo pmp pio slum part ccc sxs
ahci 0000:04:00.0: setting latency timer to 64
root@unraid:/boot#
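
(A quick way to confirm that those two PCI addresses really are the ASMedia cards, if anyone wants to double-check -- standard pciutils, nothing exotic:)

lspci -nn -s 03:00.0
lspci -nn -s 04:00.0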

Link to comment

What card do you have that is 8 ports?

Pretty neat ... stuff we learn when we're looking for something else, huh?  It's his Supermicro PCIe x4 card (using the mvsas driver).

 

I'll be quiet now -- you can surprise him!! :)

 

 

UhClem is right....  This franken-Unraid build has:

 

1 x 8-port SUPERMICRO AOC-SASLP-MV8 PCI Express x4

2 x 2-port ASMedia 1061 {IO Crest 2 Port SATA III PCI-Express x1 Card (SY-PEX40039) }

1 x 2-port Marvell-88SE91xx {ECS S6M2 SATA 6Gb/s PCI-E Card}  <---- Shouldn't it have shown up too?  Did I miss it?  (See the lspci check below.)

 

6 MoBo ports on Gigabyte GA-MA770T-UD3

 

ahci 0000:00:11.0: AHCI 0001.0100 32 slots 6 ports 3 Gbps 0x3f impl SATA mode
ahci 0000:02:00.0: AHCI 0001.0200 32 slots 8 ports 6 Gbps 0xff impl SATA mode
ahci 0000:03:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
ahci 0000:04:00.0: AHCI 0001.0200 32 slots 2 ports 6 Gbps 0x3 impl SATA mode
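
(On the ECS/Marvell question above -- a rough way to see whether that card is visible on the bus at all and whether any driver claimed it; 1b4b is Marvell's PCI vendor ID:)

lspci -nnk -d 1b4b: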

Link to comment

After some investigation (and reading your syslog) I can conclude that the port multiplier functionality is being activated, but there is some sort of hardware quirk that is keeping the HBA from talking to the enclosure.

 

May 26 13:18:42 Tower kernel: ata19: SATA max UDMA/133 abar m512@0xfd8ff000 port 0xfd8ff100 irq 48
May 26 13:18:42 Tower kernel: ata19: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
May 26 13:18:42 Tower kernel: ata19.15: Port Multiplier 1.1, 0x1095:0x3726 r23, 6 ports, feat 0x1/0x9
May 26 13:18:42 Tower kernel: ata19.00: hard resetting link
May 26 13:18:42 Tower kernel: ata19.00: SATA link up 1.5 Gbps (SStatus 113 SControl 320)
May 26 13:18:42 Tower kernel: ata19.01: hard resetting link
May 26 13:18:42 Tower kernel: ata19.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
May 26 13:18:42 Tower kernel: ata19.02: hard resetting link
May 26 13:18:42 Tower kernel: ata19.02: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
May 26 13:18:42 Tower kernel: ata19.03: hard resetting link
May 26 13:18:42 Tower kernel: ata19.03: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
May 26 13:18:42 Tower kernel: ata19.04: hard resetting link
May 26 13:18:42 Tower kernel: ata19.04: SATA link down (SStatus 0 SControl 320)
May 26 13:18:42 Tower kernel: ata19.05: hard resetting link
May 26 13:18:42 Tower kernel: ata19.05: SATA link up 1.5 Gbps (SStatus 113 SControl 320)
May 26 13:18:42 Tower kernel: ata19.15: qc timeout (cmd 0xe4)
May 26 13:18:42 Tower kernel: ata19.05: failed to read SCR 0 (Emask=0x4)
May 26 13:18:42 Tower kernel: ata19.05: failed to read SCR 0 (Emask=0x40)
May 26 13:18:42 Tower kernel: ata19.00: failed to IDENTIFY (I/O error, err_mask=0x40)

 

This should be treatable via software, but it will require a custom kernel compile to accomplish that.

 

Also, your Marvell-based ECS card is not showing up.  This is fixable via my enabler script but I'll need you to attach the pci.txt file from running:

cat /proc/bus/pci/devices > /tmp/pci.txt
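
(For the curious: the second column of /proc/bus/pci/devices packs the vendor and device IDs into eight hex digits, so a one-liner like this makes the dump human-readable -- just an illustration, not part of the enabler script:)

awk '{ printf "%s:%s\n", substr($2, 1, 4), substr($2, 5, 4) }' /proc/bus/pci/devices | sort -u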

 

 

Link to comment

After some investigation (and reading your syslog) I can conclude that the port multiplier functionality is being activated, but there is some sort of hardware quirk that is keeping the HBA from talking to the enclosure.

 

{snipped code}

This should be treatable via software, but it will require a custom kernel compile to accomplish that.

 

Also, your Marvell-based ECS card is not showing up.  This is fixable via my enabler script but I'll need you to attach the pci.txt file from running:

cat /proc/bus/pci/devices > /tmp/pci.txt

 

Yes, during the "resets" is when I saw the enclosure flash its lights (it has a master light, and a light for each drive) -- the master light would go on and off, and each of the drive lights would flash in sequence...

 

I will note that during bootup I watched the cage and the Unraid boot screen -- there were a bunch of messages about "hard resetting," and when they appeared, the various lights on the external cage would flash, almost like it was "trying" to read the drives....  After a minute or so of this, the "Login:" prompt appeared.

 

My sincere thanks for taking a look at all of this.  I don't know how much of it can make it into 5.0-RCx...  I'm not skilled enough to create a custom kernel, so for that level of mods I rely on others, or do without.

 

And to be honest, for supportability -- speaking as a Linux-challenged person -- I don't mind testing things, but I'd rather see LimeTech get things into the mainline build...  {While I'm quite dangerous -- I'm a CCNP/CCDP and very savvy with networking -- my Linux skills really put me only one or two steps above noob.}

 

I have attached the pci.txt file.

 

Actually, this might be a loose cable issue:

 

http://lime-technology.com/wiki/index.php/The_Analysis_of_Drive_Issues

 

The error in your syslog is exactly the one described under "Drive interface issue #4".

 

 

I'll look at the cabling again.  Well, it's really just one cable I can do something about.  Because dgaschk noted that the Marvell will do port multiplying, I hooked up one side of my external bay to that card as a test, and it does work -- so the only cable really in question is the eSATA cable (high quality, braided shield), but I will change it when I can (am ripping right now).

 

Also, other than a cable, the other item in #4 is a "backplane" issue -- I assume this means in my external chassis.  Rationally, I'd think it would fail with the Marvell as well as the ASMedia if it were a backplane issue -- although, given my experience with similar "weird gremlins," I wouldn't put it past being an issue the Marvell (and my old SIL 3132) tolerate but the ASMedia has fits with....  Unlikely, but possible, IMHO.

 

The RR 622A will work.  It does support port multipliers, but the marketing does not mention it.

 

The reason I didn't use the Marvell card is that I need *two* port-multiplying eSATA channels {my external JBOD chassis is 8-bay, 1 channel per 4 drives, with two external eSATA connections} -- and, more specifically, that I was dedicating the Marvell card to my parity drive -- I didn't want it sharing bandwidth with the external chassis.

 

Upgrading my parity drive to a modern 3TB 7200 RPM drive with 64MB cache (Seagate ST3000DM001), and driving it at SATA III on a dedicated card, has made a significant difference in the performance of my build.

pci.txt

Link to comment

This should be treatable via software, but it will require a custom kernel compile to accomplish that.

 

A custom kernel that would do what exactly?

 

I thought at the time that it was going to require some hacking on libata-pmp.c, but now I'm not convinced.  I am leaning towards a cabling issue (not the cable from the SATA port to the enclosure, but the cables inside the enclosure going to the drives).

Link to comment

What card do you have that is 8 ports?

Pretty neat ... stuff we learn when we're looking for something else, huh?  It's his Supermicro PCIe x4 card (using the mvsas driver).

 

I'll be quiet now -- you can surprise him!! :)

 

 

UhClem is right....

Thank you ... but I was wrong about elkay14.  I thought his question (above) was the result of an astute observation ... but it now appears he was just idly curious :).  But it did cause me to take a look, and make an astute observation of my own ...

...  This franken-Unraid build has:

 

1 x 8-port SUPERMICRO AOC-SASLP-MV8 PCI Express x4

And now, the surprise:

Connect your (2x)3726-based external enclosure (via your eSATA-SATA bracket) to two of the ports on the 8-port card (above); move those 2 drives to the 1061(s).

 

Final suggestion: Forget about using the ASM1061 for port-multiplier duty. Just because an (Asian) chipmaker claims a feature, especially one related to the "black art" of Port-Multipliers, does not guarantee its reality/utility. Contrary to what others may be saying, it will take way more than a kernel re-compile to get that puppy polygamous :).

 

 

2 x 2-port ASMedia 1061 {IO Crest 2 Port SATA III PCI-Express x1 Card (SY-PEX40039) }

1 x 2-port Marvell-88SE91xx {ECS S6M2 SATA 6Gb/s PCI-E Card}  <---- Shouldn't it have shown up too?  Did I miss it?

It shows in your syslog but, for some reason, did not get claimed by the ahci driver, despite exposing a proper ID (1b4b:9123).  Just Murphy ...

 

Link to comment

 

And now, the surprise:

Connect your (2x)3726-based external enclosure (via your eSATA-SATA bracket) to two of the ports on the 8-port card (above); move those 2 drives to the 1061(s).

 

 

Not sure that works physically....  I have SAS breakout cables going from my 8-port Supermicro card (from 2 ports on the SuperMicro card to 8 ends via 2 cables) that terminate in female connectors -- the kind that fit onto SATA drives.

 

The SuperMicro card's ports are internal Mini-SAS 36-pin (SFF-8087), and I use "forward breakout" cables to get to my internal SATA drives via a pair of Monoprice cables -- they break out the two 8087 connectors into 2x4 = 8 SATA connectors in total...

 

(Not sure how to describe it -- but if normal SATA cables are "female to female" (to go from male mobo ports to a male drive port),

 

then my breakout cables end in 8 "female" connectors in total (after the breakout),

 

and my eSATA bracket also has the same internal "female" connectors, meant to fit onto male "mobo" ports (or, in this case, the male ports on the cards)....)

 

So my 8-port card's cabling ends in "female" connectors -- and so does my "eSATA bracket" cable....  No dice.  They won't, ahem, "cooperate" here.

 

{Now, if I have my descriptions backwards, then substitute "male" for "female" and it's the same thing.}

 

I could see what you are proposing if the Supermicro card physically had 8 regular "male" SATA connectors (akin to my MoBo, which has 6 of them) -- but it doesn't.

 

Did I miss something?

 

(Regardless, my Supermicro is an x4 and I didn't want to drive my external array via it -- it's on heavy duty driving plenty of drives as it is.)

Link to comment

 

Not sure that works physically.... [male vs. female connections]

That pretty much sums up the whole "gay marriage" dilemma, too. :)

I have SAS break-out cables ...

I get ya.

Did I miss something?

Yeah ... two of these  [link]. :)

[Realistically, though, with all that cabling, and connectors/adapters, and then going out over eSATA (which is not real eSATA), you could be laying out the royal welcome mat for dear ol' Murphy. But ...]

 

Here's what you should do:

1) Move your parity drive to one of the 1061's (it's as good as the 9128 guy for that purpose).

2) Connect one of your 3726 ports [Rosewill eSATA #1] to the ECS S6M2's external port

3) With 4 drives on that 3726, test throughput (see below)

4) Get 1 (or 2) of those gender-benders

5) Move one of the 8-port's drives to either that same 1061 [see 1)] (it should have the chops, if your mobo's PCIE x1's are Gen2) or to the second one.

6) Connect that freed-up MV8 port with a gender-bender to your bracket, and then to the same 3726 as used in 2) & 3) [in place of the ECS], and test throughput (see below)

7) Report results, and we'll go from there.

 

>> !!! >> Testing multiple/simultaneous drive throughput:

Do parallel runs of hdparm using the script in this posting [link].
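
(A parallel hdparm run boils down to something like this -- a rough sketch with placeholder device names; see the linked script for the real thing:)

# Rough sketch: buffered-read tests on several drives at once; substitute
# the real device names for the drives behind the port multiplier.
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    hdparm -t "$d" &
done
wait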

 

(Regardless, my Supermicro is an x4 and I didn't want to drive my external array via it -- it's on heavy duty driving plenty of drives as it is.)

Yes, too bad it is only PCIe Gen1 ... and that might be your Achilles heel here.  (Warning to others: be wary of "wasting" your mobo's PCIe Gen2 lanes with Gen1 cards, when throughput matters.)
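
(Back-of-envelope numbers behind that warning, assuming roughly 250 MB/s per PCIe Gen1 lane and 500 MB/s per Gen2 lane, before protocol overhead:)

echo "Gen1 x4, split 8 ways: $(( 250 * 4 / 8 )) MB/s per drive"   # ~125 MB/s each
echo "Gen2 x4, split 8 ways: $(( 500 * 4 / 8 )) MB/s per drive"   # ~250 MB/s each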

 

But, you should still mess around with the above script; it can shed some light on your real (vs assumed) bottlenecks.

 

[I don't know if the kernel presents the 8-port's drives as sdX. If not, you'll have to munge the script a bit.]

 

-- UhClem

 

Link to comment

