LSI SAS3442E and LSI SAS3442X


starcat


Got a link ??

 

I probably will have to edit my previous post, as the price for these appears to be US$199 on dell.com

 

For some reason (possibly a price error  ;)) these are priced much lower here in Canada - www.dell.ca

 

The following SAS 6/iR controllers (documentation here - http://support.dell.com/support/edocs/storage/RAID/SAS6iR/en/index.htm) are affected:

 

Can$ 75.99 - DELL part number 341-6035 - http://search.dell.com/results.aspx?s=dhs&c=ca&l=en&cs=cadhs1&k=341-6035&cat=snp

 

Can$ 79.99 - DELL part number 341-5793 - http://search.dell.com/results.aspx?s=dhs&c=ca&l=en&cs=cadhs1&k=341-5793&cat=snp

 

Can$ 79.99 - DELL part number 341-5943 - http://search.dell.com/results.aspx?s=dhs&c=ca&l=en&cs=cadhs1&k=341-5943&cat=snp&x=2&y=7

 

I got one shipped already and I have not posted this possible price error elsewhere - let's keep it for the unRAID community.

 

But I do not know if these will work with unRAID or if they can be flashed with the LSI firmware.

 

Link to comment

They are Fusion MPT SAS cards so they should work, but they aren't what I'd call the finished item. I ran my Dell 5e for a week and the data stayed put.

 

Electrically they'll work in x4, but physically they require an x8 slot. They support 2.4GB/s (2 x 1.2GB/s), so they need an x8 electrical connection to run at full steam. Keep us posted on how well they work!
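For anyone wanting to verify what link width the card actually negotiated, a minimal check from any Linux shell is below (just a sketch - it assumes the controller sits at PCI address 02:00.0, so substitute whatever address lspci reports on your board):

    # LnkCap shows the maximum width the card supports, LnkSta shows what was actually negotiated
    sudo lspci -vv -s 02:00.0 | grep -E 'LnkCap:|LnkSta:'
    # Rough PCIe 1.x math: ~250 MB/s usable per lane per direction, so x4 is ~1000 MB/s and x8 is ~2000 MB/s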

 

 

 

 

Link to comment

Very true, however PCI-e slots can also be the other way round: physically x8, or more commonly now an x16 slot, but electrically x8 or x4 depending on what other cards are installed in the system.

 

This is very common practice in systems with multiple x16 PCI-e slots, where there are only 24/32 lanes to be shared amongst all the slots.

 

Hope that helps.

 

Kevin

Link to comment

You're defo wrong there, Jim.

 

A PCIe x4 slot is about twice the size of the first notch on a PCIe card.

http://www.rackmountpro.com/imageview.aspx?id=1971&view=8204elp.jpg&type=0

 

These Dell cards are at least twice that size and are x8.

http://accessories.dell.com/sna/products/System_Drives/productdetail.aspx?c=ca&l=en&s=dhs&cs=cadhs1&sku=341-6035

 

The overview says:

The SAS6/iR features PCIe® connectivity and requires a x8 slot for connectivity

 

The tech specs say:

Slot(s) Required x8 PCIe

Link to comment
  • 3 weeks later...

Finally got my cable today and did a quick test on the brand new DELL SAS 6/iR controller (only 341-5793 is suitable for general use, and they do use a different cable).

 

It does work to some degree - I can boot an Ubuntu Live USB and everything appears to be OK, but the drives attached to the controller are not recognized by unRAID - they are marked as "wrong" drives.

However, I can go to the Disk Management tab in unMENU and query them, and they are apparently there - just something tiny is missing...

 

Tested in both the primary and the secondary PCIe x16 slots - same results.

The motherboard is a Biostar TA790GXE 128M, with 2 x 2GB DDR2 ECC memory and an AMD 4850e.

 

Will try tomorrow again but open to suggestions.

 

 

 

Link to comment

Are you sure your PCI-E x16 slot is not for graphics cards only? Could you try it in another x8 slot?

 

As far as my motherboard is concerned I am sure, as the Supermicro AOC-MV8 card works in both the primary and secondary slots, and the DELL SAS 6/iR controller works in both slots under the latest Ubuntu Live USB.
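If it helps to compare notes, here is roughly what I check from the Ubuntu live session to confirm the kernel really sees the controller and its drives (a sketch only - device names will differ on your system):

    # confirm the mptsas driver claimed the card and enumerated its targets
    dmesg | grep -i mptsas
    # list the disks the kernel registered
    cat /proc/scsi/scsi
    ls -l /dev/disk/by-path/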

 

Under unRAID, the drives attached to the DELL controller (1068E based, updated to the latest MPT BIOS and LSI "IT" firmware) are stuck with the status "upgrading drive". They can be queried with hdparm but SMART status is not available.
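In case it is just the SMART query method, drives hanging off a SAS HBA usually have to be asked through the SCSI-to-ATA translation layer rather than as plain ATA. A rough sketch of what I try (assuming the drive shows up as /dev/sdb - substitute yours):

    # identification data straight from the drive
    hdparm -I /dev/sdb
    # SMART via SAT, which SATA drives behind an mptsas HBA usually need
    smartctl -a -d sat /dev/sdb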

 

Need one extra step to make them work....

Link to comment

I am not sure what is wrong.

 

This is the output with lspci -v:

for Dell:

 

02:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 08)
        Subsystem: Dell SAS 6/iR Adapter RAID Controller
        Flags: bus master, fast devsel, latency 0, IRQ 18
        I/O ports at c000
        Memory at fe9fc000 (64-bit, non-prefetchable)
        Memory at fe9e0000 (64-bit, non-prefetchable)
        Expansion ROM at fe800000 [disabled]
        Capabilities: [50] Power Management version 2
        Capabilities: [68] Express Endpoint, MSI 00
        Capabilities: [98] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
        Capabilities: [b0] MSI-X: Enable- Mask- TabSize=1
        Capabilities: [100] Advanced Error Reporting
        Kernel driver in use: mptsas
        Kernel modules: mptsas

 

for Supermicro:

 

03:00.0 SCSI storage controller: Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)
        Subsystem: Super Micro Computer Inc Unknown device 0500
        Flags: bus master, fast devsel, latency 0, IRQ 19
        I/O ports at d800
        Memory at feaf0000 (64-bit, non-prefetchable)
        Expansion ROM at fea80000 [disabled]
        Capabilities: [48] Power Management version 2
        Capabilities: [50] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
        Capabilities: [e0] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Kernel driver in use: mvsas
        Kernel modules: mvsas

 

Link to comment
  • 1 month later...

Same here, no reply from Limetech on the SAS test version. Can't comment on whether the shipping version works yet, since the adaptor plates are the wrong versions (host, not target). The correct ones should be here this week, and I've got a couple of loan SAS drives lined up for next weekend. Cables are here.

 

The mptsas driver is in the shipping version (4.5.3), but I don't know what changes went into the SAS version to make it work, or what changes went into the shipping version to make it work with the Supermicro SAS card and what effect they will have on LSI SAS support.
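One thing anyone can check on a given unRAID build without waiting for Limetech is whether the mptsas module is actually shipped and bound to the card (a sketch - paths assume the usual Slackware-style module layout):

    # is the module included with this kernel?
    ls /lib/modules/$(uname -r)/kernel/drivers/message/fusion/
    # is it loaded, and did it bind to the card?
    lsmod | grep mptsas
    lspci -nnk | grep -A2 -i 'SAS1068E'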

 

SCST support looks interesting and hopefully a simple way to integrate more hardware support requests and iSCSI.  

 

Just wondering if there's an update on this? Have the changes in the sastest version made it to 4.5.4? No reply from Limetech to my emails... :(

 

Link to comment
  • 1 month later...

I'm curious if there's been any progress made on the SAS1068E-based cards. Any updates, Kaygee or bcbgboy13?

 

I've run across an entry detailing the various multi-port SATA/SAS controllers, with a breakdown of features, relative port speeds, and Linux and Solaris support (including which driver is used). [ http://blog.zorinaq.com/?e=10 ]

 

I will be expanding/rebuilding my server within the next two months, so I was evaluating the various options available. It would be nice to have multiple options.

Link to comment
  • 4 weeks later...

This article also shows the 8-port x4 PCIe Supermicro card being performance-wise inferior to the PCI-X version: 80 vs. 107 MB/s per port if the card is fully populated with 8 drives.

 

What article? I see no link at all.

 

 

Sorry, the one you posted in your previous post in this thread: http://blog.zorinaq.com/?e=10

The main reason the PCIe version has slightly lower performance than the PCI-X one is that it is only an x4 card.

 

Link to comment

Ah, I see. I thought maybe there was another one that had some more details and not just some calculated probable performance ranges. The theoretical maximum for x4 PCI Express is 1000 MB/s for the port, leaving 125 MB/s per disk if 8 are in use. He estimated 60-70% of that, giving him 75-88 MB/s.
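Just to spell out that arithmetic (same assumptions as the blog, nothing extra):

    # x4 PCIe 1.x link: 4 lanes * 250 MB/s = 1000 MB/s theoretical, 60-70% usable, split across 8 drives
    awk 'BEGIN { printf "%.0f - %.0f MB/s per drive\n", 1000*0.60/8, 1000*0.70/8 }'
    # prints: 75 - 88 MB/s per drive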

 

I think I saw someone say they had a final parity check speed of 90 MB/s. If that was with all 8 drives in use, then the performance of the card is much better than the previous estimate. Given that disks are faster at the beginning of the platter and slower at the end, in order to finish with such a high final number (90 MB/s) the controller must have been seeing measured performance well in excess of that average (likely 110-140 MB/s).

 

That's why it would be nice to have some real world benchmark numbers to see how accurate or how far off those estimates are.

 

I wonder how well the x8 PCI-Express version of the card works with unRAID. It should work just as well, assuming the device ID is supported, but I know there can be subtleties with different hardware.
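If anyone has the x8 version in hand, it's easy to see whether the driver even claims its device ID (a sketch - the grep pattern assumes the card still identifies itself as an LSI Fusion-MPT part):

    # the card's vendor:device ID as the kernel sees it
    lspci -nn | grep -i 'Fusion-MPT'
    # every PCI ID the mptsas module says it supports; the card's ID should match one of these aliases
    modinfo mptsas | grep '^alias'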

 

Here's the blurb on how he estimated performance:

 

The MB/s/port number in square brackets indicates the maximum practical throughput that can be expected from each SATA port, assuming concurrent I/O on all ports, given the bottleneck of the host link or bus (PCIe or PCI-X). I also assumed, for all PCIe controllers, that only 60-70% of the maximum theoretical PCIe throughput can be achieved, and for all PCI-X controllers, that only 80% of the maximum theoretical PCI-X throughput can be achieved on this bus. These assumptions concur with what I have seen in real world benchmarks assuming a Max_Payload_Size setting of either 128 or 256 bytes for PCIe (which is very often the default), and a more or less default PCI latency timer setting for PCI-X. As of May 2010, modern disks can easily reach 120-130MB/s of sequential throughput at the beginning of the platter, so avoid controllers with a throughput of less than 150MB/s/port if you want to reduce the possibility of bottlenecks to zero.
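Since he singles out Max_Payload_Size as the limiting assumption, it's easy to see what your own board negotiated (again just a sketch, assuming the controller is at 02:00.0 as in the lspci output earlier in the thread):

    # DevCap shows the payload size the device supports, DevCtl shows what the BIOS/kernel actually programmed
    sudo lspci -vv -s 02:00.0 | grep -i 'MaxPayload'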
Link to comment