SuperMicro X10SL7-F - Onboard LSI



Hi guys, after reading through this thread a week or so ago I bit the bullet and retired my LGA1366 i7-950 board (it didn't handle virtualization).

 

So it arrived, and now I'm on day 3 of trying to get it to work. I was hoping someone with this board might be able to give me some advice, or at least point out where I may have stuffed up.

 

2 x Hynix 8GB PC3-12800 (1600MHz) ECC

Intel Xeon E3-1230 v3 (3.30GHz / 8MB / LGA1150 / quad core)

Supermicro X10SL7-F motherboard (onboard LSI SAS controller)

RocketRAID 2720 PCIe 8-port controller

5 x 3TB WD Red

1 x 4TB WD Red

1 x 4TB WD Black

2 x 2TB WD Green

2 x 1.5TB Seagate

1 x 180GB Intel 520 SSD

1 x 120GB Intel 520 SSD

1 x 60GB Vortex SSD

No matter what I do, the 2308 controller will not see an HDD bigger than 2TB. The 2720 is fine and was working in my previous build.

I've flashed the 2308 to IT-mode firmware v16, and that still didn't fix it.
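(For reference, the flash sequence I used from the UEFI shell was roughly the one below. The firmware file names are whatever SuperMicro ships on their FTP, so yours may differ; note your controller's SAS address before erasing.)

[pre]sas2flash.efi -listall                      # confirm the 2308 is detected; note its SAS address
sas2flash.efi -o -e 6                       # advanced mode: erase the flash region
sas2flash.efi -o -f 2308IT16.ROM            # write the IT firmware image
sas2flash.efi -o -b mptsas2.rom             # (optional) write the boot BIOS
sas2flash.efi -o -sasadd 500304800xxxxxxx   # restore the SAS address noted earlier[/pre]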

 

Installed ESXi to the 60GB SSD, but it hangs on loading mpt2sas.

Tried installing XenServer: it can't find any HDD to install on, although all those hard drives are connected!

Tried Hyper-V 2012 R2: it sees the SSDs and the <2TB drives on the LSI, and the 4TB drives on the onboard SATA2, but errors out with some dodgy Windows 0x0000etc error when trying to install on the SSD or the 4TB drive.

 

I do like this board, and unRAID will work if I plug my HDDs back into the RocketRAID 2720. But I want ESXi, Xen, or whatever hypervisor will work!

 

Anyone have any ideas?

 

Thanks

Will

 

EDIT: My 4TB Black started to display in the list of HDDs connected to the LSI 2308 after I went back to FW rev 15, but still no 3TB Red drives showing up.

 

Link to comment

Will,

 

Bottom line first: I have this board (and the same CPU), and unRAID works flawlessly with 4TB drives, both natively and under ESXi.

 

Now let's drill down a bit:

 

First, I didn't flash the LSI controller, in spite of popular wisdom here (so it still runs the factory-installed v15 IR firmware). I thought I'd try it this way first. And it worked: drives are recognized nicely, work well, no issues. Obviously I don't care much about the RAID functionality of the 2308.

 

Second, to get ESXi to run on this board, I had to tweak the ESXi 5.5 install image, to add support for the onboard NICs. They need the igb Intel driver at a version that's way higher than the one included with ESXi. Let me know if you want some instructions there (although it didn't sound as if this was your problem).
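In case it helps anyone, the gist of what I did is below, using PowerCLI's Image Builder. The depot and bundle file names are examples only; substitute the ESXi 5.5 depot and the igb offline bundle you actually download.

[pre]# load the stock ESXi 5.5 depot and the newer Intel igb driver bundle
Add-EsxSoftwareDepot .\VMware-ESXi-5.5.0-1331820-depot.zip
Add-EsxSoftwareDepot .\igb-5.x-offline_bundle.zip

# clone the standard profile and pull in the newer net-igb package
New-EsxImageProfile -CloneProfile ESXi-5.5.0-1331820-standard -Name ESXi-5.5-igb -Vendor custom
Add-EsxSoftwarePackage -ImageProfile ESXi-5.5-igb -SoftwarePackage net-igb

# export an installable ISO
Export-EsxImageProfile -ImageProfile ESXi-5.5-igb -ExportToIso -FilePath .\ESXi-5.5-igb.iso[/pre]

(ESXi-Customizer achieves much the same thing with a GUI, if PowerShell isn't your thing.)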

 

Third, all my 4TB drives are recognized on unRAID native (or on ESXi, from which I pass them thru into unRAID as RDM).

 

From the details in your description, the next thing I'd do is disconnect all HDDs, and start adding them one by one, and see where the problem starts to manifest itself. E.g., disconnect all, connect only a single 4TB red, see what happens. Then connect a 3TB red, see what happens, etc. etc. This might lead you to the culprit.

 

There's always a chance that there's some problem in your LSI chip, but if I had to bet I'd say the odds of that are kinda slim.

 

Post your results...

Link to comment

Thanks for your reply, doron.

 

I used ESXi customizer to inject the newest Intel drivers so that bit is all good. 

 

Is it better practice to pass through the PCI controller to unRAID or use RDM? I will do as you suggest and go through adding disks one by one and see if I can get it to see the 3tb RED drives. (I think I will go back to IR fw if IT mode isn't needed.)

 

Would you agree the correct configuration for my setup would be:

ESXi 5.5

32GB Corsair GT as ESXi boot drive

120GB SSD as datastore on SATA3 controller

4TB Black as storage on SATA3 controller

Pass through the LSI to the unRAID VM: 7 HDDs + 1 parity drive

Pass through the PCIe RocketRAID to the unRAID VM: 8 HDDs

Pass through USB to boot unRAID, using Plop to boot the USB stick automatically

Thanks

Will

Link to comment

Is it better practice to pass through the PCI controller to unRAID or use RDM? I will do as you suggest and go through adding disks one by one and see if I can get it to see the 3tb RED drives. (I think I will go back to IR fw if IT mode isn't needed.)

 

Common wisdom here is to pass the controller. I did the RDM thing since my ESXi boot drive is on the LSI, so I wouldn't want to pass it 8).

My point is that I get good, solid performance (and obviously sensor access and spin up/down etc.) even when I just RDM the individual drives.
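If anyone wants to try the RDM route, creating a physical-mode RDM pointer is essentially a one-liner from the ESXi shell. The device identifier below is a made-up example; list /vmfs/devices/disks/ to find your drive's actual ID.

[pre]# list raw devices to find the drive's identifier
ls /vmfs/devices/disks/

# create a physical-compatibility RDM pointer file on an existing datastore
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD40EFRX_xxxxxxxxxxxx \
    /vmfs/volumes/datastore1/unraid/wd4tb-rdm.vmdk[/pre]

Then attach the resulting .vmdk to the unRAID VM as an existing disk.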

 

Would you agree the correct configuration for my setup would be:

ESXi 5.5

32GB Corsair GT as ESXi boot drive

120GB SSD as datastore on SATA3 controller

4TB Black as storage on SATA3 controller

Pass through the LSI to the unRAID VM: 7 HDDs + 1 parity drive

Pass through the PCIe RocketRAID to the unRAID VM: 8 HDDs

Pass through USB to boot unRAID, using Plop to boot the USB stick automatically

 

It sounds right, yes. When you say "storage", do you mean another ESXi datastore, or something else?

 

I'd be curious to see how this unfolds, so please report progress.

Link to comment

Short story: the board is going back to the reseller, and I'm going to get the X10SLM without the LSI controller.

 

Flashed the 2308 back to v16 IR mode, went through and disconnected all the drives from the LSI 2308 controller, and booted.

 

SAS config took seconds to open, and SAS Topology listed no drives connected.

 

Powered the server up with a single WD30EFRX-68AX9N0 drive plugged in; it took nearly 30 seconds on initializing, then went into SAS Config. SAS Topology again took nearly 30 seconds and listed no drives connected.

 

Went through the same tests again with 4 x 3TB Reds, and all showed the exact same outcome. (I had a newer WD30EFRX-68EUZN0, which also performed the same.)

 

Did the tests with a WD20EARX-00PASB0: SAS config opened in seconds, as did SAS Topology, and it listed the 2TB Green drive plugged into the controller.

Did the tests with a WD40EFRX-68WT0N0: SAS config opened in seconds, as did SAS Topology, and it listed the 4TB Red drive plugged into the controller.

Phoned WD Red support, and they have never heard of any issues with these drives and LSI controllers: http://forums.storagereview.com/index.php/topic/33548-problems-with-lsi-9260-8i-and-3tb-wd-red-disks/

Phoned LSI support: these Red drives are not listed as compatible, so no love there.

Have been emailing Supermicro support, and they also have never heard of an issue with just the 3TB Red drives.

Pretty sure I'm going to return the board and get the one without the 2308, buy a few more 4TB drives, and keep all my data on the RocketRAID 2720 PCIe 2.0 card. LSI mentioned a new 12Gb/s SAS controller coming out in a month, so I'll drop that into the PCIe 3.0 slot later on.

 

I also had a win with ESXi late last night, but I don't know if I'll go down the XenServer or ESXi path. Either way it's all new, and of no relevance to my work, as they are all MS & Hyper-V.

 

Cheers

Will

Link to comment

Wow.

Well, at least you got to the culprit relatively quickly.

 

I too have never heard of a controller having problems with a specific drive model (and not with its "relatives"). I don't have any 3TB Reds here, or I'd go on to test this in a heartbeat. Seems like the chip isn't able to complete the initial handshake protocol; this being specific to one type of drive is kinda weird. Must be the Reds' firmware. (I'm presuming you used the same physical ports and cables, so these items are out of the equation.)

 

In theory, it would now be curious to check whether the same LSI chip on an actual PCIe board would demonstrate the same problem. If I had to put money on it, I'd bet on "yes".

 

One last thing I'd do before turning it in is to flash the LSI down to, e.g., version 15, and see what happens.

 

By the way, in your setup it seems like you could live with this limitation by connecting all your 3TB Reds to the 2720 and the other drives to the 2308, no?

Link to comment

You're right, I could live with leaving the 3TBs on the 2720. I'm rebuilding parity at the moment after all the mucking around; I'm just using the 2720 and the onboard SATA2 ports for now.

 

The info I found online about the controllers in the Red, Black & Green drives:

 

I will have another look tomorrow and take the LSI 2308 down to 15.

The WD Red 3TB (WD30EFRX) includes the Marvell 88i9346-TFJ2 controller, recently updated with a 64MB Samsung DDR2 RAM module.

 

The WD Black 4TB (WD4001FAEX) includes a Marvell 88i9346 controller chip, as well as 64MB of DDR2 RAM from the Samsung K4T51163QJ-BCE7 module.

 

The WD Red 4TB HDD utilizes the Marvell 88i9446-NDB2 controller, as well as 64MB of DRAM cache from the SK hynix H5PS5162GFA module.

 

The WD Green 2TB HDD uses a Marvell 88i9045-TFJ2 controller chip paired with a 64MB Hynix DDR400 memory module.

Link to comment

I've got the X10SL7-F booting to ESXi, and have been testing and setting it up for a few weeks. It's been working with my test drives up to 2TB (I don't have any larger spares lying around).

 

My 'big move', shifting all my drives from my old build to the virtualized build, should happen tonight. I've got a few of the WD Red 3TBs, and will post success/failure after I put them in. (Though I'm a bit nervous now, especially since I've got an Intel expander between the 2308 and the drives!)

 

I do have my 2308 flashed to the v16 IT firmware.

Link to comment

Moved everything over, it recognized all my 3TB drives, including WD Red.

 

I am running into some strange syslog messages that I haven't seen before in my travails. If anyone has any thoughts, I'm all ears. I've got 10 drives + parity hooked up to an Intel expander, which is attached to a pair of reverse breakout cables from the LSI 2308 on the MB. (Running an unRAID 5.0.2 setup in a VM on ESXi.) I've also got a ZFS setup hanging off an M1015 in another VM.

 

[pre]Nov 22 22:16:08 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:16:11 tower last message repeated 2 times
Nov 22 22:16:26 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
Nov 22 22:16:31 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:16:33 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:16:34 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:16:38 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:16:40 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:17:01 tower last message repeated 12 times
Nov 22 22:17:03 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
Nov 22 22:17:05 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:17:17 tower last message repeated 7 times
Nov 22 22:17:19 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
Nov 22 22:17:23 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:17:24 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:17:27 tower last message repeated 2 times
Nov 22 22:17:42 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
Nov 22 22:17:44 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:18:14 tower last message repeated 16 times
Nov 22 22:18:18 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:18:23 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:18:24 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:18:33 tower last message repeated 3 times
Nov 22 22:18:34 tower kernel: mpt2sas0: log_info(0x31120303): originator(PL), code(0x12), sub_code(0x0303)
Nov 22 22:18:37 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:18:40 tower last message repeated 2 times
Nov 22 22:18:47 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:18:48 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:19:00 tower last message repeated 6 times
Nov 22 22:19:05 tower kernel: mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)
Nov 22 22:19:07 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:19:36 tower last message repeated 14 times
Nov 22 22:19:57 tower kernel: sd 0:0:11:0: attempting task abort! scmd(f244fd80)
Nov 22 22:19:57 tower kernel: sd 0:0:11:0: [sdm] CDB:
Nov 22 22:19:57 tower kernel: cdb[0]=0x85: 85 08 0e 00 00 00 01 00 00 00 00 00 00 00 ec 00
Nov 22 22:19:57 tower kernel: scsi target0:0:11: handle(0x0014), sas_address(0x5001e677b9e4dff7), phy(23)
Nov 22 22:19:57 tower kernel: scsi target0:0:11: enclosure_logical_id(0x5001e677b9e4dfff), slot(23)
Nov 22 22:19:58 tower kernel: sd 0:0:11:0: task abort: SUCCESS scmd(f244fd80)
Nov 22 22:20:00 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)
Nov 22 22:20:11 tower last message repeated 4 times
Nov 22 22:21:40 tower kernel: mpt2sas0: log_info(0x31111000): originator(PL), code(0x11), sub_code(0x1000)
Nov 22 22:21:42 tower kernel: mpt2sas0: log_info(0x31110d01): originator(PL), code(0x11), sub_code(0x0d01)[/pre]

Link to comment

I am running into some strange syslog messages that I haven't seen before in my travails. If anyone has any thoughts, I'm all ears. I've got 10 drives + parity hooked up to an Intel expander, which is attached to a pair of reverse breakout cables from the LSI 2308 on the MB. (Running an unRAID 5.0.2 setup in a VM on ESXi.) I've also got a ZFS setup hanging off an M1015 in another VM.

 

...what Intel expander? ...do you know that these have firmware too?

See: http://lime-technology.com/forum/index.php?topic=25412.msg221179#msg221179
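Also, for what it's worth: that aborted command (cdb[0]=0x85, ending in ec) is an ATA PASS-THROUGH(16) carrying an IDENTIFY DEVICE (0xEC), i.e. typically SMART/temperature polling. The task abort may just be a drive behind the expander responding slowly to it, but that's speculation on my part.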

Link to comment

It's the Intel RES2SV240. I use it as 2 in / 4 out, for essentially full-speed spinner bandwidth (since the 2308 is PCIe 3.0 at 8 lanes, if my math is correct I wouldn't be limiting any drive's speed even on a full parity sync; see the numbers below). The 8 other bays in the case are piped through the M1015 to napp-it. SSDs hang off the motherboard SATA ports for the VMs.
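Back-of-the-envelope numbers, assuming ~180 MB/s peak per spinner and ~600 MB/s usable per 6Gb/s SAS2 lane after 8b/10b encoding:

[pre]2308 uplink, PCIe 3.0 x8       : ~7.9 GB/s
2 x4 SAS2 links into expander  : 8 x 600 MB/s = ~4.8 GB/s
11 spinners at ~180 MB/s each  : ~2.0 GB/s  -> plenty of headroom[/pre]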

 

The last firmware for the expander was from July of last year; the card came with the latest.

 

I haven't identified the culprit yet, since I don't have a spare reverse breakout cable or backplane. Everything seems to be working perfectly with the motherboard though; the rebuild rate was intensely fast.

Link to comment

I've done some more troubleshooting today.

 

3 HDDs plugged into the onboard LSI 2308 controller:

WD4001FAEX (4TB WD Black)

WD40EFRX (4TB WD Red)

WD30EFRX (3TB WD Red)

On IT 15:

10 seconds to open utility

20 seconds to open SAS Topology & refresh display

Displays only WD4001FAEX as connected

On IR 15:

24 seconds to initialize

55 seconds to open utility

1 min 40 to get into SAS Topology & refresh display

Displays only WD4001FAEX as connected

 

On IT 16.0.1:

15 seconds to open utility

22 seconds to open SAS Topology & refresh display

Displays only WD4001FAEX as connected

After each firmware reload, the WD4001FAEX had its slot on the SAS controller changed and its SATA cable swapped with one of the other non-discovered HDDs.

All 3 HDDs connected to the RocketRAID 2720 are discovered.

It's got me pretty beat :(

Link to comment

It's got me pretty beat :(

 

Sure sounds like it.

In all experiments, if you remove the 3TB red and leave the other two, everything's sweet?

 

I know in a previous post I mentioned the 4TB Red worked, but it definitely does NOT work (I must have been confused by the Black drive). I spent nearly 3 hours troubleshooting this yesterday, and no combination saw a 3TB or 4TB Red drive working with this controller, with or without the RocketRAID controller installed.

 

I've been getting the firmware from Supermicro's FTP site, and am currently awaiting their response regarding compatibility.

 

Does anyone have the WD Se range working on this controller?

Link to comment

Does anyone have the WD Se range working on this controller?

 

Absolutely! I have a bunch of WD4000F9YZ hanging off this controller and they work beautifully. Personally, I'd recommend them over the greens (and reds), in a heartbeat. Solid, very cool, very quiet.

They come at a premium though.

Link to comment

Does anyone have the WD Se range working on this controller?

 

Absolutely! I have a bunch of WD4000F9YZ hanging off this controller and they work beautifully. Personally, I'd recommend them over the greens (and reds), in a heartbeat. Solid, very cool, very quiet.

They come at a premium though.

 

The distributor in Australia has just called and indicated that the Red drives are not supported on this LSI controller.

 

I've just pulled the trigger on a couple of these 4TB Se drives. It just hurts after dropping nearly $1k on HDDs two weeks ago, only to find that if I'd spent $200 more, most of my drives would have been compatible.

 

Supermicro said LSI may fix it with new firmware in the future, but in the short term it looks like I'm having a fire sale on some Red HDDs :P

Link to comment

Supermicro said LSI may fix it with new firmware in the future, but in the short term it looks like I'm having a fire sale on some Red HDDs :P

 

Or you could still connect the Reds to your RocketRAID controller, and have the Se's on the LSI side?

ESXi did some weird things when I virtualized unRAID. If I went to stop the array, it dropped all the disks connected to the RocketRAID; it just seemed to not play nice when doing PCI passthrough with ESXi.

Link to comment

ESXi did some weird things when I virtualized unRAID. If I went to stop the array, it dropped all the disks connected to the RocketRAID; it just seemed to not play nice when doing PCI passthrough with ESXi.

 

Have you tried RDM passthru for the individual drives, instead of passing the whole controller? Might work better for you in ESXi.

Just a thought.

Link to comment
  • 1 month later...

After reading through all of this I would love to just clarify something.

 

Is it possible to pass through the LSI 2308 SATA ports to be used with unRAID?

 

Yes, absolutely (I assume you're asking re ESXi). In fact you can do it in two different ways:

 

1. Pass the whole controller to the unRAID VM (this is, according to common wisdom in this forum, the recommended way).

2. Pass the specific drives using ESXi RDM (this is how I do it on my system, since I need one of the drives on the controller to serve as a datastore).

 

I tested both methods; both work very well.

Link to comment
