Emergency! Array won't start!



Hello everyone,

I've had a huge problem for hours now and could really use some help!

Today I swapped my motherboard, and now I can't start the array. All 4 drives are shown as "missing" (screenshot) and I'm freaking out...

It seems the LSI card no longer detects the HDDs. Instead of the drives, all I see is the message: "no supported devices found!" (screenshot)

Also important to mention: I use two drive cages in my case, and since I switched to the new motherboard only one of them works (the LEDs are off on the dead one). I changed nothing on the Unraid USB device.

The boot process also takes much longer now. It hangs for what feels like forever on this message:

 

Quote

Device "br1" does not exist.

Cannot find device "br1"

I don't know if this is related, but it seems to be a new message.
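For what it's worth, that message usually just means the NIC that `br1` was bridged to changed names with the new board, so the bridge can't be built. A minimal sketch to compare what the config expects with what actually exists, assuming Unraid's default config path on the flash drive (the variable names are what recent Unraid versions write, treat them as an assumption):

```shell
# Compare the NICs the bridge config expects with the NICs the new board has.
# /boot/config/network.cfg is Unraid's default location (assumption).
CFG=/boot/config/network.cfg
[ -f "$CFG" ] && grep -iE 'BRNAME|BRNICS' "$CFG"  # bridge name and member NICs
ls /sys/class/net                                 # interfaces that actually exist
```

If the member NIC listed in the config (e.g. eth0) doesn't appear in the second list, redoing the network settings from the GUI should rebuild the bridge.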

 

 

Here you see the missing devices:


 

Boot:


 

 

 


@Mor9oth Is your LSI card an actual LSI-branded card like the LSI 9211-8i, or is it a clone like the Dell H310 or IBM M1015? All three use the same LSI SAS2008 chipset and LSI 9211 firmware.

 

Also, it appears from the card's BIOS screen that you have IR firmware?

 

I have the Dell H310 with IT firmware (I flashed it without the card BIOS).

 

LSI 9211 IT = straight pass-through, no RAID options
LSI 9211 IR = pass-through as in IT mode, but you also have RAID options (RAID 0, RAID 1, RAID 1E and RAID 10)

 

If you are using the card in IR mode, do you have any RAID options set?
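To confirm from the OS which firmware the card actually runs, rather than guessing from the BIOS screen, LSI's sas2flash utility can report it; a sketch, assuming sas2flash from Broadcom's 9211 firmware package is on the system:

```shell
# "Firmware Product ID" in the listing ends in -IT or -IR,
# which tells you which firmware variant is flashed.
if command -v sas2flash >/dev/null 2>&1; then
    sas2flash -list
else
    echo "sas2flash not installed"
fi
```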

Quote

Is your LSI card an actual LSI-branded card like the LSI 9211-8i, or is it a clone like the Dell H310 or IBM M1015? All three use the same LSI SAS2008 chipset and LSI 9211 firmware.

It is an actual LSI card: the SAS 9211-8i (LSI SAS2008).

 

Quote

Also, it appears from the card's BIOS screen that you have IR firmware?

Yes, it has the 18.00.00.00-IR firmware.


 

Quote

LSI 9211 IT = straight pass-through, no RAID options
LSI 9211 IR = pass-through as in IT mode, but you also have RAID options (RAID 0, RAID 1, RAID 1E and RAID 10)

 

If you are using the card in IR mode, do you have any RAID options set?

It seems it is running in IT mode; I do have RAID options, but I have never set any.

 


 

With the old motherboard everything worked from the start. I just plugged the card in and all 4 HDDs were detected, because of the IT mode I guess.

 

I found the following options in my motherboard's BIOS. Could these have something to do with the problem?

Does the card maybe require some other boot option?


 

The card is in PCIe slot 4, and I don't have any other PCIe devices plugged in.


 

 

 


I just checked the drive cages. Only the left one shows any power/LED activity. I swapped all the cables and moved the HDDs around (all 4 in one cage, or 2 in each), but in every case only the left cage works... So it seems the right backplane died when I switched the motherboard. Maybe that explains why it smelled a little burnt yesterday when I switched it on. 😬

 

But that doesn't explain why the 4 HDDs in the left cage aren't detected by the LSI card either, right? Or does it? Do I need to split them up, or can I run all 4 HDDs in a single cage?

 


54 minutes ago, Mor9oth said:

Well ...

I looked closer at the backplane. It looks like there was heat damage. I guess this backplane is dead ...

But how does that matter if I just use the left cage?

 


The cage is powered through a PCIe plug, which carries 12 V only; the 5 V rail is generated on the backplane itself. If no LEDs come on, that circuit has usually burned out. If you know electronics, you can supply the 5 V externally and reuse that backplane.

 

Please isolate the faulty cage first, and don't touch the HBA's firmware for now, because it worked before and seems to be working currently.

 

Some bad news: the disks may have burned out too. Please verify by connecting one of them directly to an onboard SATA port.
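To go with that, here is a sketch of how to check from the console whether a disk is detected at all, independent of Unraid's GUI (the sdX names are examples, and dmesg may need root):

```shell
# A detected disk shows up under /sys/block even with nothing mounted on it.
found=0
for d in /sys/block/sd*; do
    [ -e "$d" ] || continue
    found=1
    echo "$(basename "$d"): $(cat "$d/device/model" 2>/dev/null)"
done
[ "$found" -eq 1 ] || echo "no sdX disks detected"
# The kernel log shows detection attempts per port:
dmesg 2>/dev/null | grep -iE 'ata[0-9]|sd[a-z]' | tail -n 20
```

If a disk never appears here even on a known-good onboard port, the disk itself (not the HBA or cage) is the suspect.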


Can't believe it! OMG! What's happening here!?

 

Quote

Some bad news: the disks may have burned out too. Please verify by connecting one of them directly to an onboard SATA port.

Can you see this somewhere? In the diagnostics?

 

What I tried next: I plugged all the HDDs directly into the motherboard's SATA ports to test whether they work (screenshot).

Then I booted up, but they still don't show up. Then I felt sick! Are they actually all gone? 😵

God - please no!

 

Then I connected each of them separately to my desktop PC's SATA ports; none of them show up.

 

And I can't hear them spin up at all.

WTF! I've never had any HDD issues in my whole life, and now all of them die at the same time?!

 

And now? Is this server too dangerous to keep using? Was it the HBA card?

What is the best way to continue now?

 

Are shucked HDDs bad? OMG! I can't believe it!

 


 

 

 

 


 

39 minutes ago, Mor9oth said:

Can you see this somewhere? In the diagnostics?

Just a guess based on the symptoms.

 

39 minutes ago, Mor9oth said:

Are shucked HDDs bad?

It's not related to shucking as such, but please check whether the 3.3 V pin issue applies: use a Molex-to-SATA adapter (Molex has no 3.3 V line) to power those WD 14 TB disks.

 

Were those cages in use before?

 

The problem is also not caused by the HBA. Did anything go wrong when you connected the cage power?

 

Please confirm: only the mainboard was changed, and everything worked normally before?

Quote

It's not related to shucking as such, but please check whether the 3.3 V pin issue applies: use a Molex-to-SATA adapter (Molex has no 3.3 V line) to power those WD 14 TB disks.

I checked for the 3.3 V problem before I purchased the case. At least I thought so ...

So Molex-to-SATA power, but how do I connect that to the backplane? The backplane is powered by a 6-pin connector.

 

Quote

Were those cages in use before?

Yes, I've used this setup for the last 6 months (since April).

 

Quote

The problem is also not caused by the HBA. Did anything go wrong when you connected the cage power?

Thinking it all over again: I had the server running to check that everything was working and noticed that one data cable on the HBA (a mini-SAS SFF-8643 to SFF-8087 cable) wasn't fully plugged in. Well ... then I plugged it in while everything was powered on. I don't remember exactly whether I already had all the problems before that ...

Could this have killed all 4 drives? It seems weird to me, since it was the data cable.

 

But I also noticed a sound, and then that burnt-hardware smell.

 

Quote

Please confirm: only the mainboard was changed, and everything worked normally before?

I confirmed this!

 

So what is the best way to continue with the build? I really want to make sure everything is safe; I don't want any more damage, or worse.

Can I reuse my server?

 

Is it possible to save the data on the drives somehow?

38 minutes ago, Mor9oth said:

I confirmed this!

Then the cage shouldn't have the 3.3 V issue. That said, I don't know why it killed all your disks. (You need to carefully confirm that the disks are really dead.)

 

38 minutes ago, Mor9oth said:

It seems weird to me, since it was the data cable.

I also don't think it killed anything.

 

38 minutes ago, Mor9oth said:

Can I reuse my server?

Yes, but not the faulty cage, and you must test the other cage with a spare disk before putting it back into production.

 

38 minutes ago, Mor9oth said:

Is it possible to save the data of the drives somehow? 

If you have a multimeter, set it to resistance measurement, connect a Molex-to-SATA adapter to one of the disks, and measure the resistance between 5 V and ground (the red and black wires), cross-checking by swapping the red and black probes. If it reads zero (a short), it could mean only the TVS diode is dead; if you're lucky, removing the TVS diode will bring the disk back.

1 hour ago, Vr2Io said:

Then the cage shouldn't have the 3.3 V issue. That said, I don't know why it killed all your disks. (You need to carefully confirm that the disks are really dead.)

I also don't think it killed anything.

Yes, but not the faulty cage, and you must test the other cage with a spare disk before putting it back into production.

If you have a multimeter, set it to resistance measurement, connect a Molex-to-SATA adapter to one of the disks, and measure the resistance between 5 V and ground (the red and black wires), cross-checking by swapping the red and black probes. If it reads zero (a short), it could mean only the TVS diode is dead; if you're lucky, removing the TVS diode will bring the disk back.

This sounds very promising. Unfortunately, I'm not a technician. Is there a tutorial for this anywhere? I also don't own a multimeter.

But if I do this, how long can I keep using the disks? It sounds dangerous without the protection diode.


The method I described is a chance to get the data back; in fact, it isn't guaranteed to work, and it will void the warranty. You need to make a decision:

 

- Put the disks back in their original enclosures and RMA them, but lose all the data.

- DIY the data recovery, but this may make things worse (and you lose the disk warranty).

- Get a professional data recovery service (you also lose the disk warranty).

 

Please confirm the disks are really dead, i.e. use the good cage and connect the 1 TB disk first as a test. Then put in one of the 14 TB disks and see whether it is detected.

 

I notice it is a SilverStone CS381 case; you can try contacting SilverStone too, but they can't help with the dead disks or the data.

5 minutes ago, Vr2Io said:

The method I described is a chance to get the data back; in fact, it isn't guaranteed to work, and it will void the warranty. You need to make a decision:

- Put the disks back in their original enclosures and RMA them, but lose all the data.

- DIY the data recovery, but this may make things worse (and you lose the disk warranty).

- Get a professional data recovery service (you also lose the disk warranty).

Please confirm the disks are really dead, i.e. use the good cage and connect the 1 TB disk first as a test. Then put in one of the 14 TB disks and see whether it is detected.

I notice it is a SilverStone CS381 case; you can try contacting SilverStone too.

I don't care about the warranty. I just want my system back.

So if I buy a multimeter, how do I remove the TVS diode? Just with pliers? Or do I also need a soldering iron?

