Mor9oth Posted October 9, 2020
Hello everyone, I have had a huge problem for hours now and could really use some help! Today I changed my motherboard and now I can't start the array. All 4 drives are shown as "missing" (screenshot) and I am freaking out... It seems that the LSI card no longer detects the HDDs. Instead of the drives I only see the message: "no supported devices found!" (screenshot) Also important to mention: I use two drive cages in my case, and since I switched to the new motherboard only one of them is working (the LEDs are off on the other one). I changed nothing on the Unraid USB device. The boot process also takes much longer now. It stops for what feels like forever on the message:
Quote:
Device "br1" does not exist.
Cannot find device "br1"
I don't know if this is related, but it seems to be a new message. Here you see the missing devices: (screenshot) Boot: (screenshot)
Hoopster Posted October 9, 2020
11 minutes ago, Mor9oth said:
Today I changed my motherboard and now I can't start the array
Might be a good idea to post your complete diagnostics. If you can't get them through the GUI, go to the console command line and type 'diagnostics', which will save them to your flash drive.
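A minimal sketch of what that looks like on the console. This assumes Unraid's built-in `diagnostics` command writes its zip to /boot/logs on the flash drive; the `latest_diagnostics` helper is hypothetical, just to show where to look for the file to attach:

```shell
# Hypothetical helper: print the newest diagnostics zip in a directory, if any.
latest_diagnostics() {
    ls -t "$1"/*-diagnostics-*.zip 2>/dev/null | head -n 1
}

# On the Unraid console you would run:
#   diagnostics                    # Unraid's built-in collector
#   latest_diagnostics /boot/logs  # path where the zip is assumed to land
# and attach the zip it prints to your forum post.
latest_diagnostics "${1:-/boot/logs}"
```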
Mor9oth Posted October 9, 2020
@Hoopster Thank you! I didn't know that... Here is the file: cortex-diagnostics-20201010-0021.zip
trurl Posted October 9, 2020
If the BIOS can't see the disks, Unraid can't either. Try reseating the controller, checking all connections, and so on. No doubt your data is fine; you just need to resolve your hardware issue.
Hoopster Posted October 10, 2020
@Mor9oth Is your LSI card an actual LSI-branded card like the LSI 9211-8i, or is it a clone like the Dell H310 or IBM M1015? All three use the same LSI SAS2008 chipset and LSI 9211 firmware. Also, it appears from the SATA card BIOS screen that you have IR firmware? I have the Dell H310 with IT firmware (I installed it without the card BIOS).
LSI 9211 IT = straight pass-through, no RAID options
LSI 9211 IR = pass-through as in IT mode, but you also have RAID options (RAID 0, RAID 1, RAID 1E and RAID 10)
If you are using the card in IR mode, do you have any RAID options set?
ChatNoir Posted October 10, 2020
Also, what motherboard are you coming from?
Mor9oth Posted October 10, 2020
41 minutes ago, ChatNoir said:
Also, what motherboard are you coming from?
I replaced the ASRock Rack E3C242D4U2-2T with my new ASRock Rack E3C246D4U2-2L2T, because I had no iGPU support with the old board.
Mor9oth Posted October 10, 2020
Quote:
Is your LSI card an actual LSI-branded card like the LSI 9211-8i, or is it a clone like the Dell H310 or IBM M1015? All three use the same LSI SAS2008 chipset and LSI 9211 firmware.
It is an actual LSI card: the SAS 9211-8i (LSI SAS2008).
Quote:
Also, it appears from the SATA card BIOS screen that you have IR firmware?
Yes, it has the 18.00.00.00-IR firmware.
Quote:
LSI 9211 IT = straight pass-through, no RAID options. LSI 9211 IR = pass-through as in IT mode, but you also have RAID options (RAID 0, RAID 1, RAID 1E and RAID 10). If you are using the card in IR mode, do you have any RAID options set?
It seems to be running like IT mode: I do have RAID options, but I have never set any of them. With the old motherboard everything worked from the start. I just plugged the card in and all 4 HDDs were detected, because of the IT mode I guess. I found the following options in my motherboard BIOS (screenshot); could this have something to do with the problem? Does the card maybe require some other boot option? The card is in PCIe slot 4 and I do not have any other PCIe devices plugged in.
Mor9oth Posted October 10, 2020
I just checked the drive cages, since only the left one shows any power/LED light. I switched all the cables and tried the HDDs in different arrangements (all 4 in one cage, or 2 in each), but in every case only the left cage seems to work... So it feels like the right backplane was destroyed when I switched the motherboard. Maybe that explains why it smelled a little burnt yesterday when I switched it on. 😬 But that does not explain why the 4 HDDs in the left cage aren't detected by the LSI card either, right? Or does it? Do I need to split them up, or can I use all 4 HDDs in just one cage?
Mor9oth Posted October 10, 2020
Well... I looked closer at the backplane. Seems like there was heat damage. I guess this backplane is dead... But how does that have an effect if I just use the left cage?
Mor9oth Posted October 10, 2020
Quote:
I changed nothing on the Unraid USB device.
Quote:
Device "br1" does not exist. Cannot find device "br1"
Do I also have to delete/replace something on the Unraid USB device?
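Possibly, yes: after a motherboard swap the NIC names and MAC addresses change, so the bridge settings stored on the flash drive can point at an interface (the NIC behind br1) that no longer exists. A common approach is to move the stale network config aside so Unraid regenerates defaults on the next boot. This is only a sketch, assuming the standard Unraid flash layout under /boot/config; `reset_network_cfg` is a hypothetical helper name:

```shell
# Sketch: move stale network config aside so Unraid rebuilds defaults
# on the next boot. 'reset_network_cfg' is a hypothetical helper name;
# the /boot/config paths assume the standard Unraid flash layout.
reset_network_cfg() {
    for cfg in "$@"; do
        if [ -f "$cfg" ]; then
            mv "$cfg" "$cfg.bak"    # keep a backup instead of deleting
            echo "moved $cfg aside"
        fi
    done
}

# On the Unraid box (then reboot):
reset_network_cfg /boot/config/network.cfg /boot/config/network-rules.cfg
```

After the reboot, the bridges can be rebuilt from the Network Settings page once the new board's interfaces are detected.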
Vr2Io Posted October 10, 2020
54 minutes ago, Mor9oth said:
Well... I looked closer at the backplane. Seems like there was heat damage. I guess this backplane is dead... But how does that have an effect if I just use the left cage?
You use a PCIe plug to provide power, and that is 12V only; the 5V is generated on the backplane. If no LEDs are on, it has usually already burned out. If you know electronics, you can supply the 5V externally and reuse that backplane. Please isolate the faulty cage first, and don't touch the HBA's firmware at the moment, because it worked before and seems to be working now. The bad news is the disks may also have burned out. Please verify by connecting one of those disks directly to an onboard SATA port.
Mor9oth Posted October 10, 2020
Can't believe it! OMG! What's happening here!?
Quote:
The bad news is the disks may also have burned out. Please verify by connecting one of those disks directly to an onboard SATA port.
Can you see this somewhere? In the diagnostics? What I tried next: I plugged all HDDs directly into the motherboard's SATA ports to test whether they work (screenshot). Then I booted up, but they are all still not showing up. Then I felt sick! Are they actually all gone? 😵 God, please no! Then I plugged each one separately into my computer's SATA, and none of them show up. I also do not hear them spin up at all. WTF! I never had any HDD issues in my whole life, and now they all die at the same time?! And now? Is this server too dangerous to use anymore? Was it because of the HBA card? What is the best way to continue now? Are shucked HDDs bad? OMG! Can't believe it!
Mor9oth Posted October 10, 2020
I tested two ancient 1 TB HDDs and they show up. So I guess this is confirmation that the others are dead.
Vr2Io Posted October 10, 2020
39 minutes ago, Mor9oth said:
Can you see this somewhere? In the diagnostics?
Just a guess based on the symptoms.
39 minutes ago, Mor9oth said:
Are shucked HDDs bad?
It's not related to shucking or not, but please check whether it is the 3.3V-pin issue: use a Molex-to-SATA adapter (Molex has no 3.3V) to power those WD 14TB disks. Were those cages in use before? The problem is also not caused by the HBA; did anything go wrong when you connected the cage power? Please confirm that only the motherboard was changed and everything worked normally before.
Mor9oth Posted October 10, 2020
Quote:
It's not related to shucking or not, but please check whether it is the 3.3V-pin issue: use a Molex-to-SATA adapter (Molex has no 3.3V) to power those WD 14TB disks.
I checked for the 3.3V problem before I purchased the case. At least I thought so... So, Molex-to-SATA power: how do I connect that to the backplane? The backplane is powered by a 6-pin connector.
Quote:
Were those cages in use before?
Yes, I have used this setup for the last 6 months (since April).
Quote:
The problem is also not caused by the HBA; did anything go wrong when you connected the cage power?
Thinking it all over: I had the server on to check whether everything was working and realized that one data cable on the HBA (mini-SAS cable, SFF-8643 to SFF-8087) wasn't fully plugged in. Well... then I plugged it in while everything was powered on. I don't remember exactly whether I had all the problems before that... Could this have killed all 4 drives? It seems weird to me since it was the data cable. But I also noticed a sound and then the smell of burnt hardware.
Quote:
Please confirm that only the motherboard was changed and everything worked normally before.
I confirm this! So what is the best way to continue with the build? I mean, I really want to make sure that everything is safe. I don't want any more damage, or worse. Can I reuse my server? Is it possible to save the data on the drives somehow?
Vr2Io Posted October 10, 2020
38 minutes ago, Mor9oth said:
I confirm this!
Then it means the cage doesn't have the 3.3V issue; by the way, I don't know why it killed all your disks. (You need to carefully confirm whether the disks are really dead.)
38 minutes ago, Mor9oth said:
It seems weird to me since it was the data cable.
I also don't think that killed anything.
38 minutes ago, Mor9oth said:
Can I reuse my server?
Yes, but not the faulty cage, and you must test the other cage with some spare disks before putting it into production again.
38 minutes ago, Mor9oth said:
Is it possible to save the data on the drives somehow?
If you have a multimeter, set it to resistance measurement, connect a Molex-to-SATA adapter to one of the disks, and measure the resistance between 5V and GND (the red and black wires), cross-checking by swapping the red and black probes. If it reads zero (a short), it could mean that only the TVS diode is dead; if you are lucky, removing the TVS diode may bring the disk back.
Mor9oth Posted October 10, 2020
1 hour ago, Vr2Io said:
If you have a multimeter, set it to resistance measurement, connect a Molex-to-SATA adapter to one of the disks, and measure the resistance between 5V and GND (the red and black wires), cross-checking by swapping the red and black probes. If it reads zero (a short), it could mean that only the TVS diode is dead; if you are lucky, removing the TVS diode may bring the disk back.
This sounds very promising. Unfortunately I am not a technician. Is there a tutorial on this? I also do not own a multimeter. But if I do this, how long can I use the disks afterwards? I mean, it sounds dangerous without the protection diode.
Mor9oth Posted October 10, 2020
But I feel like - why not just buy one ^^
Vr2Io Posted October 10, 2020
The method I described is a chance to get your data back; it isn't guaranteed to work, and it will void the warranty. You need to make a decision:
- Put the disks back into their original enclosures and RMA them, but lose all data.
- DIY to get the data back, but this may make things worse (and you lose the disk warranty).
- Get a professional data-recovery service (you also lose the disk warranty).
Please confirm the disks are really dead, i.e. use the good cage and connect the 1TB disk to test first, then put in one 14TB disk and see whether it is detected. I notice it is a SilverStone CS381 case; you can try contacting SilverStone too, but they can't help with the dead disks or the data.
Vr2Io Posted October 10, 2020
3 minutes ago, Mor9oth said:
But I feel like - why not just buy one ^^
What do you mean? I don't understand.
Mor9oth Posted October 10, 2020
2 minutes ago, Vr2Io said:
What do you mean? I don't understand.
Buy a multimeter.
Vr2Io Posted October 10, 2020
7 minutes ago, Mor9oth said:
Buy a multimeter.
Note: if you go the RMA route and are OK with losing all the data, then you can forget the multimeter. After that, contact SilverStone to get a good backplane; that's all.
Mor9oth Posted October 10, 2020
5 minutes ago, Vr2Io said:
The method I described is a chance to get your data back; it isn't guaranteed to work, and it will void the warranty. You need to make a decision: put the disks back into their original enclosures and RMA them, but lose all data; DIY to get the data back, but this may make things worse (and you lose the disk warranty); or get a professional data-recovery service (you also lose the disk warranty). Please confirm the disks are really dead, i.e. use the good cage and connect the 1TB disk to test first, then put in one 14TB disk and see whether it is detected. I notice it is a SilverStone CS381 case; you can try contacting SilverStone too.
I don't care about the warranty. I just want my system back. So if I buy a multimeter, how do I remove the TVS diode? Just with pliers? Or do I also need a soldering iron?
Mor9oth Posted October 10, 2020
What do you even mean by RMA disk? 🙈