cliewmc — Posted November 19, 2019

Hi there, I added two WD 8TB drives to a perfectly working unRAID system. When the system boots up, it goes into Stale Configuration. I have attached the diagnostics file, generated when the system could not gracefully shut down after 60 seconds. Further to this, one of my new 8TB drives (Disk 7) became disabled. Any advice will be greatly appreciated. Regards, cL

clnasty-diagnostics-20191119-2208.zip
cliewmc — Posted November 19, 2019

And it gets worse: unRAID comes up on the attached monitor but the GUI does not connect, and only after about 5 minutes does it appear. Then it says "starting services" for a long time... when I click on the log, it says "waiting for 192.168.0.11..."
JorgeB — Posted November 19, 2019

There's a problem with one of them since early booting:

    Nov 19 22:03:25 cLNASty kernel: sd 6:0:11:0: [sdn] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=0x00
    Nov 19 22:03:25 cLNASty kernel: sd 6:0:11:0: [sdn] tag#0 CDB: opcode=0x88 88 00 00 00 00 03 a3 81 2a a8 00 00 00 08 00 00
    Nov 19 22:03:25 cLNASty kernel: print_req_error: I/O error, dev sdn, sector 15628053160
    Nov 19 22:03:25 cLNASty kernel: sd 6:0:11:0: [sdn] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Nov 19 22:03:25 cLNASty kernel: sd 6:0:11:0: [sdn] tag#1 Sense Key : 0x2 [current]
    Nov 19 22:03:25 cLNASty kernel: sd 6:0:11:0: [sdn] tag#1 ASC=0x4 ASCQ=0x0
    Nov 19 22:03:25 cLNASty kernel: sd 6:0:11:0: [sdn] tag#1 CDB: opcode=0x88 88 00 00 00 00 03 a3 81 2a a8 00 00 00 08 00 00
    Nov 19 22:03:25 cLNASty kernel: print_req_error: I/O error, dev sdn, sector 15628053160

Check the connections; there's not even a valid SMART report.
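As a side note, if you want to pull just the I/O-error lines for a suspect device out of a saved syslog yourself, a small shell helper like the sketch below works. The `/var/log/syslog` path and the `sdn` name come from the log excerpt above; adjust both to your own system.

```shell
# io_errors LOG DEV: print kernel I/O-error lines for device DEV found in LOG.
io_errors() {
    grep "I/O error" "$1" | grep "dev $2"
}

# Example against the live unRAID syslog (stock location, sdn from this thread):
# io_errors /var/log/syslog sdn
```

Repeated hits on the same sector (as in the log above, sector 15628053160 twice) point at a single bad spot or a dying link rather than random cabling noise, though with no valid SMART report the connection is still the first thing to check.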
cliewmc — Posted November 19, 2019

Thanks johnnie. I suspect I may have damaged the SATA connector on the drive while removing it from its external casing. I am removing both drives to test now.
cliewmc — Posted November 19, 2019

So on reboot it goes back to normal, but Disk 7 remains disabled. Any way to bring it back to life? Thanks johnnie for your assistance; I can never say that enough.
cliewmc — Posted November 19, 2019

I have started the array in maintenance mode and am now running an xfs_repair check with -n to see what comes up.
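For anyone following along, the read-only check on an emulated array disk looks like the sketch below. `/dev/md7` is assumed here to be unRAID's parity-protected device for slot 7; match it to your own disk number (newer unRAID releases use `/dev/md7p1`).

```shell
# check_fs DEV: run a read-only xfs_repair (-n = report only, change nothing)
# against DEV. Run it only with the array started in maintenance mode, and
# only against the /dev/mdX device, never the raw /dev/sdX disk, otherwise
# parity gets invalidated.
check_fs() {
    if [ -e "$1" ]; then
        xfs_repair -n "$1"
    else
        echo "device $1 not present"
    fi
}

# check_fs /dev/md7    # Disk 7 in this thread; the slot number is an assumption
```

If the -n pass reports errors, re-running xfs_repair without -n (still in maintenance mode) applies the fixes; a dirty log may require mounting the filesystem once first or, as a destructive last resort, the -L flag.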
JorgeB — Posted November 19, 2019

Once a disk gets disabled it remains so until rebuilt; make sure the emulated disk is mounting correctly before rebuilding on top.
cliewmc — Posted November 19, 2019

Thanks. I had to stop the array, assign "No Device" to Disk 7, start the array, stop it again, reassign the same disk, and restart to get the data rebuild going. I wonder what I can do with the two new WD drives; any ideas? One gave the errors, and the other was not detected at all in Unassigned Devices. I think I will try adding the missing one again after my unRAID gets back to a stable state. My plan was to install a second parity drive and add another 8TB to the array, so I will have to wait a day or two until the rebuild is complete.
Frank1940 — Posted November 19, 2019

2 hours ago, cliewmc said: the other was not detected at all in Unassigned Devices.

You may have this problem: https://www.instructables.com/id/How-to-Fix-the-33V-Pin-Issue-in-White-Label-Disks-/

This is one way to fix it. The other way is to use a Molex-to-SATA power adapter.
cliewmc — Posted November 19, 2019

Hi Frank1940, thanks! The response was extremely useful, especially since I might otherwise have ignorantly ended up with two bricks. Out of warranty, may I add. Cheers.
cliewmc — Posted November 22, 2019

My friend told me to plug the troublesome drives (with the 3.3V pin issue) into my hot-swap cages, and lo and behold, I was able to see them. I am now preclearing them all in one go! It seems the hot-swap cages work around the power issue. I am really grateful for the peer expertise here, including you guys. My unRAID is hopefully on its way back to a healthy state. Thank you all.