OrangePeel Posted March 21, 2017

Hi all,

Turned my server off for about 3 weeks. Moved it to a new location, fired it up, upgraded to the newest version, and now I've got HDD issues. I've got one hard drive that is unmountable and one that is disabled. Have I lost the data on those two?

The first one says it is disabled. It seems to have an emulated disk. Can I copy my data from the emulation? I tried doing a new config with trusting the parity, but it still got disabled. Could the data still be on this drive?

The one that is unmountable says it doesn't have a file system. I ran xfs_repair and got:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
...Sorry, could not find valid secondary superblock
Exiting now.

Is there anything I can do to save the data on either of these drives? What should my next step be?

Brandon
trurl Posted March 21, 2017

12 minutes ago, OrangePeel said:
What should my next step be?

Your first step should have been to ask for advice. If what you describe doing is complete and accurate, I don't think you have made anything worse, but don't do anything else without further advice.

Go to Tools - Diagnostics and post the complete zip.
OrangePeel (Author) Posted March 21, 2017

Thanks for the reply. To begin with, I only had the disabled drive. The unmountable drive showed up later. I've attached the diagnostics.

Brandon

unraid-diagnostics-20170321-1026.zip
trurl Posted March 21, 2017

There are a couple of things in your OP that concern me.

1 hour ago, OrangePeel said:
I tried doing a new config with trusting the parity, but still got disabled.

This is not really the best way to proceed generally, but the disk should not have been disabled after you did it, so I'm not sure what you did there.

1 hour ago, OrangePeel said:
I ran xfs_repair

Are you sure the unmountable disk is XFS? It looks like all your other disks are ReiserFS.

SMART for all disks looks OK. The unmountable disk4 shouldn't prevent rebuilding disk2, and the disabled (emulated) disk2 shouldn't prevent repairing the filesystem on disk4. IF everything works right.

I think I would do the rebuild first since that is the quickest way to get back to parity protection. You could rebuild disk2 to itself, but if you had a spare you could rebuild to, that would leave the original intact in case there were any problems.

Did you check the connections to disk2? That is the most common reason for a disk getting disabled.

But before doing anything, do you have backups?
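[Editor's note] Before running any repair tool, it is worth confirming which filesystem is actually on the partition, since running xfs_repair against a ReiserFS disk can make things worse. A minimal sketch, assuming the unmountable drive is disk4 and the array is started in Maintenance mode (the md device number here is an assumption; match it to your disk slot):

```shell
# Report the filesystem signature without writing to the disk.
# In unRAID, /dev/md4 is the parity-protected device for disk4;
# adjust the number to your actual slot.
blkid /dev/md4
```

If blkid reports reiserfs rather than xfs, the reiserfsck tool is the one to use, not xfs_repair.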
trurl Posted March 21, 2017

2 minutes ago, trurl said:
the disk should not have been disabled after you did it

Could be it wasn't disabled after, but just got disabled again. There are write errors to the disk in your syslog, and it looks like you probably rebooted at some point before that, so whatever happened before, we don't know.
OrangePeel (Author) Posted March 21, 2017

Thanks for taking a look, trurl. Really appreciate it.

I do have my important data backed up and offsite (Glacier, although I'll be switching to CrashPlan after this). The only thing I risk losing is movies, and those are no big deal. I'd rather not have to rip them again, but it is what it is.

I bought a new HDD from Amazon last night and they delivered today (thank you, Amazon Prime). I did check the connections to disk2 and did not find any issues. I even took it out of the hot swap bay and put it back in.

Is there anything in particular I should do to rebuild disk2 with the new disk? It is larger than disk2 (disk2 is 2TB, this one is 3TB), but matches the size of my parity. Disk2 also has the most data on it.

Thanks again.

Brandon
trurl Posted March 21, 2017

3 minutes ago, OrangePeel said:
Is there anything in particular I should do to rebuild the disk2 with the new disk?

Shut down, then replace disk2 with the new disk. Be careful you don't disturb any connections. Boot up and assign the new disk to the disk2 slot. Starting the array will start the rebuild.
OrangePeel (Author) Posted March 21, 2017

If I have already connected the new HDD, can I just change the disk2 slot to the new disk and then start the array?

Brandon
trurl Posted March 21, 2017

6 minutes ago, OrangePeel said:
If I have already connected the new HDD, can I just change the disk2 slot to the new disk and then start the array?

Yes
OrangePeel (Author) Posted March 21, 2017

Got to thinking about it and realized I was being a little overcautious... lol... it's currently rebuilding. Thank you for the help.

After this finishes, maybe/hopefully I'll be able to run the correct file system tools and get that filesystem corrected too, or worst case replace and rebuild from parity.

Brandon
trurl Posted March 21, 2017

Just now, OrangePeel said:
After this finishes, maybe/hopefully I'll be able to run the correct file system tools and get that filesystem corrected too, or worst case replace and rebuild from parity.

Rebuilding a disk will not correct its filesystem.
OrangePeel (Author) Posted March 21, 2017

Well... this didn't go according to plan. The drive supposedly rebuilt... but I can't access the folders and files on it. And it seemed to rebuild very quickly...

What should I do?

Edit: Found this in the log:

Mar 21 12:45:03 Unraid kernel: REISERFS error (device md2): vs-13070 reiserfs_read_locked_inode: i/o failure occurred trying to find stat data of [239 3698 0x0 SD]
Mar 21 12:45:03 Unraid kernel: REISERFS warning: reiserfs-5090 is_tree_node: node level 0 does not match to the expected one 2
Mar 21 12:45:03 Unraid kernel: REISERFS error (device md2): vs-5150 search_by_key: invalid format found in block 212575667. Fsck?

What should I do next?

Brandon

unraid-diagnostics-20170321-1249.zip
JorgeB Posted March 21, 2017

The disk2 rebuild is completely corrupt, but the old disk2 is probably OK. You're having issues with the LSI controller, probably VMware related. What version did you upgrade from?

Try changing your syslinux config default boot entry to look like this:

Quote
label unRAID OS
  menu default
  kernel /bzimage
  append mpt3sas.msix_disable=1 initrd=/bzroot

Then reboot and post new diags.
OrangePeel (Author) Posted March 24, 2017

Sorry this took so long... work has been kicking my ass. I came from 15rc16, or something similar to that. I changed the append line to what you recommended, rebooted, and the results are attached.

Thanks for your help with this!

Brandon

unraid-diagnostics-20170324-1007.zip
JorgeB Posted March 24, 2017

The controller errors are gone, so that's good. Like I said before, the disk2 rebuild is completely corrupt, so you'll need to do a new config with the old disk2.

You also need to run reiserfsck on disk4:
https://lime-technology.com/wiki/index.php/Check_Disk_Filesystems#Drives_formatted_with_ReiserFS_using_unRAID_v5_or_later
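[Editor's note] The linked wiki procedure boils down to a couple of commands. A sketch, assuming the array is started in Maintenance mode so the filesystem is not mounted, and that disk4 maps to /dev/md4 (the device number is an assumption; match it to your slot):

```shell
# Read-only pass first: report problems without changing anything on disk.
reiserfsck --check /dev/md4

# Only if --check explicitly recommends it:
# reiserfsck --fix-fixable /dev/md4

# --rebuild-tree is a last resort; image or back up the disk first if at all possible.
```

Running the check against the md device (rather than the raw sdX device) keeps parity in sync with any repairs.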
OrangePeel (Author) Posted March 24, 2017

It's still saying drive 2 is bad... maybe it really is... and the rebuild failed again, too. May be screwed on that one.

I'll run reiserfsck on disk4 tonight when I get home from work and go from there.

Brandon

unraid-diagnostics-20170324-2334.zip
JorgeB Posted March 24, 2017

Well, the controller errors are back. Almost certainly related to VMware, but since I don't use it, I can't say for sure and can't help any further.
OrangePeel (Author) Posted March 24, 2017

That's very strange. Thank you for trying. I'll see what I can come up with on Google. Any idea if downgrading to an older version of Unraid would help?

Brandon
JorgeB Posted March 24, 2017

You could go back to the release you were on to confirm. IIRC there were some issues with VMware and LSI controllers from v6.2 on.
OrangePeel (Author) Posted March 24, 2017

Thanks. May try that.

Brandon
OrangePeel (Author) Posted April 17, 2018

Finally getting around to fixing this a year later... Going back to version 5.0 resolved the weird disk errors. I did have an HDD die, though, so I'm currently rebuilding the array with its replacement, then I'll try to upgrade again. Hopefully the new versions will address whatever issue caused this a year ago, assuming it was an incompatibility or something.

Brandon