Twisted Posted February 27, 2018

I removed an empty drive from the array that used to have some data on it, as I was going to send it in for service. I moved all of the files over to another drive and followed the instructions below. After doing this, my main data drive now reads unmountable: no file system. Is there any way to get my data back?

unRAID Drive Removal
Stop the array by pressing "Stop" on the management interface.
...
Select the 'Utils' tab.
Choose "New Config".
Agree and create a new config.
Reassign all of the drives you wish to keep in the array.
Start the array and let parity rebuild.
JorgeB Posted February 27, 2018

Please post your diagnostics: Tools -> Diagnostics
trurl Posted February 27, 2018

Looks like you must have been following instructions for an old version of unRAID, since V6 doesn't have a "Utils" tab. Are you using V6? I hope you haven't accidentally assigned the wrong disk to parity and overwritten your data. You really should ask on the forum before doing anything you are unsure about.
Twisted Posted February 27, 2018

Here is my file. Thank you for taking a look at this for me.

area51-diagnostics-20180227-1103.zip
JorgeB Posted February 27, 2018

You need to check the filesystem on disk1 (md1): https://lime-technology.com/wiki/Check_Disk_Filesystems#Drives_formatted_with_XFS
Twisted Posted February 27, 2018

Here were the results:

root@Area51:~# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 1482304 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 15893 tail block 15893
        - scan filesystem freespace and inode maps...
finobt ir_freecount/free mismatch, inode chunk 0/1246223264, freecount 52 nfree 54
agi_freecount 52, counted 54 in ag 0
sb_icount 704, counted 896
sb_ifree 31, counted 219
sb_fdblocks 247947380, counted 247685516
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
entry "" in shortform directory 96 references invalid inode 0
entry #0 is zero length in shortform dir 96, junking 5 entries
corrected entry count in directory 96, was 5, now 0
corrected i8 count in directory 96, was 3, now 1
corrected directory 96 size, was 65, now 10
bogus .. inode number (412316860416) in directory inode 96, clearing inode number
xfs_repair: dir2.c:1419: process_dir2: Assertion `(ino != mp->m_sb.sb_rootino && ino != *parent) || (ino == mp->m_sb.sb_rootino && (ino == *parent || need_root_dotdot == 1))' failed.
Aborted
JorgeB Posted February 27, 2018

Doesn't look very good. Does it go further if you run it a second time?
Twisted Posted February 27, 2018

It gives me the same result immediately.
JorgeB Posted February 27, 2018

Are you sure you didn't swap parity with disk1 when you did the new config? Do you have any prior diags or a flash backup saved somewhere? Since you have an odd number of XFS data drives, parity would have what looks like a valid XFS filesystem at first.
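The point above is a property of XOR parity. Parity is the byte-wise XOR of the data disks, and every XFS filesystem begins with the same 4-byte magic number, so with an odd number of XFS data drives those identical magic bytes XOR back to themselves and the parity disk's first sector looks like a valid XFS superblock. A minimal sketch of the effect (illustrative Python, not unRAID code):

```python
from functools import reduce
from operator import xor

def parity_bytes(disks):
    # Single parity, unRAID-style: XOR corresponding bytes across all data disks.
    return bytes(reduce(xor, column) for column in zip(*disks))

# Every XFS filesystem starts with the same magic number, "XFSB".
xfs_magic = b"XFSB"

# Odd number of data disks: pairs cancel, one copy survives, so parity
# itself appears to start with a valid XFS signature.
print(parity_bytes([xfs_magic] * 3))  # b'XFSB'

# Even number of data disks: everything cancels to zeroes.
print(parity_bytes([xfs_magic] * 2))  # b'\x00\x00\x00\x00'
```

This is why a swapped parity disk can pass a first-glance filesystem check and then fail catastrophically deeper in, as in the xfs_repair output earlier in the thread.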
Twisted Posted February 27, 2018

I didn't notice that after I added drives and removed one from the array, it changed the drive letters. So I swapped the main drive and the parity drive, and it started a process for about 15 hours, then gave the message below. The drive still says Unmountable: no file system. Is there a next step?

Phase 1 - find and verify superblock...
bad primary superblock - inconsistent filesystem geometry information !!!

attempting to find secondary superblock...
..............................................
Sorry, could not find valid secondary superblock
Exiting now.
Twisted Posted March 2, 2018

Does anyone have any suggestions? Should I try to plug the drive into a Windows machine and use recovery software?
trurl Posted March 2, 2018

If you have overwritten a data drive with parity I'd say it's hopeless. Do you have backups?
Twisted Posted March 2, 2018

I just don't know how I did this. I must have missed checking the "Parity is already valid" box. If I did, it only happened for 1 to 2 seconds, as I stopped everything as soon as I noticed it said unmountable: no file system. I do not have any backups.
trurl Posted March 2, 2018

I guess you've got nothing to lose by trying recovery software. You simply must have a backup plan. Even if you don't back up everything, you must decide what to back up, do it, and keep doing it. There are lots of ways to lose data without an actual disk failure, as you have seen. unRAID parity won't help even with something as simple as an accidental file deletion, much less something like this.
BobPhoenix Posted March 2, 2018

Wish the recovery tools for XFS and BTRFS were as good as ReiserFS's. I had a similar thing happen to me on unRAID 4.7. I had a full 2TB cache drive that I put in as parity and started a parity build, but stopped it after 5 minutes. I got back all but 200GB of data off it.
Twisted Posted March 2, 2018

Any recommendations on software?
trurl Posted March 2, 2018

I think UFS Explorer has been mentioned a number of times.
JorgeB Posted March 2, 2018

3 hours ago, trurl said:
UFS Explorer

Yes, it has been used successfully before in similar situations.
Twisted Posted March 2, 2018

Both drives are empty... I just don't get what I did, but it was catastrophic.
trurl Posted March 2, 2018

5 hours ago, Twisted said:
Both Drives are empty....I just don't get what I did, but it was catastrophic.

I don't know since you don't, but it probably started with this:

On 2/27/2018 at 4:08 PM, Twisted said:
I didn't notice after I added drives and removed one from the array, it changed the drive letters. So I swapped the main drive and the parity drive and it started a process for about 15 hours

The drive letters aren't usually very useful. They can be assigned differently at each boot depending on the order the disks respond. unRAID keeps track of drive assignments (the drive numbers) by serial number of the disks.
Twisted Posted March 7, 2018

Has anyone put in an enhancement request for a drive eject feature similar to Unassigned Devices? If you remove a drive that you have pulled the data off of, it would be nice to hit an X next to the drive and have unRAID stop looking for it. I didn't see anything like this requested previously, but wanted to check before I add it.
itimpi Posted March 7, 2018

15 minutes ago, Twisted said:
Has anyone put in an enhancement for a drive eject feature similar to Unassigned Devices? If you remove a drive that you have pulled the data off of, it would be nice to hit an X next to the drive and unRAID stops looking for it. I didn't see anything like this requested previously but wanted to check before I add it.

This is not as easy as it sounds! The problem is that a drive cannot be removed without invalidating parity unless it is all zeroes, and simply removing the files from a disk does not result in it being set to zeroes. There have been requests for an automated way to remove a disk without losing parity protection, by stopping writes to it and updating parity to make it consistent with the drive being all zeroes (and thus safe to remove), but so far nothing has materialised.
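The "all zeroes" requirement falls straight out of how XOR parity works: a zeroed disk contributes nothing to the XOR, so it can disappear without changing parity, while a disk that still holds data cannot. A toy illustration with made-up byte values (not unRAID code):

```python
from functools import reduce
from operator import xor

def parity_of(disks):
    # Single parity: XOR of corresponding bytes across all data disks.
    return bytes(reduce(xor, col) for col in zip(*disks))

disks = [b"\xaa", b"\x5c", b"\x00"]  # third disk is already all zeroes
parity = parity_of(disks)

# The all-zero disk contributes nothing: dropping it leaves parity valid.
assert parity == parity_of(disks[:2])

# Dropping a disk that still holds data invalidates the stored parity.
assert parity != parity_of([disks[0], disks[2]])
```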
trurl Posted March 7, 2018

8 hours ago, itimpi said:
This is not as easy as it sounds! The problem is that a drive cannot be removed without invalidating parity unless it is all zeroes and simply removing the files from a disk does not result in it being set to zeroes. There have been requests for providing an automated way to handle removing a disk without losing parity protection by stopping writing to it and updating parity to make it consistent with the drive being all zeroes (and thus safe to remove) but so far nothing has materialised.

There was a script to zero the disk while in the array, so parity remained consistent with eventual removal of the drive. The array would be protected throughout, even if the process was interrupted for some reason.

Having unRAID pretend the drive is zeroed while it updates parity is essentially a parity rebuild, so it doesn't seem much different from the usual approach of just removing the disk and rebuilding parity.
bonienl Posted March 7, 2018

2 minutes ago, trurl said:
Having unRAID pretend the drive is zeroed while it updates parity is essentially a parity rebuild

A small nuance: a true parity rebuild involves all disks in the array, while zeroing a single disk just updates the parity disk(s). The outcome is the same though.
trurl Posted March 7, 2018

3 minutes ago, bonienl said:
A small nuance. A true parity rebuild involves all disks in the array, while zeroing a single disk just updates the parity disk(s). The outcome is the same though.

OK, I see what you mean. It would read only parity and the disk to be removed, and update parity as if it were writing zeroes, just like a "normal" write; a parity rebuild would be more like "turbo" write. But my point was more about what would happen if it were interrupted for some reason. Then parity would be out of sync with the array and would have to be rebuilt (and the array unprotected until done), whereas with the zeroing script really zeroing the disk, everything stays in sync throughout.
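The in-array zeroing being discussed can be sketched abstractly: each write of zeroes is an ordinary read-modify-write, updating parity as P' = P ^ old_data ^ 0, so only parity and the disk being removed are touched, and parity stays consistent with the array at every step. Illustrative Python, not the actual script:

```python
from functools import reduce
from operator import xor

def xor_blocks(a, b):
    # Byte-wise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical 3-disk array with single parity (made-up contents).
disks = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]
parity = reduce(xor_blocks, disks)

# Zero disk 1 the in-array way: writing zeroes updates parity as
# P' = P ^ old ^ new, and new is all zeroes, so P' = P ^ old.
idx = 1
parity = xor_blocks(parity, disks[idx])
disks[idx] = bytes(len(disks[idx]))  # the disk is now all zeroes

# Parity is still correct for the whole array at this point...
assert parity == reduce(xor_blocks, disks)

# ...and the zeroed disk contributes nothing, so it can simply be
# unassigned afterwards without a parity rebuild.
remaining = [d for i, d in enumerate(disks) if i != idx]
assert parity == reduce(xor_blocks, remaining)
```

Because both assertions hold after every individual block written, an interruption mid-zeroing leaves parity valid for the array as it stands, which is the protection the script-based approach preserves.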