
Unmountable: No file system


Twisted

Recommended Posts

I removed an empty drive from the array that used to have some data on it, as I was going to send it in for service. I had moved all of its files over to another drive first, then followed the instructions below.
 
After doing this, my main data drive now reads "Unmountable: No file system". Is there any way to get my data back?
 
unRAID Drive Removal
Stop the array by pressing "Stop" on the management interface. ...
1. Select the 'Utils' tab.
2. Choose "New Config".
3. Agree and create a new config.
4. Reassign all of the drives you wish to keep in the array.
5. Start the array and let parity rebuild.
Link to comment

Looks like you must have been following instructions for an old version of unRAID, since V6 doesn't have a "Utils" tab. Are you using V6?

 

I hope you haven't accidentally assigned the wrong disk to parity and overwritten your data.

 

You really should ask on the forum before doing anything you are unsure about.

 

Link to comment

Here were the results:

 

root@Area51:~# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 1482304 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 15893 tail block 15893
        - scan filesystem freespace and inode maps...
finobt ir_freecount/free mismatch, inode chunk 0/1246223264, freecount 52 nfree 54
agi_freecount 52, counted 54 in ag 0
sb_icount 704, counted 896
sb_ifree 31, counted 219
sb_fdblocks 247947380, counted 247685516
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
entry "" in shortform directory 96 references invalid inode 0
entry #0 is zero length in shortform dir 96, junking 5 entries
corrected entry count in directory 96, was 5, now 0
corrected i8 count in directory 96, was 3, now 1
corrected directory 96 size, was 65, now 10
bogus .. inode number (412316860416) in directory inode 96, clearing inode number
xfs_repair: dir2.c:1419: process_dir2: Assertion `(ino != mp->m_sb.sb_rootino && ino != *parent) || (ino == mp->m_sb.sb_rootino && (ino == *parent || need_root_dotdot == 1))' failed.
Aborted

Link to comment

I didn't notice that after I added drives and removed one from the array, it changed the drive letters. So I swapped the main drive and the parity drive, and it started a process that ran for about 15 hours and then produced the message below. The drive still says "Unmountable: No file system". Is there a next step?

 

Phase 1 - find and verify superblock...
bad primary superblock - inconsistent filesystem geometry information !!!

attempting to find secondary superblock...
..............................................

Sorry, could not find valid secondary superblock
Exiting now.

Link to comment

I just don't know how I did this. I must have missed checking the "Parity is already valid" box. If I did, it only ran for 1 to 2 seconds, as I stopped everything as soon as I noticed it said "Unmountable: No file system". I do not have any backups.

Link to comment

I guess you've got nothing to lose by trying recovery software.
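
Something like TestDisk/PhotoRec might be worth a try. Purely as a hedged sketch (the device name and output folder below are placeholders, and the tools aren't part of stock unRAID, so you'd have to install them yourself):

# Hypothetical example: /dev/sdX is the affected drive; recovered files go
# to a folder on a DIFFERENT disk. PhotoRec still walks you through an
# interactive menu after this to pick the partition and file types.
photorec /log /d /mnt/disk2/recovered/ /dev/sdX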

 

You simply must have a backup plan. Even if you don't back up everything, you must decide what to back up, do it, and keep doing it.

 

There are lots of ways to lose data without an actual disk failure, as you have seen. unRAID parity won't help even with something as simple as an accidental file deletion, much less something like this.
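
Even a simple recurring rsync of the shares you actually care about to another machine is far better than nothing. A rough sketch, with the share name and destination as placeholders:

# Mirror one share to a backup host. --delete propagates deletions too,
# so pair it with some form of versioning if you also want protection
# against accidental deletes.
rsync -av --delete /mnt/user/Important/ backupserver:/backups/Important/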

Link to comment

Wish the recovery tools for XFS and BTRFS were as good as the ones for ReiserFS. I had a similar thing happen to me on unRAID 4.7: I had a full 2TB cache drive that I put in as parity and started a parity build, but stopped it after 5 minutes. I got back all but 200GB of the data off it.

Link to comment
5 hours ago, Twisted said:

Both Drives are empty....I just don't get what I did, but it was catastrophic.

 

I don't know since you don't, but it probably started with this:

 

On 2/27/2018 at 4:08 PM, Twisted said:

I didn't notice that after I added drives and removed one from the array, it changed the drive letters. So I swapped the main drive and the parity drive, and it started a process that ran for about 15 hours

 

The drive letters aren't usually very useful. They can be assigned differently at each boot depending on the order the disks respond. unRAID keeps track of drive assignments (the drive numbers) by serial number of the disks.
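
If you ever want to check which sdX letter currently points at which physical drive, a couple of read-only commands show the model and serial for each device (just an illustration, nothing unRAID-specific):

ls -l /dev/disk/by-id/ | grep -v part    # udev links include model and serial
lsblk -o NAME,SIZE,MODEL,SERIAL          # same information in table form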

Link to comment

Has anyone put in an enhancement request for a drive eject feature similar to Unassigned Devices? If you remove a drive that you have pulled the data off of, it would be nice to hit an X next to the drive and have unRAID stop looking for it. I didn't see anything like this requested previously, but wanted to check before I add one.

Link to comment
15 minutes ago, Twisted said:

Has anyone put in an enhancement request for a drive eject feature similar to Unassigned Devices? If you remove a drive that you have pulled the data off of, it would be nice to hit an X next to the drive and have unRAID stop looking for it. I didn't see anything like this requested previously, but wanted to check before I add one.

This is not as easy as it sounds! The problem is that a drive cannot be removed without invalidating parity unless it is all zeroes, and simply removing the files from a disk does not result in it being set to zeroes. There have been requests for an automated way to remove a disk without losing parity protection, by stopping writes to it and updating parity to make it consistent with the drive being all zeroes (and thus safe to remove), but so far nothing has materialised.
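
For anyone wondering why "all zeroes" is the magic condition: with single parity, the parity disk is just the XOR of every data disk, and XOR with zero changes nothing, so a fully zeroed disk can drop out of the array without the parity disk needing to change. A toy illustration with made-up single-byte values:

# Parity is the XOR of the data disks. If disk3 is all zeroes, the parity
# over (disk1, disk2, disk3) equals the parity over (disk1, disk2) alone,
# so disk3 can be removed without touching parity.
disk1=0x3C; disk2=0xA5; disk3=0x00
printf 'parity with disk3:    0x%02X\n' $(( disk1 ^ disk2 ^ disk3 ))
printf 'parity without disk3: 0x%02X\n' $(( disk1 ^ disk2 ))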

Link to comment
8 hours ago, itimpi said:

This is not as easy as it sounds! The problem is that a drive cannot be removed without invalidating parity unless it is all zeroes, and simply removing the files from a disk does not result in it being set to zeroes. There have been requests for an automated way to remove a disk without losing parity protection, by stopping writes to it and updating parity to make it consistent with the drive being all zeroes (and thus safe to remove), but so far nothing has materialised.

There was a script to zero the disk while it was still in the array, so parity remained consistent with the eventual removal of the drive. The array would be protected throughout, even if the process was interrupted for some reason.
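
In essence it just wrote zeros through the md device, so every zeroed block updated parity as it went. A stripped-down sketch of the idea only, not the actual community script (which adds safety checks); the disk number is a placeholder and this destroys everything on that disk:

# DANGER: wipes the data disk. Writing through /dev/mdN rather than /dev/sdX
# keeps parity updated for every block written, so the array stays in sync.
dd if=/dev/zero of=/dev/md3 bs=1M status=progress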

 

Having unRAID pretend the drive is zeroed while it updates parity is essentially a parity rebuild, so it doesn't seem much different from the usual approach of just removing the disk and rebuilding parity.

 

Link to comment
2 minutes ago, trurl said:

Having unRAID pretend the drive is zeroed while it updates parity is essentially a parity rebuild

 

A small nuance. A true parity rebuild involves all disks in the array, while zeroing a single disk just updates the parity disk(s). The outcome is the same though.

 

Link to comment
3 minutes ago, bonienl said:

 

A small nuance. A true parity rebuild involves all disks in the array, while zeroing a single disk just updates the parity disk(s). The outcome is the same though.

 

OK, I see what you mean. It would read only parity and the disk to be removed, and update parity as if it were writing zero, just like the "normal" write. Parity rebuild would be more like "turbo" write.
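
Per byte, the "normal" write only needs the old data byte and the old parity byte: new parity = old parity XOR old data XOR new data, and during zeroing the new data is always zero. With made-up values:

# Read-modify-write parity update for one byte being overwritten with zero
# (illustrative values only).
old_data=0x5C; old_parity=0xA7; new_data=0x00
printf 'new parity byte: 0x%02X\n' $(( old_parity ^ old_data ^ new_data ))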

 

But my point was more about what would happen if it were interrupted for some reason. Then parity would be out of sync with the array and would have to be rebuilt (and array unprotected until done), whereas with the zeroing script really zeroing the disk, everything is in sync throughout.

Link to comment

Archived

This topic is now archived and is closed to further replies.
