btrfs replacement drive has no data after restore



Hey all,

 

TL;DR: upgraded a drive from 2TB to 4TB, formatted the 4TB as btrfs, and after a 12-hour parity rebuild there's no data.

 

All, I replaced a 2TB drive 2 days ago with a 4TB.  I am still running most of my drives as reiserfs, but I decided to go to something a little different for my replacement drive.  I stopped the array, shut down the server, replaced the drive, and started back up.  Before starting the array, I formatted the 4TB as btrfs, added it to the array, and rebuilt the drive.  The restore took approximately 12 hours.  I noticed that the drive had 32,854 reads and 10,399,343 writes.  With that many writes I assumed it had restored, but it is showing 4TB free, and browsing to the drive gives me only 1 file that was added last night.

 

I don't see (as of yet) that any of my data is missing, but it seems really odd to rebuild a drive and have nothing restored to it.  I still have the old 2TB, so I guess I can plug it into another machine and compare the files to see if any went missing.  This just seemed strange enough that someone has probably seen it before.

 

Any ideas?


I'm not completely clear on what you did, as the steps you outlined don't quite work.  How did you format the drive if the array wasn't started?  Or did you mean that you changed the file system setting for the drive, which would cause it to use that setting the next time it was formatted?

 

It sounds to me as if you may have somehow formatted the emulated image of the drive, which would have wiped out all data.  Then when the new 4TB was added, it wrote the updated image (of an empty, freshly formatted drive) to the 4TB.  As you know, a drive rebuild is not a restore but a write of the entire current emulated drive image.  You would normally never want to format a drive AND rebuild it at the same time; they are essentially mutually exclusive operations, and it's normally not possible to do both at once.  Can you clarify exactly what steps you took?


I am afraid that an empty drive is exactly what I would expect from the sequence of actions you performed.    You CANNOT change the disk format as part of a rebuild.    The format that you issued wrote an empty file system to the "emulated" drive, which you then proceeded to rebuild onto the new drive.

 

The key point is that parity knows nothing about the data on any particular drive.    All it knows how to do is restore the same sectors onto another drive; it is not aware of what the contents of those sectors are being used for.  A rebuilt drive therefore always has exactly the same content as the drive that was being emulated, and the same file system type.  In fact, if the emulated drive had file system corruption, then the rebuilt drive will have exactly the same corruption.
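A minimal sketch of why this happens, using simple XOR parity over three tiny data "disks" (byte-sized sectors here for illustration; this is an assumption about the general principle, not unRAID's actual on-disk layout):

```python
from functools import reduce

# Three data disks, each modeled as a list of sector bytes.
disk1 = [0xAA, 0xBB, 0xCC]
disk2 = [0x11, 0x22, 0x33]
disk3 = [0xDE, 0xAD, 0xBE]   # the disk being replaced

# Parity stores the XOR of every data disk, sector by sector.
parity = [reduce(lambda a, b: a ^ b, s) for s in zip(disk1, disk2, disk3)]

# With disk3 missing, the array "emulates" it from parity + the other disks.
emulated = [p ^ d1 ^ d2 for p, d1, d2 in zip(parity, disk1, disk2)]
assert emulated == disk3   # an exact sector copy -- parity never sees "files"

# Formatting the emulated disk writes empty-filesystem sectors to it,
# and parity is updated to reflect those writes.
empty_fs = [0x00, 0x00, 0x00]   # stand-in for a fresh, empty filesystem image
parity = [p ^ old ^ new for p, old, new in zip(parity, emulated, empty_fs)]

# The rebuild then reconstructs the CURRENT emulated content --
# the empty filesystem -- not the old data.
rebuilt = [p ^ d1 ^ d2 for p, d1, d2 in zip(parity, disk1, disk2)]
print(rebuilt)   # [0, 0, 0] -- sectors of an empty filesystem; old data gone
```

The rebuild "succeeds" with millions of writes because it faithfully copies every sector of the emulated image, which by that point is an empty filesystem.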


The steps I used to create this fun situation:

After I stopped the array and shut down the server, I replaced the 2TB drive with the 4TB and started the server back up.

When the interface came back up, I clicked on Main, and it had the list of my drives with Disk 3 missing. The array was stopped.

I clicked on the pull down and selected my new 4TB drive for the missing Disk 3.

I then clicked on the link for Disk 3, changed the File System Type to btrfs, clicked Apply, and then clicked Done.

This took me back to the list of drives with the new drive selected for Disk 3.

I then started the array. 

The drive came up with "Unmountable" where it usually says the disk space.

Below was the Format button with the new drive SN by it. 

I formatted the drive, and right after the format it began a rebuild (I thought).  It looked exactly like the rebuilds I have done in the past under reiserfs, with the exception that the disk space was not adjusting as it usually does.

The process finished up in about 10 hours with a lot of writes, but no data written to the drive.

 

At this point I am looking for some next steps.  I still have the old drive, so I could hook it up to my Linux machine and copy the contents directly to the new Disk 3 (not to the share as a whole; I hear that is a bad thing).  Or should I reformat the current Disk 3 and put reiserfs back on?  Would that cause a restore of the previous data?

 

I run 2 NASs so I am not really worried about losing the data, so I can try several things.


...or should I reformat the current Disk 3 and put reiserfs back on?  Would that cause a restore of the previous data?

 

No.  If you didn't have the old disk, reiserfsck could probably recover most of the data, but the best way forward is to copy the data from the old disk, either over the LAN using another computer or, if you have available ports, using the Unassigned Devices plugin on your server.
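A sketch of the copy step.  The real targets would be the old reiserfs disk (mounted read-only) and the single-disk mount `/mnt/disk3` on the server, not the user share; temp directories stand in for those mount points here so the example is self-contained:

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for the real mount points.  On the actual server the destination
# would be the disk mount (e.g. /mnt/disk3), NOT the /mnt/user share --
# copying to the share as a whole is the "bad thing" mentioned above.
old_disk = Path(tempfile.mkdtemp())   # stand-in for the old 2TB disk
disk3 = Path(tempfile.mkdtemp())      # stand-in for the new Disk 3 mount

# Fake some content on the old disk for the demo.
(old_disk / "Movies").mkdir()
(old_disk / "Movies" / "film.mkv").write_bytes(b"data")

# Copy the whole tree, preserving the directory layout.
shutil.copytree(old_disk, disk3, dirs_exist_ok=True)

print(sorted(p.name for p in disk3.rglob("*")))   # ['Movies', 'film.mkv']
```

In practice a tool like rsync over the LAN does the same job with resumability and progress reporting; the point is that the copy targets one specific disk so unRAID's share allocation logic doesn't interfere.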

 

P.S.: most consider XFS the preferred file system for v6; btrfs is not considered mature enough and is mostly used for cache pools.



The mistake you made was to change the file system type and then select the Format option.  Format is NEVER part of a disk replacement or rebuild.  Any time you use Format you will end up with no data on the drive being formatted, since the Format command is an instruction to create an empty file system.  In this case the writes that created the empty file system were reflected in the parity drive.    If you had not done the format, then you could have changed the file system back to the original and your data would have been intact.

 

As you still have the original disk, I would think the easiest way forward is to hook it up and copy its contents back to the unRAID server.

