dasx86

Strategy to recover deleted files, XFS


Hey all,

 

Using Krusader, I moved files from a user share (an array of spinning disks using XFS) to an unassigned device (M2 SSD via a PCIE adapter, formatted with XFS).  Unfortunately, that seems to have grenaded the files: I no longer see them at the source nor at the destination.  It seems Krusader thought the move went smoothly and removed the files from the source, but they never actually landed at the destination.

 

Not a big deal that the files are gone (I'd have had them backed up better if it were); however, I'm trying to come up with a strategy to recover them if possible.  I stopped the array immediately upon realizing what had happened.  I know I did not initiate any other write activity and had no other plugins/dockers running, so any further writes should be minimal.

 

 

Current strategies to recover:

1. Stick the disks I believe had the data in a Windows box and scan using the various tools created by SysDev Laboratories (Googling "xfs recovery" turns up at least 3 differently named software packages from this company).  I've scanned one of the disks and I'm getting partial results, though the folder structure is a bit garbled.  Continuing this approach now.

 

Another thought:

1. Can I mount the array in a read-only mode to browse it without risk of writing to it?  I may have moved these files to a different destination than I thought (I really don't think so, but human error is possible).  Is "maintenance" mode helpful here?  I can go through the drives one by one to accomplish my goal, but perhaps there's a straightforward approach to this in Unraid.
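For anyone searching later: outside of Unraid specifics, an XFS partition can be mounted read-only from the command line so nothing gets written while you browse. A sketch, assuming root access; /dev/sdX1 and the mount point are placeholders, not real device names:

```shell
# Mount an XFS partition read-only. 'norecovery' skips journal replay,
# so even a dirty log won't cause writes to the disk.
mkdir -p /mnt/recovery
mount -t xfs -o ro,norecovery /dev/sdX1 /mnt/recovery

# Browse the contents without any risk of modification:
ls -la /mnt/recovery

# Detach when finished:
umount /mnt/recovery
```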

 

Questions on what happened:

1. At the time of the move, the path to the unassigned devices ("disks" folder) was mounted as RW, and not RW/slave as they should have been.  If Krusader could see the unassigned devices at the time, could this have contributed?

2. Issues with M2 -> PCIE devices?  Though I'm unable to find other instances of this problem, despite finding posts where people are mounting devices in this manner.

 

 

Any thoughts on the strategy to recover, the possibility of a read-only array, or speculation on how this happened are appreciated!


7 minutes ago, dasx86 said:

I may have moved these files to a different destination than I thought (really don't think so, but human error is possible.)

 

Without further details I think user error is the most likely explanation for what happened.

 

8 minutes ago, dasx86 said:

1. Stick the disks I believe had the data in a Windows box and scan using the various tools created by SysDev Laboratories (Googling "xfs recovery" comes up with at least 3 differently named softwares from this company.)  I've scanned one of the disks and I'm getting partial results, though the folder structure is a bit garbled.  Continuing this approach now

 

If you put the disks in Windows you have probably invalidated parity. Normally I would strongly recommend not putting a disk in another machine, but at this point I'm not sure there is any point in trying to recover in unRAID. Parity wouldn't have helped anyway, nor would filesystem repair.

 

 

2 minutes ago, trurl said:

Without further details I think user error is the most likely explanation for what happened.

 

I sure hope so!  Occam's razor....

 

2 minutes ago, trurl said:

If you put the disks in Windows you have probably invalidated parity. Normally I would strongly recommend not putting a disk in another machine, but at this point I'm not sure there is any point in trying to recover in unRAID. Parity wouldn't have helped anyway, nor would filesystem repair.

 

Yep, nothing to do with parity or filesystem issues.  I didn't think Windows would write anything to it, without support for XFS.  Assumption on my part.  If parity is invalidated, so be it....


I plan to keep this thread updated to help anyone out who might search in the future. 

 

After scanning both the source and destination drives with many different recovery programs, I say with confidence that the files did indeed blow up.  (Scanning all other disks and volumes came up with nothing.  99% sure I did not move them elsewhere.)

 

After painfully trying many different programs, I was able to retrieve from the source drive the couple files I was concerned about along with many more of less importance.

 

I had success this morning with Recovery Explorer Standard version 6.16.2.  I didn't have luck with any of the other software from SysDev Labs, which seems to rename its software with every major version.  Earlier versions (UFS Explorer and Raise Data Recovery) were not able to process the filesystem as completely, or with as many correct filenames, as the latest version.  As for other vendors, I had no success with DiskDoctor's XFS Data Recovery, nor with another program called ReclaiMe.  If I remember right, XFS Data Recovery was unable to read the filesystem at all, even the normal existing files.  ReclaiMe froze up at about 10% into the scan and seemed to be running suspiciously fast for the size of the drive.  YMMV with any of these.

 

At the moment, I'm running a parity check WITHOUT correcting parity errors, to see if hooking an XFS drive that is part of the storage pool to a Windows box for recovery would cause any writes to throw parity off.  If it did, fine, I'll rebuild parity, no big deal.

 

EDIT: Can confirm that hooking the array's XFS drives up to Windows for scanning did NOT throw off parity whatsoever.  YMMV

 

The next step will be to replicate my behavior (to the best of my human extent) that I believe caused this - moving files from a user share in an unRaid array to an M2 SSD via PCIE that is outside of the array, from within a Krusader docker.  I will try two ways: the initial way, when I had /mnt/disks mounted in RW mode, and then in RW/Slave mode.  Will report back.

 

In the meantime kids, eat your vegetables, floss your teeth, and back up your files.



How successful, percentage-wise, was Recovery Explorer Standard? I ask because I too had problems with file recovery (I deleted them stupidly) and only had partial success in retrieval using UFS Explorer. I still have the drive, uncorrupted any further, so I am wondering if it's worth a second go at recovery...



I don't think I could give you a percentage breakdown, but I can give you a rough idea of my situation.

 

Root directory files, all recoverable

9GB MSSQL backup file

800MB MSSQL backup file

10-15 files under 1MB: some PDFs, a couple zip files, other random

 

Intact subdirectories, all recoverable

1 folder with ~6 SQL files (text)

1 folder with ~10 SQL files (text)

 

Not intact subdirectories

An extracted ISO of MSSQL 2014 SP1, which has tons of files in many directories and subdirectories.

 

 

Based on the size, I'd guess that folder had about half of the contents of the extracted ISO.  This is an ISO with thousands of files and subdirectories going many levels deep.  I can only begin to speculate how these subdirectories broke out.  However, I could see missing content by going to the area of the recovery that is a "lost and found" of sorts, where the folder names and structure were lost.  Top-level folders in the "lost and found" would have names like "Directory$4928", but the names of subfolders in that directory would be fine.  There was no way of telling where a certain folder should have been located, or whether the structure within it was complete.

 

Of course, I didn't care about recovering that directory at all so no harm done.

 

Not sure if that makes sense; don't have this in front of me.  But bottom line, if you're looking for specific files like I was, I would give it a shot.  Downloading and running a scan is free, just costs money once you try to recover files over 250KB.

 

I'll add more in here later.  Depending on how motivated I feel, I might start a new thread and write up a comparison of the programs I used during my recovery attempts.  Not that any of it has anything to do with unRaid at all, nor am I qualified to speak authoritatively about XFS or any other filesystem, but it isn't the first thread created on this subject here and may be a useful reference.



Sounds very similar to ufs explorer in that case. Worth a try though using the free version, see how it copes. Thanks!


14 minutes ago, superloopy1 said:

Sounds very similar to ufs explorer in that case. Worth a try though using the free version, see how it copes. Thanks!

 

 

Yeah, I didn't expect the latest version to have a significantly different engine or abilities than the prior version, but it really made a difference for me.


I have been able to replicate the behavior that caused me to lose files in the first place.  Can someone explain what's happening here?  I am not well versed in mount points, etc....

 

Latest version of the Krusader docker by binhex.  I have container path /media pointing to /mnt, and I've set up container path /UNASSIGNED to point to /mnt/disks.

 

Left is /media/disks, and right is /UNASSIGNED

 

K3r7XnE.png

 

After diving into the Samsung SSD (fourth on the list) on both paths, here's what I see:

 

uVonlG0.png

 

I've copied files into both of these paths for testing.  The path on the right is what I can see exposed via SMB, and is what seems to be correct.  Also, notice the drive stats at the top - the full drive capacity is shown on the right, while the left path still shows ~63GB.

 

Note: if I dive into ANY of the unassigned devices via /media/disks on the left, I cannot see the contents of ANY device.

 

I'm fairly confident that if I were to restart my server, the files I've copied to /media/disks/*samsung ssd* would vanish into the ether, just like the situation that caused me to create this thread to begin with.

 

1) Why is /media/disks/ seemingly not actually pointing to the drives, while /UNASSIGNED is?  They're referencing the same paths on the host

2) Where the hell is the file that I've copied (on the left) actually going?  This is the black hole that ate my files the first time

 

Thanks 

 

edit: fixed cases on mount points



Not quite sure what you did wrong from your explanation.  However, do you realise that under Linux file/folder names are case-sensitive?  In other words, /MNT is not the same as /mnt.  Since shares try to replicate Windows' case-insensitivity, if you have two paths that differ only in case at the Linux level, only one of them will show up at the share level (and the other's content will effectively be hidden).  If you really have used upper case for the paths, as your text seems to suggest, this could explain your problems.
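The case-sensitivity pitfall is easy to demonstrate in a throwaway directory (the paths below are illustrative, not real mount points):

```shell
# Linux treats names differing only in case as distinct entries,
# so MNT and mnt can coexist as two unrelated directories.
tmp=$(mktemp -d)
mkdir "$tmp/MNT" "$tmp/mnt"          # both succeed
touch "$tmp/MNT/file-a" "$tmp/mnt/file-b"
ls "$tmp"                            # lists both MNT and mnt
rm -rf "$tmp"
```

Over SMB, which is case-insensitive by default, only one of those two directories would be visible.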


Actually, you can see on the left side that it's pointing to rootfs, which is in memory, rather than the partition on disk which is XFS.

 

So why is the /mnt/disks path pointing to rootfs?

 

Certainly explains why I lost my files the first time - they were just hanging out in RAM!


Anything that is not under /mnt/disk? (note the lower case; the ? stands for the disk number) for array disks, or under /mnt/disks for Unassigned Devices, is in RAM.
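A quick way to check whether a given path is backed by a real disk or by RAM is to ask df which device provides it. A sketch; the helper name is my own, and the /mnt/disks line is what you would run on the server:

```shell
# Print the source device and filesystem type backing a path.
# A real disk shows something like "/dev/sdb1 xfs"; a RAM-backed
# path shows "rootfs" or "tmpfs" instead.
backing_fs() {
    df --output=source,fstype "$1" | tail -n 1
}

backing_fs /                # whatever backs the root filesystem
# backing_fs /mnt/disks     # on the server: expect a /dev/* device, not rootfs
```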


Any reason you can think of that browsing to /mnt/disks (unassigned devices) in Krusader would point to rootfs?


2 minutes ago, dasx86 said:

Any reason you can think of that browsing to /mnt/disks (unassigned devices) in Krusader would point to rootfs?


Because the mount points are in RAM - you need to step into one of the mounted volumes to enter an actual disk volume.

1 minute ago, pwm said:


Because the mount points are in RAM - you need to step into one of the mounted volumes to enter an actual disk volume.

 

Please look at the path on the left side in the bottom picture; it still shows as rootfs even after seemingly diving into the disk volume...


1 hour ago, dasx86 said:

I have been able to replicate the behavior that caused me to lose files in the first place.  Can someone explain what's happening here?  [...]  I have container path /MEDIA pointing to /MNT, and I've set up container path /UNASSIGNED to point to /MNT/DISKS.  [...]

1) Why is /media/disks/ seemingly not actually pointing to the drives, while /UNASSIGNED is?  They're referencing the same paths on the host

2) Where the hell is the file that I've copied (on the left) actually going?  This is the black hole that ate my files the first time


You should be careful when playing with mappings - and when creating directories.

Avoid creating aliases - and always be very careful with upper/lower case.

 

Scenario 1:

Your /media points to /mnt

Your /UNASSIGNED points to /mnt/disks

 

So /media/disks/SAMSUNG_xxx is /mnt/disks/SAMSUNG_xxx

and /UNASSIGNED/SAMSUNG_xxx is /mnt/disks/SAMSUNG_xxx

 

It's very dangerous to create this kind of alias, because programs will believe it's two different locations when it's really the same one.

So it's a great way for a program to create a file on one side that instantly overwrites the file on the other side.

Same thing can happen when copying between user shares and disk shares.

 

Scenario 2:

Since we don't know the actual case you have been using (you sometimes write in upper case and sometimes in lower case), you might also have created a directory tree under /MNT/ which is completely different from /mnt/ and has absolutely nothing to do with mounted disks. So you might have created RAM-based directories with the same names as the mount points but with non-persistent file storage, since content copied into the RAM-based filesystem will not survive a reboot.

 

It's quite probable that it's the second alternative here, since you write:

2 hours ago, dasx86 said:

Note: if I dive into ANY of the unassigned devices via /media/disks on the left, I cannot see the contents of ANY device.

 

It really is imperative that you get upper/lower-case names correct if you want to avoid issues, both from unexpected "mount" points and from SMB not being able to show multiple entries that differ only in case.

 

 

When it comes to free space, the program doesn't know that it crosses disk boundaries when stepping down into subdirectories.
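The alias danger described above can be checked mechanically: two paths name the same filesystem object exactly when they share a device and inode number. A sketch (the helper name is mine; the demo uses a hard link, but bind-mounted aliases like /media/disks and /UNASSIGNED would compare the same way):

```shell
# Two paths refer to the same object iff device:inode match.
same_object() {
    [ "$(stat -c '%d:%i' "$1")" = "$(stat -c '%d:%i' "$2")" ]
}

tmp=$(mktemp -d)
touch "$tmp/original"
ln "$tmp/original" "$tmp/alias"      # second name for the same inode

same_object "$tmp/original" "$tmp/alias" && echo "same file"
rm -rf "$tmp"
```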


Thanks for taking the time to write such a detailed reply, pwm.  I've edited my earlier posts to accurately convey mappings as they are set up.

 

Here's the relevant part of the docker config:

 

E1rlCpF.png

 

Scenario 1: Point well taken.  I do understand that having multiple mappings opens up the kinds of problems one can have when copying between disk shares and user shares (don't do it!).  As I'm only using this docker to move files from user share to user share, or from unassigned device to user share, I don't think I'm opening myself up to that issue in this case.

 

Scenario 2: I don't believe I've set myself up for issues here with my behavior so far, but understood how it would be easy to.  I will ponder this part of your response again in the morning after a full night's sleep :)

 

 

 

At the end of the day, I'll probably switch the container path "/media" to point to host path "/mnt/user", and only access unassigned devices through the container path "/UNASSIGNED", which points to host path "/mnt/disks" and is behaving just fine.  I hope I can get to a solid understanding of why my current container path "/media/disks" shows my unassigned devices as folders but points to rootfs for each of them.


I think the problem is related to the way docker handles volumes.

 

From the docker documentation (https://docs.docker.com/storage/volumes/): "Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes."

And rprivate bind propagation is defined as: "The default. The same as private, meaning that no mount points anywhere within the original or replica mount points propagate in either direction."

 

On 5/28/2018 at 1:24 AM, remotevisitor said:

I think the problem is related to the way docker handles volumes.  From the docker documentation: "Volumes use rprivate bind propagation, and bind propagation is not configurable for volumes."
 

Does anyone have any insight into whether this is part of the issue?  E.g. why does browsing to /mnt/disks from /mnt within the container point to rootfs, while a separate mount that points to /mnt/disks directly does go to the partitions on the drives as intended?


I am away from my Unraid system so I cannot check, but I think maybe the solution to your problem is related to the slave option.
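For reference, the slave option is set per volume mapping when the container is created. A sketch, with the container name and image as placeholders; in Unraid this corresponds to setting the path's access mode to RW/Slave in the container template:

```shell
# 'slave' propagation lets mounts made on the host under /mnt/disks
# after container start become visible inside the container;
# plain 'rw' (rprivate) does not propagate them.
docker run -d --name krusader-test \
  -v /mnt/disks:/UNASSIGNED:rw,slave \
  example/krusader
```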

 


Ah, everything I had read seemed to be in reference to additional volumes, but it makes complete sense that it would apply to any volume.  If this is indeed the case... well, maybe I've missed a recommendation to point the "main" volume directly to user shares, thereby restricting the user from browsing to devices outside the array.  Or perhaps the recommendation was made to use RW/slave across the board and I've missed it?  Or perhaps this isn't the issue?


Unfortunately I've trashed my docker.img after this initially happened, way before realizing the "rootfs" destination issue we've been talking about in the last 10 posts.  I've also switched from the old Krusader docker to the new one from binhex, so all of that is long gone.

