Flamingo Posted April 13, 2022

Hi fellow forum members,

I made a mistake and only realized my error a minute too late. I have four 8TB disks in my array and an additional 8TB disk that was mounted to a VM. I was replacing the disk mounted to the VM (both the old and the replacement disk were present), but in the process two disks from the array somehow got mounted to the VM. I can't think of how it happened.

While I was working on the VM I noticed there was some issue with the disks, and I cleared them with diskpart in Windows. This of course made all the data disappear on the two array disks. The parity seems to be OK, and I started a parity check. The dashboard shows the disks as roughly half full, which suggests the old data is still present. I busted the partition table on the two array disks for sure, but no formatting took place in diskpart; I think all it did was delete the partition tables.

How can I restore the original partition tables so the data is visible to the array again? Please help, as I'm in a bit of a panic...

Thanks,
Flamingo
Flamingo Posted April 13, 2022

Just a quick update. It seems that only one disk from the array was impacted (sdf). I think when I physically removed one of the replacement disks from the server, one of the array disks jumped into its place, and when I reconnected the physical disk, the array disk remained mounted to the VM. The parity check is running. I'm not sure it will be able to restore my missing files given the partition table on sdf is messed up. Please let me know if you have any suggestions.
JorgeB Posted April 13, 2022

Please post the diagnostics to see the actual array status.
Flamingo Posted April 13, 2022

Hi JorgeB, please see the diagnostics attached.

tower-diagnostics-20220413-1237.zip
Barnoe Posted April 13, 2022

Apologies, I've no suggestions as I'm new to this, but I do feel for you and I hope you save your data. I lost data on another server (not Unraid) and it's an awful feeling. That lesson brought me to Unraid and also a backup hard drive. Good luck 🤞
Flamingo Posted April 13, 2022

Unraid was working flawlessly so far, and still is; I messed it up... Since no formatting happened, I hope I can get the data back. Most of my family pics and old scanned documents are on that drive... 🙁
Flamingo Posted April 13, 2022

I got an error from Fix Common Problems. I don't think the parity disk can currently write to it. Should I just wipe the disk and reconnect it to the array? Since only one disk is impacted, the parity disk could rebuild the data.
itimpi Posted April 13, 2022

You could try starting the array with the disk not connected to see if Unraid successfully emulates it (including its contents), since the rebuild process is all about making a physical drive match an emulated one.
JorgeB Posted April 13, 2022

Running a parity check wasn't the best option, since parity will be updated. Since the disks were still mounting, the first thing is to make sure backups are up to date. The diags posted don't show issues with disk3; please post new ones.
Frank1940 Posted April 13, 2022

34 minutes ago, Flamingo said:
Most of my family pics and old scanned documents are on that drive... 🙁

If the emulated disk is readable, I would copy everything off of it first (even if I had to buy another disk to do it!!!). Now, for another thought... one should always have three copies of any data that cannot be replaced by googling.
Flamingo Posted April 13, 2022

Sooo, I messed this up big time. My initial assessment was correct. Steps I took in the past hour:
- I stopped the parity check.
- Turned off the server.
- Disconnected disk3.
- Powered on the server. Unraid started, but the array didn't. I found it strange that it didn't try to emulate disk3. I opted to start the array, as I thought emulation might kick in afterwards... At that point I realized that my original assessment was correct: when I was working with the two disks connected to the VM and removed and reconnected one, the disk allocations must have changed without me realizing it, and when the VM started for the second time it wasn't the two unassigned disks that were connected but the parity disk and disk3.
- I turned off the server again and reconnected disk3.
- The array didn't pick the disk up automatically after the reboot, probably because the partition table was missing, so I added disk3 as a new config.

The disk is now in the array, sort of, but not mounted. I have no idea how the parity disk is green?!? I have a feeling I accidentally deleted the partition table from it as well, since disk3 was never emulated. The data should still be present on disk3, but thinking back I might have done a quick format to NTFS. Is there a way to restore this?
Flamingo Posted April 13, 2022

tower-diagnostics-20220413-1348.zip
Flamingo Posted April 13, 2022

How can disk3 be emulated and unmountable at the same time?
itimpi Posted April 13, 2022

Any disk can be flagged as unmountable, regardless of whether it is emulated or not, if there is file system corruption or a problem recognising the partition.
Flamingo Posted April 13, 2022

How can I correct/recreate the partition table on disk3? The raw data should still be present on the disk, so it should largely be a logical issue, no?
JorgeB Posted April 13, 2022

If you hadn't run a parity check it would likely be much easier to recover. Check the filesystem on disk3; it might still be salvageable.
JorgeB Posted April 13, 2022

Forgot to mention: if the filesystem on the emulated disk can't be fixed, or there are a lot of lost+found files, you can try using testdisk on the actual disk to recover the deleted partition, then try to mount it outside the array with UD.
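For reference, a testdisk session along the lines JorgeB suggests looks roughly like this. This is a sketch, not a verified procedure for this exact setup: the device name /dev/sdf is taken from earlier in the thread and testdisk is an interactive tool, so the numbered comments describe the menu path rather than commands that run unattended.

```shell
# Assumptions: testdisk is installed on the Unraid box and the affected
# drive is /dev/sdf -- substitute your actual device before running.
testdisk /dev/sdf
# Inside the interactive UI:
# 1. Select the disk and the partition table type (large Unraid array
#    disks are typically GPT; testdisk usually auto-detects this).
# 2. Choose "Analyse", then "Quick Search", to scan for lost partitions.
# 3. If the old partition is found, select it and choose "Write" to
#    restore the partition table, then reboot and try mounting with UD.
```

Writing a partition table is destructive if the wrong entry is selected, so it is worth imaging the disk first if another drive is available.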
Flamingo Posted April 13, 2022

Thanks JorgeB. I stopped the array and restarted it in maintenance mode, ran the check with -nv, and this is what I got back:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.......................................... verified secondary superblock...
would write modified primary superblock
Primary superblock would have been modified.
Cannot proceed further in no_modify mode. Exiting now.

What option should I pass to the check command to allow it to write a modified primary superblock? I think I should use check -d. Does anyone have experience with this command in a similar situation?
JorgeB Posted April 13, 2022

11 minutes ago, Flamingo said:
What attribute should I run the check command to allow it to write a modified primary superblock?

Same but without -n, or nothing will be done.
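Putting the advice in this thread together, the repair sequence from the console looks roughly like this. It is a sketch under assumptions: the array is started in maintenance mode, and disk3 maps to /dev/md3 (repairing through the md device keeps parity in sync; the GUI's filesystem check runs the same tool).

```shell
# Assumption: disk3 is /dev/md3. Array must be in maintenance mode.

# Dry run first: -n reports problems without writing anything.
xfs_repair -nv /dev/md3

# Actual repair: same command without -n. It may refuse to proceed
# if the XFS log is dirty and ask you to mount the filesystem first.
xfs_repair -v /dev/md3

# Last resort, only if the filesystem cannot be mounted to replay the
# log: -L zeroes the log, which can itself cause some corruption.
xfs_repair -Lv /dev/md3
```

The safe order matters: try the plain repair and a mount before ever reaching for -L.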
Flamingo Posted April 13, 2022

Thanks JorgeB, I'm running check -v now. I will post the outcome here.
Flamingo Posted April 13, 2022

This is the result of the second check:

Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
verified secondary superblock...
writing modified primary superblock
- block cache size set to 6116280 entries
Phase 2 - using internal log
- zero log...
zero_log: head block 1429578 tail block 1429576
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

Could someone please advise what the next step would be?
itimpi Posted April 13, 2022

Run with the -L option; that is a common warning and does not normally indicate any increased likelihood of data loss.
Flamingo Posted April 13, 2022

Thanks itimpi. I started a check -L; I will check the results in the morning.
Flamingo Posted April 13, 2022

The process completed. There is quite a lot of information in the output. Can someone please check whether it looks OK and advise what the next step would be?

log14APR2022.zip
itimpi Posted April 14, 2022

That report indicates major corruption was found. It is likely that not all files have been recovered, and even for those that have, the filename information was not found and the files have been placed into a lost+found folder, so you can (if you want to) inspect them manually to determine their original names (using the Linux 'file' command to help by determining their type).
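The 'file' command itimpi mentions identifies a file's type from its contents rather than its name, which is exactly what you need when lost+found entries have only numeric names. A small illustration with made-up sample files (the directory and file names are just for the demo):

```shell
# Create throwaway samples that mimic lost+found entries with numeric names.
mkdir -p /tmp/lostfound-demo && cd /tmp/lostfound-demo
printf 'hello world\n' > 12345            # plain text content
printf 'hello world\n' | gzip > 67890     # compressed content

# 'file' reports the detected type for each entry based on its contents.
file 12345 67890
# Typical output:
# 12345: ASCII text
# 67890: gzip compressed data, ...
```

Running `file *` inside the real lost+found folder gives a quick overview of what survived, so images and documents can be renamed with sensible extensions and sorted.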