
Accidentally formatted disk [SOLVED]



I have 8 disks, a mixture of 3TB and 4TB drives plus 1 parity, running under 6.8.2. Disk 4 showed signs of failure, so I backed it up to another array as a precaution and pulled a spare 4TB drive that I keep for disk failures to swap in while I submitted a warranty claim.
When I powered the box on again, it showed Disk 6 as not installed and Disk 4 as having no file system. That seemed a little weird, but I figured maybe I'd bumped a cable while swapping Disk 4. I opted to format Disk 4 before opening the case again to check on Disk 6, but discovered that, despite Disk 6 claiming to be not installed, it got formatted too. I assume Disk 6 must have also failed while I was swapping Disk 4. Even after the format, Disk 6 is still displayed as not installed.
Disk 4 is no big deal since I took a backup before starting. Disk 6 I'm a little puzzled about. My critical data is backed up, so I can sync my most recent backup to my shares and get that back. To my mind, formatting the disk, even though the drive had seemingly failed, should tell Unraid that I don't care about that data.

I am curious if perhaps I don't understand this as well as I think, and whether there is a way to make Disk 6 appear as failed again, or to convince Unraid to rebuild that disk from parity and restore the non-critical data once I get another drive in there.

Edited by Cyber-Wizard

Connection issues on 2 disks, disk4 and another that might have been disk6, serial ending EZJ3.

 

The syslog shows you formatted both disk4 and disk6:

Sep 28 19:45:17 durinstower emhttpd: shcmd (484): /sbin/wipefs -a /dev/md4
Sep 28 19:45:17 durinstower root: /dev/md4: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42
...
Sep 28 19:47:19 durinstower emhttpd: shcmd (489): /sbin/wipefs -a /dev/md6
Sep 28 19:47:19 durinstower root: /dev/md6: 4 bytes were erased at offset 0x00000000 (xfs): 58 46 53 42
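As an aside, those four bytes aren't arbitrary: hex 58 46 53 42 is the ASCII string "XFSB", the XFS superblock magic number. So `wipefs -a` only erased the filesystem signature at the start of each md device rather than zeroing the data. A quick sketch confirming the decoding (using POSIX octal escapes in `printf`, since hex escapes aren't guaranteed in every shell):

```shell
# Decode the bytes wipefs reported erasing. Hex 58 46 53 42 is
# octal \130\106\123\102, which spells the XFS superblock magic.
printf '\130\106\123\102\n'   # prints: XFSB
```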

Did you actually unassign disk 6 before formatting? If not, then presumably it formatted all assigned disks that weren't mountable.

 

Since you have backups that will be the simplest way to recover.

 

Of course, formatting isn't the way to fix unmountable disks. You should have asked for help before doing anything.
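For anyone finding this later, the usual first step for an unmountable XFS disk on Unraid is a read-only filesystem check against the md device, with the array started in maintenance mode so parity stays valid. A hedged sketch, assuming disk 6 is the affected disk (the `/dev/md6` path matches the 6.8.x device naming shown in the syslog above):

```shell
# Sketch only -- run from the Unraid console with the array started
# in maintenance mode. Checking the md device keeps parity in sync.
xfs_repair -n /dev/md6    # -n: no-modify mode, report problems only
# If problems are reported, run again without -n to let xfs_repair fix them.
```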


Yeah, I got cocky since I knew I had a backup of Disk 4 and Disk 6 "appeared" to be disconnected. I presumed I'd be able to format Disk 4 on its own (it was a replacement for an already-failed disk, so data loss wasn't a concern) and then address my issues with Disk 6 afterward. Needless to say, I was a little surprised to find that Disk 6 was connected. I got suckered in by the "not installed" message, didn't think, and wound up wiping Disk 6.
In hindsight, unassigning Disk 6 would have been the logical move, but as I said... I got cocky. It won't be an issue to restore the critical data. The majority of what was on Disk 6 was non-critical but nice to have; I just don't have much of an idea what was lost.

Thank you for your help!

7 hours ago, trurl said:

Connection issues on 2 disks, disk4 and another that might have been disk6, serial ending EZJ3.

Looks like you are using a Marvell controller for four of your drives, two of which are the ones you were having issues with in this post. I don't believe Marvell controllers are recommended, and all of your connection issues may be caused by this card. It could also have been the cause of the corrupted file system on disk6.

 

As for formatting disk6, judging by your logs, it looks like disk6 (sdh) drops in and out, presumably because of connection issues. I can only guess that it drops out of its "disk 6" assignment while still maintaining an md6 presence, so the format is performed against md6. Just my educated guess; in that case, maybe this is a bug.


Just to provide closure on the original post (I hate reading posts and not finding out if everything got resolved)...
My Marvell controller had been installed in a PCIe x1 slot, and I didn't have enough free slots to put in a new card. My original motherboard didn't have onboard video, so my two x16 slots were taken up by a video card and a dual-port NIC. Astonishingly, I didn't have a simple PCI VGA card on hand (remember when we all had boxes of the things?). I swapped in an i5-4460-based motherboard with onboard video so I could install both the NIC and the 9211-8i. I flashed the 9211-8i to IT mode and booted Unraid. The onboard NIC was picked up as eth0, so I didn't need to manually re-add it to the NIC bond. Unraid ran parity recovery on disk 6, even though there wasn't anything to recover after the format.
About an hour before the parity sync finished, the health check emailed me that disk 6 was invalid, but after the sync completed everything checked out just fine. I restored the critical data from the backup NAS and sifted through Kodi to get any missing movies and TV shows re-added to Sonarr and CouchPotato. Time will tell what other non-critical data was wiped out.

