Disk Unmountable After Format


Solved by JorgeB


root@NAtaSha:~# sfdisk /dev/sdf

Welcome to sfdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Checking that no-one is using this disk right now ... OK

Disk /dev/sdf: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD4002FYYZ-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

sfdisk is going to create a new 'dos' disk label.
Use 'label: <name>' before you define a first partition
to override the default.

Type 'help' to get more information.

>>> 2048
The size of this disk is 3.6 TiB (4000787030016 bytes). DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).
Created a new DOS disklabel with disk identifier 0xa320b548.
Created a new partition 1 of type 'Linux' and of size 2 TiB.
   /dev/sdf1 :         2048   4294967295 (2T) Linux
/dev/sdf2: ^C

 

  • Solution

I'm afraid there's no trace of a filesystem signature at the start sector of either partition that Unraid could possibly use. Are you sure those disks were ever formatted?

 

If yes, either the partition is truncated or the disk was fully wiped; in any case I don't see an option to recover. Maybe try UFS Explorer, if the disk was not fully wiped.
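The signature check JorgeB describes can be reproduced with wipefs in no-act mode, or with a low-level blkid probe. This is my sketch, not his exact procedure; it's demonstrated on a scratch image stamped with a swap signature, while on the server you would aim the same commands at the real partitions (e.g. /dev/sdf1):

```shell
# Scratch-image demo of signature probing; erases nothing either way.
truncate -s 64M scratch.img
mkswap scratch.img > /dev/null       # stamp a known signature to find
wipefs --no-act scratch.img          # lists any signatures it recognises
blkid -p scratch.img                 # low-level probe of the same metadata
rm scratch.img
```

If both commands come back empty on a partition that used to hold data, the superblock at the partition start has been overwritten, which matches the diagnosis above.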


disk2 was definitely formatted when I first set up the NAS and was being used to store data in the array. That data stopped being accessible in the shares when the problem with disk2 began. I have not initiated any wipes or formats since disk2 first appeared as unmountable, and prior to that I had not formatted disk2 since I first set up the array. I have never wiped any of the drives.


I scanned disk2 with UFS Explorer and thankfully all the data is still there. I'm currently saving it off to another device as a backup.

 

I'll be out of town for the next three days, but after that I'll add disk2 back to the NAS and try your command.


I got all the data from disk2 copied onto an external device.

 

I put disk2 back in the NAS, started it up, re-assigned disk2 to the array, and started the array. It immediately started a Data-Rebuild. I'm letting it finish since it already started, but I don't expect it to fix anything since the emulated disk2 appeared to have the same problem. Hopefully that wasn't a mistake.

 

disk2 is still /dev/sdd. Here's the output from that command:

root@NAtaSha:~# fdisk -l /dev/sdd
Disk /dev/sdd: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD4002FYYZ-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1F571DE2-9EC9-409F-BEA0-B1CB5F8D39E5

Device     Start        End    Sectors  Size Type
/dev/sdd1     64 7814037134 7814037071  3.6T Linux filesystem
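A quick way to tell whether that partition's start was zeroed out (my suggestion, demonstrated on a scratch file rather than /dev/sdd1) is to hexdump the first few KiB; hexdump folds runs of identical bytes into a single "*" line, so a wiped region collapses to almost nothing:

```shell
# Scratch-file demo; on the real disk the input would be the partition,
# e.g.: dd if=/dev/sdd1 bs=1K count=16 2>/dev/null | hexdump -C
dd if=/dev/zero of=blank.img bs=1K count=16 2>/dev/null
hexdump -C blank.img | head -n 3     # all zeros fold into a lone '*' line
rm blank.img
```

A healthy filesystem shows recognizable magic bytes near the start instead (e.g. "_BHRfS_M" for btrfs or "XFSB" for xfs).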

 


The Data-Rebuild is complete and disk2 shows the same error as before, "Unmountable: Unsupported or no file system".

 

So what should I do now? My instinct is to try to format disk2 and disk3 again. Then, if that works, copy the recovered data back into the array. Is that all there is to it at this point?

 

I'm also uncertain about my original choice to go with btrfs. I don't know if that had anything to do with the issues I've had with disk2 or disk3, but my research into data recovery for disk2 seems to suggest the tooling and support for btrfs is still lacking compared with xfs. Would you recommend I switch to xfs? If so, is there a simple way I can switch disk1 and disk4 over to xfs as well?

10 minutes ago, CivBase said:

So what should I do now? My instinct is to try to format disk2 and disk3 again. Then, if that works, copy the recovered data back into the array. Is that all there is to it at this point?

That's what I would recommend.

 

10 minutes ago, CivBase said:

Would you recommend I switch to xfs? If so, is there a simple way I can switch disk1 and disk4 over to xfs as well?

Unless you care about the extra btrfs features, like checksums and snapshots, I generally recommend xfs for the typical user, since it tends to be more forgiving of issues and easier to recover when problems do occur.


I formatted disk2 and disk3 and got the data copied back into the array. Then I ran SMART tests and got errors on disk2 (screenshot attached). I guess that explains how the file system got corrupted. It's one of the new disks I got a few weeks ago, so that's a bummer.

 

I ordered a replacement and I'm currently using the unbalanced plugin to move all the data from disk2 onto disk4. I still have all the data I recovered from disk2, so if something goes wrong with the transfer I should still be able to restore it.

 

Screenshot from 2024-03-23 11-48-21.png
