
(Solved) UnRaid 6.12.10 How to recover btrfs pool after removing all the disk from it


Solved by JorgeB


Originally I had two disks in a btrfs pool:

 

WDC_WD20EURS-63SPKY0_WD-WMC300767105 - 2 TB (sdc)
WDC_WD10EZEX-60WN4A0_WD-WCC6Y4HPY8L1 - 1 TB (sdb)

 

I planned to swap out the 2 TB drive and leave the 1 TB drive to serve in the pool. Google led me to this thread:

 

I followed the steps:

 

1. Stopped the pool.

2. Unassigned the 2TB drive from the pool.

3. Started the array after checking "Yes, I want to do this".

 

After performing these actions, my pool looked like this:

 

Archive		Not installed						btrfs	Unmountable: Unsupported or no file system
Archive 2	WDC_WD10EZEX-60WN4A0_WD-WCC6Y4HPY8L1 - 1 TB (sdb)		Unmountable: Unsupported or no file system
Slots		2

 

Contrary to what is said in that thread, no balance occurred and Unraid did not prompt me about anything regarding the pool. Instead, a "Format" option appeared in the array tab that says:

 

Unmountable disk present:
Archive 2 • WDC_WD10EZEX-60WN4A0_WD-WCC6Y4HPY8L1 (sdb)

Format will create a file system in all Unmountable disks.
☐ Yes, I want to do this

 

No, I don't want to format my disk and lose all the data. I then performed the following actions:

 

1. I stopped the array

2. I navigated to the pool, clicked on the now "Unassigned" pool device, and changed it back to the 2 TB disk I had just unassigned.

 

The pool now looks like this:

 

Archive		WDC_WD20EURS-63SPKY0_WD-WMC300767105 - 2 TB (sdc)		All existing data on this device will be OVERWRITTEN when array is Started
Archive 2	WDC_WD10EZEX-60WN4A0_WD-WCC6Y4HPY8L1 - 1 TB (sdb)		
Slots		2

 

I realized something must have gone terribly wrong, and I didn't want to end up losing all the existing data. In the panic I removed all devices from the pool, so now both of them are unassigned.

 

What should I do to reconstruct the pool and get the data back?

 

 

3 minutes ago, Lanhua said:

Probably not

The FAQ entry does mention:


You can only remove devices from redundant pools (raid1, raid5/6, raid10, etc)

 

The pool may still be recoverable if the old device was wiped but is otherwise intact; post the output of

btrfs fi show

 

1 hour ago, JorgeB said:

The FAQ entry does mention:

 

The pool may still be recoverable if the old device was wiped but is otherwise intact; post the output of

btrfs fi show

 

Output of btrfs fi show

 

root@EryingAIO-NAS:/mnt# btrfs fi show
Label: none  uuid: 3a54dcd8-7ff4-4ca7-a67f-fbcda98cc6b1
	Total devices 1 FS bytes used 271.12GiB
	devid    1 size 931.51GiB used 277.02GiB path /dev/nvme0n1p1

Label: none  uuid: a5964ab5-1bd5-48fa-a993-e3523cb675ea
	Total devices 1 FS bytes used 17.75GiB
	devid    1 size 40.00GiB used 27.52GiB path /dev/loop2

Label: none  uuid: 3033ad3f-2386-493d-a0f7-e184d47bb3cd
	Total devices 1 FS bytes used 2.12MiB
	devid    1 size 1.00GiB used 126.38MiB path /dev/loop3

 


Output of fdisk -l

 

root@EryingAIO-NAS:/mnt# fdisk -l
Disk /dev/loop0: 63.4 MiB, 66482176 bytes, 129848 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

... Loop device and boot USB are omitted ...

Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 980 PRO 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaa4acb76

Device         Boot Start        End    Sectors   Size Id Type
/dev/nvme0n1p1       2048 1953525167 1953523120 931.5G 83 Linux


Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-60W
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EURS-63S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

... array drives are omitted ...

Disk /dev/md1p1: 3.64 TiB, 4000786976768 bytes, 7814037064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 


Screenshot of Main Tab

 

Screenshot 2024-06-21 at 13-04-39 EryingAIO-NAS_Main.png

9 hours ago, JorgeB said:

Assuming the pool disks are still sdb and sdc, type:

 

sfdisk /dev/sdb


then type

64

and hit enter, then post the results from that.

 

root@EryingAIO-NAS:~# sfdisk /dev/sdb

Welcome to sfdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Checking that no-one is using this disk right now ... OK

Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-60W
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

sfdisk is going to create a new 'dos' disk label.
Use 'label: <name>' before you define a first partition
to override the default.

Type 'help' to get more information.

>>> 64
Created a new DOS disklabel with disk identifier 0xc69b999f.
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Partition #1 contains a btrfs signature.

Do you want to remove the signature? [Y]es/[N]o: Y
The signature will be removed by a write command.
   /dev/sdb1 :           64   1953525167 (931.5G) Linux

 

I didn't write.

8 minutes ago, Lanhua said:

I didn't write.

Glad you didn't, because I only asked for the output after typing 64; you must not remove the signature. Hit CTRL+C to abort, then start over, but this time keep the signature and then write. Repeat the same for the other disk, but only write if a signature is found, then post the output of btrfs fi show again.

Just now, JorgeB said:

Glad you didn't, because I only asked for the output after typing 64; you must not remove the signature. Hit CTRL+C to abort, then start over, but this time keep the signature and then write. Repeat the same for the other disk, but only write if a signature is found, then post the output of btrfs fi show again.

root@EryingAIO-NAS:~# sfdisk /dev/sdb

Welcome to sfdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Checking that no-one is using this disk right now ... OK

Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-60W
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

sfdisk is going to create a new 'dos' disk label.
Use 'label: <name>' before you define a first partition
to override the default.

Type 'help' to get more information.

>>> 64
Created a new DOS disklabel with disk identifier 0xeee8ae42.
Created a new partition 1 of type 'Linux' and of size 931.5 GiB.
Partition #1 contains a btrfs signature.

Do you want to remove the signature? [Y]es/[N]o: n
   /dev/sdb1 :           64   1953525167 (931.5G) Linux
/dev/sdb2: write

New situation:
Disklabel type: dos
Disk identifier: 0xeee8ae42

Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1          64 1953525167 1953525104 931.5G 83 Linux

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
root@EryingAIO-NAS:~# sfdisk /dev/sdc

Welcome to sfdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Checking that no-one is using this disk right now ... OK

Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20EURS-63S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

sfdisk is going to create a new 'dos' disk label.
Use 'label: <name>' before you define a first partition
to override the default.

Type 'help' to get more information.

>>> 64
Created a new DOS disklabel with disk identifier 0x8baeb46b.
Created a new partition 1 of type 'Linux' and of size 1.8 TiB.
Partition #1 contains a btrfs signature.

Do you want to remove the signature? [Y]es/[N]o: n
   /dev/sdc1 :           64   3907029167 (1.8T) Linux
/dev/sdc2: write

New situation:
Disklabel type: dos
Disk identifier: 0x8baeb46b

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdc1          64 3907029167 3907029104  1.8T 83 Linux

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

 

2 minutes ago, JorgeB said:

 

Sorry, I'm a little bit too nervous.

 

root@EryingAIO-NAS:~#  btrfs fi  show
Label: none  uuid: 3a54dcd8-7ff4-4ca7-a67f-fbcda98cc6b1
	Total devices 1 FS bytes used 267.16GiB
	devid    1 size 931.51GiB used 276.02GiB path /dev/nvme0n1p1

Label: none  uuid: a5964ab5-1bd5-48fa-a993-e3523cb675ea
	Total devices 1 FS bytes used 18.87GiB
	devid    1 size 40.00GiB used 27.52GiB path /dev/loop2

Label: none  uuid: 3033ad3f-2386-493d-a0f7-e184d47bb3cd
	Total devices 1 FS bytes used 2.05MiB
	devid    1 size 1.00GiB used 126.38MiB path /dev/loop3


Label: none  uuid: 5299a408-bd1a-4fae-b0fa-6a44f0cd39f0
	Total devices 2 FS bytes used 464.65GiB
	devid    1 size 1.82TiB used 509.05GiB path /dev/sdc1
	devid    2 size 931.51GiB used 123.03GiB path /dev/sdb1

 

I also noticed that these drives now appear as mountable under Unassigned Devices.

 


17 minutes ago, JorgeB said:

Assign both devices to a new pool, leave the fs to auto, start the array and it should mount again.

 

It works and all the data is intact. I'm proceeding to back up the data to the array to prevent such an accident from happening again. Thank you so much for your help!

