devin.buell Posted March 21, 2023

I made the mistake of thinking I could reduce my cache pool size by removing two drives, and then restarted the array. This produced an error message saying too many disks were missing and that the pool was unmountable. I added the drives back in the same order and restarted the array, but I still get the same error. Is it possible to restore my pool, or do I need to manually pull the files off one of the drives and then rebuild the pool? Diagnostic files attached: buellvault-diagnostics-20230320-2130.zip
JorgeB Posted March 21, 2023

Post the output of:

```
btrfs fi show
```
devin.buell Posted March 23, 2023

Sorry for the delay, I was traveling. Output is below:

```
bad tree block 2386712559616, bytenr mismatch, want=2386712559616, have=0
Couldn't read tree root
Label: none  uuid: b242ecb7-0712-4962-8ae2-ac3d49ad64fd
	Total devices 4 FS bytes used 142.30GiB
	devid    3 size 465.76GiB used 82.03GiB path /dev/sdb1
	devid    4 size 465.76GiB used 82.03GiB path /dev/sde1
	*** Some devices missing
```
devin.buell Posted March 23, 2023

This is the output after I start the array:

```
root@BuellVault:~# btrfs fi show
Label: none  uuid: 19173620-859f-437e-b39c-fbe64123cf0a
	Total devices 1 FS bytes used 348.00KiB
	devid    1 size 1.00GiB used 126.38MiB path /dev/loop2

bad tree block 2386712559616, bytenr mismatch, want=2386712559616, have=0
Couldn't read tree root
Label: none  uuid: b242ecb7-0712-4962-8ae2-ac3d49ad64fd
	Total devices 4 FS bytes used 142.30GiB
	devid    3 size 465.76GiB used 82.03GiB path /dev/sdb1
	devid    4 size 465.76GiB used 82.03GiB path /dev/sde1
	*** Some devices missing
```
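The "Some devices missing" state can be read straight off this output: the header reports "Total devices 4", but only devids 3 and 4 are listed, so devids 1 and 2 are the members btrfs cannot find. A minimal sketch of that check, parsing the pasted output (the parsing code is my own illustration, not a btrfs tool, and it assumes devids run contiguously from 1, which holds for this pool):

```python
import re

# Sample input: the damaged pool's section of the `btrfs fi show` output above.
FI_SHOW = """\
Label: none  uuid: b242ecb7-0712-4962-8ae2-ac3d49ad64fd
	Total devices 4 FS bytes used 142.30GiB
	devid    3 size 465.76GiB used 82.03GiB path /dev/sdb1
	devid    4 size 465.76GiB used 82.03GiB path /dev/sde1
	*** Some devices missing
"""

def missing_devids(text):
    """Return the devids the filesystem expects but did not list."""
    total = int(re.search(r"Total devices (\d+)", text).group(1))
    present = {int(m) for m in re.findall(r"devid\s+(\d+)", text)}
    # Assumption: devids are contiguous from 1 (true here; btrfs devids
    # can be sparse on pools that have had devices removed and re-added).
    return sorted(set(range(1, total + 1)) - present)

print(missing_devids(FI_SHOW))  # -> [1, 2]
```

Those two missing devids turn out to be /dev/sdc1 and /dev/sdd1, which is why the recovery steps below target those partitions.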
JorgeB Posted March 23, 2023

Stop the array and post the output of:

```
btrfs-select-super -s 1 /dev/sdc1
```

and

```
btrfs-select-super -s 1 /dev/sdd1
```
devin.buell Posted March 23, 2023

Output below:

```
root@BuellVault:~# btrfs-select-super -s 1 /dev/sdc1
using SB copy 1, bytenr 67108864
root@BuellVault:~# btrfs-select-super -s 1 /dev/sdd1
using SB copy 1, bytenr 67108864
root@BuellVault:~#
```
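For context on what this did: btrfs keeps its primary superblock at a fixed 64 KiB offset on each device, with mirror copies at 64 MiB and (on large enough devices) 256 GiB. `btrfs-select-super -s 1` writes superblock copy 1 back over the damaged primary, which is why the tool reports "using SB copy 1, bytenr 67108864" (67108864 bytes is exactly 64 MiB). A quick sketch confirming the offsets:

```python
# Fixed btrfs superblock locations (byte offsets on each device):
SUPERBLOCK_OFFSETS = {
    0: 64 * 1024,                 # primary copy, 64 KiB
    1: 64 * 1024 * 1024,          # copy 1, 64 MiB
    2: 256 * 1024 * 1024 * 1024,  # copy 2, 256 GiB (only on large devices)
}

# "-s 1" selected copy 1; its offset matches the bytenr reported above.
print(SUPERBLOCK_OFFSETS[1])  # -> 67108864
```

If you want to inspect a backup copy before overwriting the primary, `btrfs inspect-internal dump-super -s 1 /dev/sdX1` prints it without modifying anything.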
JorgeB Posted March 23, 2023

> On 3/21/2023 at 9:13 AM, JorgeB said:
> Post the output of: btrfs fi show

again, please.
devin.buell Posted March 23, 2023

```
root@BuellVault:~# btrfs fi show
Label: none  uuid: b242ecb7-0712-4962-8ae2-ac3d49ad64fd
	Total devices 4 FS bytes used 142.30GiB
	devid    1 size 447.13GiB used 63.00GiB path /dev/sdc1
	devid    2 size 447.13GiB used 63.00GiB path /dev/sdd1
	devid    3 size 465.76GiB used 82.03GiB path /dev/sdb1
	devid    4 size 465.76GiB used 82.03GiB path /dev/sde1
root@BuellVault:~#
```
JorgeB Posted March 23, 2023 (marked as Solution)

OK, now with the array stopped: unassign all devices from that pool, start the array, stop the array, re-assign all 4 pool devices, and start the array. Post new diags if it doesn't mount.
devin.buell Posted March 23, 2023

You are amazing! The pool has been restored. Going to make a fresh backup of everything now!