KW52981 Posted April 26, 2023

Complete noob here, and fairly lost. I was looking to remove one of my two NVMe cache pool drives and attempted to follow this post. After removing the second drive and starting the array, I received "Unmountable: too many missing/misplaced devices" on the remaining cache drive. I have attempted to remove all cache devices, disable Docker/VMs, and add them back (as I have seen in a couple of posts), and I still end up with the above message. How screwed am I?

tower-diagnostics-20230426-1418.zip
JorgeB Posted April 26, 2023 (Solution)

Apr 2 12:00:59 Tower kernel: BTRFS warning (device nvme0n1p1): devid 2 uuid 73cb595a-267c-41db-8255-e0208da3535d is missing

The pool was already missing a device; you cannot remove another one until that is fixed. The problem is that there's a lot of data corruption detected on the pool, so the missing device may fail to delete. Try this to see if you can get the existing pool back. Stop the array, then type on the console:

btrfs-select-super -s 1 /dev/nvme1n1p1

Then unassign the remaining pool member, start the array, stop the array, and re-assign both pool members. Start the array, and if the pool mounts, run a correcting scrub and post the results together with new diags.
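For reference, the scrub can also be run from the console instead of the GUI. A minimal sketch, assuming the pool is mounted at /mnt/cache (your mount point may differ):

# Start a scrub; on a read-write mount, btrfs repairs any
# correctable checksum errors from the good copy as it goes.
btrfs scrub start /mnt/cache

# Check progress and the running error counts.
btrfs scrub status /mnt/cache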
KW52981 Posted April 26, 2023

Yaaaasssss. Thank you! Scrubbing now. Will follow up with results and diags in a bit for reference.
KW52981 Posted April 26, 2023

Attempted the scrub twice, and it aborted at about 57% both times.

tower-diagnostics-20230426-1418.zip
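Assuming the pool is mounted at /mnt/cache, the per-device error counters and the kernel log should show what is tripping the abort:

# Cumulative per-device error counters
# (read, write, flush, corruption, generation).
btrfs dev stats /mnt/cache

# Recent btrfs kernel messages, which include the logical
# addresses of any uncorrectable errors the scrub hit.
dmesg | grep -i btrfs | tail -n 50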
JorgeB Posted April 27, 2023

Since there are uncorrectable errors, you'll need to back up what you can and re-format the pool.
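A minimal sketch of the backup step, assuming the pool is at /mnt/cache and using a hypothetical destination of /mnt/user/backup (both paths are examples only, not your actual layout):

# Copy everything readable off the pool before re-formatting.
# rsync logs any files it cannot read and carries on with the
# rest, so the uncorrectable files simply won't be copied.
rsync -av /mnt/cache/ /mnt/user/backup/cache/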
KW52981 Posted April 27, 2023

I ended up chasing down the files that were throwing the errors in the syslog and removed them. All were media files, so no real loss there. All containers and VMs seem to be working as expected. The scrub now comes up clean, but it still aborts around the 57% point. How big of a worry is that?
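Assuming the scrub's uncorrectable checksum errors were logged to /var/log/syslog with the usual "(path: ...)" suffix (the exact log format can vary), the affected files can be listed like this:

# List the unique file paths named in btrfs checksum-error lines.
grep -o 'path: [^)]*' /var/log/syslog | sort -u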
JorgeB Posted April 27, 2023

It's still a problem; I would recommend backing up and re-formatting the pool.