GossamerSolid Posted August 3, 2021

I think I might have accidentally killed my cache pool, which is incredibly frustrating. For some background, my server has 2x 1TB NVMe drives in the cache pool (set in a RAID1 config so that the entire 2TB can be used). 3xxGB were in use. I'm about to set up a second Unraid server and wanted to take one of the two drives over to the new machine. I looked up the instructions here and people said:

- Stop the array
- Remove the drive you want to remove
- Start the array

All this did was make the cache fail to mount. There were no progress bars or warnings about any other activity happening. I then panicked, stopped the array again, and put the second drive back into the cache pool. Same thing, same error: "Unmountable: no file system".

When I do a btrfs fi show on the drive label, this is the output:

warning, device 2 is missing
Label: none  uuid: f46e54ec-68b4-48d9-94c5-3b9039d0b25a
	Total devices 2 FS bytes used 373.74GiB
	devid    1 size 931.51GiB used 204.03GiB path /dev/nvme0n1p1
	*** Some devices missing

What troubles me here is that the "used" on device 1 clearly doesn't match the "used" for the entire filesystem, which tells me the entirety of my cache pool is destroyed and unrecoverable. What did I do wrong, and is there anything I can do to restore any amount of files?
JorgeB Posted August 3, 2021

13 minutes ago, GossamerSolid said:
set in a RAID1 config so that the entire 2TB can be used

Do you mean raid0? Raid1 mirrors the data, so it would only have 1TB usable.

13 minutes ago, GossamerSolid said:
All this did was make the cache fail to mount.

The GUI only supports removing devices from redundant pools, like raid1. The pool might still be salvageable; if you haven't rebooted yet, please post the diagnostics: Tools -> Diagnostics
GossamerSolid (Author) Posted August 3, 2021

Just now, JorgeB said:
Do you mean raid0? Raid1 mirrors the data, so it would only have 1TB usable.

Sorry, yes, I did mean raid0 (striped, I believe it's also called).

Just now, JorgeB said:
The pool might still be salvageable; if you haven't rebooted yet, please post the diagnostics: Tools -> Diagnostics

I've attached the diagnostics to the thread. I managed to get the drive to mount read-only at a temporary mount point using:

mount -o degraded,usebackuproot,ro /dev/sdX1 /x

from this FAQ. I'll wait for more instructions before mucking anything else up.

odysseus-diagnostics-20210803-0842.zip
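Side note for anyone following along: with the pool mounted read-only like this, it's prudent to copy anything irreplaceable off before attempting any repair. A minimal sketch, assuming the degraded mount above at /x and a hypothetical backup destination on the array (adjust the path to wherever you have enough free space):

mkdir -p /mnt/disk1/cache_backup          # hypothetical destination on the array
rsync -a --progress /x/ /mnt/disk1/cache_backup/   # copy everything off the read-only mount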
JorgeB Posted August 3, 2021

Stop the array and unmount the temp mount (umount /x), then on the console type:

btrfs-select-super -s 1 /dev/nvme1n1p1

After that, unassign the other cache device (leave the cache pool without any devices assigned), start the array so Unraid "forgets" the current cache config, stop the array, reassign both cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), then start the array; the pool should now mount normally.

If you then want to remove a device, it can be done manually while the pool is raid0, but the easiest way is to first convert the pool to raid1 and then remove one device.
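For that last step, a minimal sketch of the conversion, assuming the recovered pool mounts at /mnt/cache (the mount point is an assumption; adjust to your system):

# convert data and metadata chunks to raid1 so the pool becomes redundant
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache

# the balance rewrites every chunk, so it can take a while; check progress with:
btrfs balance status /mnt/cache

Once the balance completes, the device removal itself can be done from the Unraid GUI (stop the array, unassign the device, start the array), since, as noted above, the GUI supports removing devices from redundant pools.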
GossamerSolid (Author) Posted August 3, 2021

Running that command yields:

No valid Btrfs found on /dev/nvme1n1p1
Open ctree failed
JorgeB Posted August 3, 2021

Then something else was done after the wipe; you can try the recovery options in the FAQ.
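One of the FAQ's recovery options is btrfs restore, which walks whatever metadata survives and copies files out without writing to the damaged pool. A minimal sketch, assuming the same device as above and a hypothetical destination directory on the array:

mkdir -p /mnt/disk1/btrfs_rescue           # hypothetical destination on the array
# -v lists each file as it is recovered; -i ignores errors and keeps going
btrfs restore -v -i /dev/nvme1n1p1 /mnt/disk1/btrfs_rescue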
GossamerSolid (Author) Posted August 3, 2021

OK, I tried the restore options; it's always the same error, that no valid Btrfs was found. At least I have a week-old backup of my 3 VMs, so I can restore from those and redo the changes I made. All of my Docker instances are gone, though.

While I have your attention, what is the best way for me to completely clean both of those NVMe drives and then re-set up the cache pool with only one of them?
JorgeB Posted August 3, 2021

blkdiscard -f /dev/nvmeXn1
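For completeness, a sketch of that wipe for both drives in this thread, assuming they are still nvme0n1 and nvme1n1; blkdiscard irreversibly discards every block on the device, so confirm the names first:

lsblk -o NAME,SIZE,MODEL     # confirm which devices are the NVMe drives
blkdiscard -f /dev/nvme0n1   # discards all blocks; everything on the drive is lost
blkdiscard -f /dev/nvme1n1

After the discard, assigning a single blank drive to the pool and starting the array should let Unraid create a fresh filesystem on it.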