Cornbreadman Posted July 10, 2023 (edited)

Had some issues with the server a week ago, so I had to rebuild the entire Plex library and build new parity. To put it lightly, I am done. Anyway, while waiting for all this to happen, I have been reading up on cache drives and whatnot, and decided now would be a good time to add an NVMe drive just for my Plex data. I currently have appdata and system files on cache.

Tonight the final step finished: I installed the new NVMe drive, assigned it to the cache, and in doing so forgot to back up the current cache at all. Started it all up and then I knew... I just f*cked up: Unmountable - No Pool UUID.

Anyway, it's all jacked up, as I just changed the cache drive to a cache pool using XFS where the old system was btrfs. I tried moving the caches around and can get "Unmountable - Wrong or no file system" on the old cache drive, but nothing more.

I have a test Unraid server that I popped the old cache into, and I was able to mount it as an Unassigned Device. I can see all the files and access them. I was going to copy them somewhere and then just format and copy back, but it's going to take over a day, and I am not even sure it works that way with the cache drive. Before I start that process, I just wanted to ask: is there any way to restore the file system? I tried some stuff listed in the FAQ and keep getting error messages, so no luck there.

I suck at anything non-Windows, to be honest, and rely on just following instructions for Unraid, so "trying" things on the fly is not good for me, as I probably have no idea what any of it means. Is it time for me to start screaming yet? I want to scream. Ugh, I've been trying to fix the server in general for a week and thought I was minutes away from being back up and running.

Update: after trying some of the stuff in the FAQ, I now can't get the drive to mount in Unassigned Devices, so that's great. The option is just greyed out.

Edited July 10, 2023 by DipNFalls
JorgeB Posted July 10, 2023

Connect the drive to the main server and post diags and the output of:

btrfs fi show
Cornbreadman (Author) Posted July 10, 2023

12 minutes ago, JorgeB said: Connect the drive to the main server and post diags and the output of: btrfs fi show

Nothing happens at all.

matrix-diagnostics-20230710-0547.zip
JorgeB Posted July 10, 2023

Which device was the old cache?
Cornbreadman (Author) Posted July 10, 2023

WDC_WDS500G2B0A_201681482904 - 500 GB (sdk)
JorgeB Posted July 10, 2023

Post the output of:

btrfs-select-super -s 1 /dev/sdk1
Cornbreadman Posted July 10, 2023 Author Share Posted July 10, 2023 No valid Btrfs found on /dev/sdk1 ERROR: open ctree failed Quote Link to comment
JorgeB Posted July 10, 2023

If the previous cache really was btrfs, it looks like the device was fully trimmed, so IMHO there's not much you can do now.
Cornbreadman (Author) Posted July 10, 2023

5 minutes ago, JorgeB said: If the previous cache really was btrfs, it looks like the device was fully trimmed, so IMHO there's not much you can do now.

No way to get the files anymore? I was thinking maybe there's some file recovery software out there. Not sure if you would know the answer to that.
JorgeB Posted July 10, 2023

You can try something like UFS Explorer, but btrfs not finding a backup superblock suggests the device was fully trimmed, and in that case those programs will also be unable to find anything.
Cornbreadman (Author) Posted July 10, 2023

Just now, JorgeB said: You can try something like UFS Explorer, but btrfs not finding a backup superblock suggests the device was fully trimmed, and in that case those programs will also be unable to find anything.

Okay, I am trying that now. Sorry for asking questions; as mentioned, I am a Linux moron. How about XFS? The new cache was installed with that file system, and that's what I thought screwed something up with the old one, changing its file system. Could it have been changed? I'm not even sure that is possible. I know at least enough to know it wouldn't be usable if the FS was changed, but thought maybe data recovery would work. I am just grasping at this point. I feel defeated and don't even know where to start rebuilding Docker.
JorgeB Posted July 10, 2023 (Solution)

You can run xfs_repair /dev/sdk1 to see if there was an xfs filesystem there.
Cornbreadman (Author) Posted July 10, 2023

root@Matrix:~# xfs_repair /dev/sdk1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
.found candidate secondary superblock...
verified secondary superblock...
writing modified primary superblock
sb realtime bitmap inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 129
resetting superblock realtime bitmap inode pointer to 129
sb realtime summary inode value 18446744073709551615 (NULLFSINO) inconsistent with calculated value 130
resetting superblock realtime summary inode pointer to 130
Phase 2 - using internal log
        - zero log...
        - scan filesystem freespace and inode maps...
sb_icount 0, counted 570016
sb_ifree 0, counted 126
sb_fdblocks 122036997, counted 47513804
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 3
        - agno = 2
        - agno = 1
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Note - stripe unit (0) and width (0) were copied from a backup superblock.
Please reset with mount -o sunit=<value>,swidth=<value> if necessary
done
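For context on what "bad primary superblock - bad magic number" means: each filesystem stamps a fixed signature at a known offset, and tools look for it before doing anything else. XFS writes the ASCII magic XFSB at byte 0 of the partition, while a btrfs superblock carries _BHRfS_M at byte 65600 (0x10040), which is why btrfs found no valid superblock here but xfs_repair did. A minimal sketch of that check, demonstrated on a scratch file rather than a real device (the same head command pointed at /dev/sdk1 would be a read-only peek):

```shell
# Fake an XFS superblock magic on a scratch file, then read it back the
# same way you could peek at a real partition (e.g. head -c 4 /dev/sdk1).
img=$(mktemp)
printf 'XFSB' > "$img"      # XFS magic lives at byte 0 of the partition
head -c 4 "$img"            # prints: XFSB
# For btrfs, the magic "_BHRfS_M" sits at byte 65600 instead, e.g.:
#   dd if=/dev/sdk1 bs=1 skip=65600 count=8 2>/dev/null
rm -f "$img"
```

Reading a few bytes like this changes nothing on disk, so it is a safe way to see which filesystem signature (if any) survives before running a repair tool that writes.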
itimpi Posted July 10, 2023

Looks like an XFS file system was found and fixed. You should now see if it will mount.
Cornbreadman (Author) Posted July 10, 2023

Okay, I can mount the drive again, and I can see the data over the Windows share. Now I need to get it saved somewhere!!!! Holy crap. Fingers crossed here. I hope this works. Thank you so much!
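Once the repaired filesystem mounts, the usual next step is to copy everything off before trusting the drive again. A minimal sketch, assuming the old cache is mounted at /mnt/disks/old_cache via Unassigned Devices and the backup goes to an array share (both paths are hypothetical; adjust them to your system):

```shell
# Copy the recovered cache contents to the array before doing anything
# else with the drive. -a preserves permissions, ownership and timestamps;
# re-running the same command resumes and only transfers what changed.
mkdir -p /mnt/user/backups/old_cache
rsync -avh --progress /mnt/disks/old_cache/ /mnt/user/backups/old_cache/
```

The trailing slash on the source matters: with it, rsync copies the directory's contents rather than creating an extra old_cache/old_cache level inside the destination.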