Abstract7227 Posted October 7, 2022

Hi all,

My Nextcloud server recently shut down ungracefully. Upon reboot, I noticed that some files listed in my cloud were no longer accessible. On closer inspection, one of my drives was not mounting, showing an "Unmountable: Wrong or no file system" error.

My current setup (diagnostics attached):
5x 4TB drives (2 parity, 3 data), encrypted XFS (not sure if that changes the recovery process)
1 cache pool of 2x 1TB NVMe drives in RAID1, encrypted btrfs

I do have a versioned backup of all my files, but I hope I can avoid restoring from it, as that would be a very slow process.

trantor-diagnostics-20221007-0822.zip
JorgeB Posted October 7, 2022

Check the filesystem on disk1.
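For reference, the read-only check can be run from a console session with the array started in maintenance mode. A minimal sketch, assuming encrypted disk1 is exposed as /dev/mapper/md1 (the device name varies by Unraid version, so verify it before running anything; the Check button on the disk's settings page runs the same tool):

# Read-only check of disk1; -n makes no changes, -v is verbose.
# /dev/mapper/md1 is an assumed device name for an encrypted disk1.
xfs_repair -nv /dev/mapper/md1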
Abstract7227 Posted October 7, 2022 (Author)

Thx for the response. I ran xfs_repair -nv:

Phase 1 - find and verify superblock...
        - block cache size set to 1480768 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2972053 tail block 2972049
ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
sb_fdblocks 122015995, counted 128830882
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
inode 6959670454 - bad extent starting block number 4503567551028270, offset 0
correcting nextents for inode 6959670454
bad data fork in inode 6959670454
would have cleared inode 6959670454
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 3
        - agno = 2
entry "redacted.txt" at block 4 offset 2368 in directory inode 6959661560 references free inode 6959670454
        would clear inode number in entry at offset 2368...
inode 6959670454 - bad extent starting block number 4503567551028270, offset 0
correcting nextents for inode 6959670454
bad data fork in inode 6959670454
would have cleared inode 6959670454
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
entry "redacted2.txt" in directory inode 6959661560 points to free inode 6959670454, would junk entry
bad hash table for directory inode 6959661560 (no data entry): would rebuild
would rebuild directory inode 6959661560
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
No modify flag set, skipping filesystem flush and exiting.

        XFS_REPAIR Summary    Fri Oct 7 11:04:44 2022

Phase           Start           End             Duration
Phase 1:        10/07 11:04:06  10/07 11:04:07  1 second
Phase 2:        10/07 11:04:07  10/07 11:04:08  1 second
Phase 3:        10/07 11:04:08  10/07 11:04:28  20 seconds
Phase 4:        10/07 11:04:28  10/07 11:04:28
Phase 5:        Skipped
Phase 6:        10/07 11:04:28  10/07 11:04:44  16 seconds
Phase 7:        10/07 11:04:44  10/07 11:04:44

Total run time: 38 seconds

I am not quite sure what to do. If I run xfs_repair without the -n flag, it says something like "The filesystem has valuable metadata changes in a log, mount the drive first", which I cannot do in its current state. Does that mean I have to run xfs_repair with the -L flag while in maintenance mode?
JorgeB Posted October 7, 2022 (Solution)

19 minutes ago, Abstract7227 said:
"Does that mean I have to run xfs_repair with the -L flag while in maintenance mode?"

Correct.
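A minimal sketch of the repair step, under the same assumption that encrypted disk1 is /dev/mapper/md1 and the array is started in maintenance mode (in the Unraid GUI this can typically be done by adding -L to the options box of the disk's Check Filesystem section):

# Without -n, xfs_repair refuses to run while the log is dirty and
# suggests mounting to replay it; since the disk won't mount, -L
# zeroes the log and proceeds. Any metadata changes still sitting in
# the log are discarded, which is why orphans can land in lost+found.
xfs_repair -L /dev/mapper/md1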
Abstract7227 Posted October 7, 2022 (Author)

Hi JorgeB,

Thx for the guidance. The disk is mounted again and the files I could not previously access are back. Should I be worried about data loss? I assume the parity check will find some errors? Can I assume the "bad" files will be the ones that were last written?
JorgeB Posted October 7, 2022

Look for a lost+found folder on that disk; any lost or incomplete files would go there. Parity might find sync errors because of the unclean shutdown, not because of the filesystem repair.

10 minutes ago, Abstract7227 said:
"Can I assume the "bad" files will be the ones that were last written?"

Any data being written at the time of the unclean shutdown can be damaged or lost; the remaining data should be fine.
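A quick way to check from the console, assuming the standard Unraid mount point /mnt/disk1 for disk1:

# Does a lost+found folder exist on disk1?
ls -ld /mnt/disk1/lost+found 2>/dev/null || echo "no lost+found on disk1"
# If it exists, list what the repair orphaned there:
find /mnt/disk1/lost+found -ls 2>/dev/null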
Abstract7227 Posted October 7, 2022 (Author)

Is that in the root directory of that disk? I can't see any lost+found folder (I also looked for hidden folders). I also noticed that all my shares are configured as high-water, and that another disk is currently the one being filled during a move. So I guess I am safe from any data loss then?
JorgeB Posted October 7, 2022

5 minutes ago, Abstract7227 said:
"Is that in the root directory of that disk?"

Yes. Go to Shares; if there's no lost+found share, it's because one doesn't exist, and that's good news.
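For completeness, a one-liner to confirm across all array disks, assuming the standard /mnt/diskN mount points:

# A lost+found share only appears if at least one disk has the folder.
ls -d /mnt/disk*/lost+found 2>/dev/null || echo "no lost+found on any disk"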
Abstract7227 Posted October 7, 2022 (Author)

awesome thx