dominicmck Posted October 9, 2023

Hi all, I was trying to make some changes to my server and noticed that no changes were being saved. When I click Apply my browser just says "waiting for 192.168…" and after a while the values revert to what they were. This happened in Shares, Users, and Disk Settings. So I restarted my server, only to be met with Disk 1 "Unmountable: Unsupported or no file system". I have run short and full SMART tests and both are fine. I've had no errors on the disk before (3-year-old 4 TB IronWolf in an HP Gen8 MicroServer). Looking in the logs I see:

Oct 8 14:20:31 Tower kernel: XFS (md1p1): Mounting V5 Filesystem
Oct 8 14:20:31 Tower root: mount: /mnt/disk1: wrong fs type, bad option, bad superblock on /dev/md1p1, missing codepage or helper program, or other error.
Oct 8 14:20:31 Tower root: dmesg(1) may have more information after failed mount system call.
Oct 8 14:20:31 Tower kernel: XFS (md1p1): Corruption warning: Metadata has LSN (11:2428865) ahead of current LSN (11:2425326). Please unmount and run xfs_repair (>= v4.3) to resolve.
Oct 8 14:20:31 Tower kernel: XFS (md1p1): log mount/recovery failed: error -22
Oct 8 14:20:31 Tower kernel: XFS (md1p1): log mount failed
Oct 8 14:20:31 Tower emhttpd: shcmd (34): exit status: 32
Oct 8 14:20:31 Tower emhttpd: /mnt/disk1 mount error: Unsupported or no file system
Oct 8 14:20:31 Tower emhttpd: shcmd (35): rmdir /mnt/disk1

Using the GUI Check Filesystem Status I got:

Phase 1 - find and verify superblock...
superblock read failed, offset 0, size 524288, ag 0, rval -1
fatal error -- Input/output error

I also tried:

root@Tower:~# xfs_repair -v /dev/md1
/dev/md1: No such file or directory
/dev/md1: No such file or directory
fatal error -- couldn't initialize XFS library

I've reseated all the drives, but no change. Can someone please point me in the right direction for the next steps? Diagnostics attached. Thanks for your help.
Dom

tower-diagnostics-20231009-0716.zip
JorgeB Posted October 9, 2023

1 hour ago, dominicmck said:
    xfs_repair -v /dev/md1

It needs to be xfs_repair -v /dev/md1p1, but the result will likely be the same. Post the output and new diags after running it.
dominicmck Posted October 9, 2023 (Author)

Thanks @JorgeB. Ran again with your correction:

root@Tower:~# xfs_repair -v /dev/md1p1
/dev/md1p1: No such file or directory
/dev/md1p1: No such file or directory
fatal error -- couldn't initialize XFS library

Fresh diags attached.

tower-diagnostics-20231009-1104.zip
JorgeB Posted October 9, 2023

18 minutes ago, dominicmck said:
    /dev/md1p1: No such file or directory

The array must be started in maintenance mode.
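For readers hitting the same "No such file or directory": the /dev/mdXp1 device nodes only exist while the array is running, so xfs_repair has nothing to open until the array is started in maintenance mode. A minimal pre-flight sketch (the check_device helper is hypothetical, added here only for illustration):

```shell
# Check that an array device node exists before attempting xfs_repair.
# The /dev/mdXp1 nodes appear only while the Unraid array is started.
check_device() {
    dev="$1"
    if [ -e "$dev" ]; then
        # A dry run (-n) reports problems without modifying the filesystem.
        echo "device present -- safe to try: xfs_repair -n $dev"
    else
        echo "$dev not present -- start the array in maintenance mode first"
    fi
}

check_device /dev/md1p1
```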
dominicmck Posted October 9, 2023 (Author)

Apologies. Started in maintenance mode this time:

root@Tower:~# xfs_repair -v /dev/md1p1
Phase 1 - find and verify superblock...
        - block cache size set to 539608 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2425326 tail block 2425301
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
root@Tower:~#

Reading the error, I tried starting the array (to "Mount the filesystem"), but nothing changed. If I destroy the log, how much data might be lost, and is there a way to identify it so I can recover it from backup? Thanks.

tower-diagnostics-20231009-1149.zip
JorgeB Posted October 9, 2023

Usually -L is fine, just use it.
dominicmck Posted October 9, 2023 (Author)

Results look promising. The drive can be mounted by starting the array, and I can browse the file system. However, the parity drive is still disabled, even after a reboot. Any suggestions? Thanks.

root@Tower:~# xfs_repair -vL /dev/md1p1
Phase 1 - find and verify superblock...
        - block cache size set to 539608 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 2425326 tail block 2425301
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
sb_fdblocks 114794426, counted 115775119
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 5
        - agno = 4
Phase 5 - rebuild AG headers and trees...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (11:2428865) is ahead of log (1:2).
Format log to cycle 14.
XFS_REPAIR Summary    Mon Oct  9 12:28:03 2023

Phase           Start            End              Duration
Phase 1:        10/09 12:25:59   10/09 12:25:59
Phase 2:        10/09 12:25:59   10/09 12:26:09   10 seconds
Phase 3:        10/09 12:26:09   10/09 12:26:34   25 seconds
Phase 4:        10/09 12:26:34   10/09 12:26:35   1 second
Phase 5:        10/09 12:26:35   10/09 12:26:36   1 second
Phase 6:        10/09 12:26:36   10/09 12:26:49   13 seconds
Phase 7:        10/09 12:26:49   10/09 12:26:49

Total run time: 50 seconds
done

tower-diagnostics-20231009-1244.zip
JorgeB Posted October 9, 2023 (Solution)

11 minutes ago, dominicmck said:
    However, parity drive is still disabled, even after a reboot. Any suggestions?

That's expected; it will remain disabled until you re-sync it: https://docs.unraid.net/unraid-os/manual/storage-management#rebuilding-a-drive-onto-itself
dominicmck Posted October 10, 2023 (Author)

Re-sync worked as expected. All working now. @JorgeB Thanks so much for your help!