LagazHagalaz Posted May 16, 2022

I recently had a server crash, and afterwards the last drive in my array lost its file system; before the crash it was XFS. I tried repairing it via xfs_repair -v -L /dev/md11, but it fails to find both the primary and secondary superblocks and then exits: "..Sorry, could not find valid secondary superblock. Exiting now."

laghagnas-diagnostics-20220516-0808.zip
JorgeB Posted May 16, 2022

No valid filesystem is being detected on that disk. Please post the output of:

fdisk -l /dev/sdj
LagazHagalaz Posted May 16, 2022

Here is the output of fdisk -l /dev/sdj:

Disk /dev/sdj: 1.82 TiB, 2000433496064 bytes, 3907096672 sectors
Disk model: DKS2C-H2R0SS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdj1          64 3907096671 3907096608  1.8T 83 Linux
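For context, the important detail in that fdisk listing is the partition start sector: Unraid data disks with 512-byte sectors normally carry a single partition starting at sector 64, so an intact table like the one above suggests the partition layout survived and only the filesystem inside it was damaged. A minimal sketch of that check, parsing a captured fdisk line (the hard-coded line mirrors the output above; on the server you would pipe fdisk -l /dev/sdj into awk instead):

```shell
# Partition line as captured from `fdisk -l /dev/sdj` in the post above.
line='/dev/sdj1          64 3907096671 3907096608  1.8T 83 Linux'

# Field 2 is the start sector; Unraid's standard layout puts data
# partitions at sector 64 on 512-byte-sector MBR disks.
start=$(echo "$line" | awk '{print $2}')
if [ "$start" -eq 64 ]; then
    echo "partition table looks like a standard Unraid data disk"
else
    echo "unexpected start sector: $start"
fi
```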
JorgeB Posted May 16, 2022

The partition is there. Now post the output of:

blkid
LagazHagalaz Posted May 16, 2022

Here you go:

root@LagHagNas:~# blkid
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
/dev/sdb1: UUID="2403352e-5569-49be-9953-91be1f336124" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="4a47d108-6bae-4838-aa1f-648baccd9832"
/dev/sdf1: UUID="5d03aab4-bdae-44b2-be6a-21186c7b4944" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdg1: UUID="02e3f919-c276-4095-8ee5-a51fbd179bc5" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdh1: UUID="695fb732-2d9e-4d48-b3c7-20255bcc0ef2" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdi1: UUID="1d8f0269-1495-467b-87c0-3e92fafb2744" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdk1: UUID="bdf6b63f-56f0-4d43-a58a-5aa17e612d4e" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdl1: UUID="88ee8f5c-9376-4800-82ab-95c706b8c8c9" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdm1: UUID="73ac54f5-12fc-4d17-976e-b0f4c4f8f29c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sde1: UUID="477d06af-ac52-4d56-a55c-77d2aa13e900" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="cee1f48c-0d92-43bc-97b7-dd59c2ca862a"
/dev/sdc1: UUID="be583a6a-2ab8-4c17-a9fe-267bfbb412c7" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="ce5d89c1-e466-47bc-9cb5-e7d7dde2c70e"
/dev/md1: UUID="be583a6a-2ab8-4c17-a9fe-267bfbb412c7" BLOCK_SIZE="512" TYPE="xfs"
/dev/md2: UUID="2403352e-5569-49be-9953-91be1f336124" BLOCK_SIZE="512" TYPE="xfs"
/dev/md3: UUID="477d06af-ac52-4d56-a55c-77d2aa13e900" BLOCK_SIZE="512" TYPE="xfs"
/dev/md4: UUID="1d8f0269-1495-467b-87c0-3e92fafb2744" BLOCK_SIZE="512" TYPE="xfs"
/dev/md5: UUID="02e3f919-c276-4095-8ee5-a51fbd179bc5" BLOCK_SIZE="512" TYPE="xfs"
/dev/md6: UUID="88ee8f5c-9376-4800-82ab-95c706b8c8c9" BLOCK_SIZE="512" TYPE="xfs"
/dev/md7: UUID="73ac54f5-12fc-4d17-976e-b0f4c4f8f29c" BLOCK_SIZE="512" TYPE="xfs"
/dev/md8: UUID="bdf6b63f-56f0-4d43-a58a-5aa17e612d4e" BLOCK_SIZE="512" TYPE="xfs"
/dev/md9: UUID="5d03aab4-bdae-44b2-be6a-21186c7b4944" BLOCK_SIZE="512" TYPE="xfs"
/dev/md10: UUID="695fb732-2d9e-4d48-b3c7-20255bcc0ef2" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd1: PARTUUID="728eeff7-dea1-4f88-9497-883e056da94c"
root@LagHagNas:~#
JorgeB Posted May 16, 2022

sdj1 (and md11) are missing from that list, which confirms something damaged the superblocks (and possibly more) on that disk. Other than using a file-recovery utility, the only other thing that comes to mind is to see whether the superblock is correctly emulated by parity. To do that you'd need to disable disk11: stop the array, unassign disk11, start the array, and post new diagnostics. Leave the actual disk11 untouched so you can still run a file-recovery utility on it later if needed.
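The "no valid filesystem" finding above boils down to whether the device still begins with the XFS superblock magic, the four bytes "XFSB" at offset 0. A minimal read-only sketch of that check, demonstrated on a scratch file here (the /tmp path and the printf stand-in are illustrative); pointing DEV at /dev/sdj1 or the emulated /dev/md11 instead would read only four bytes and change nothing on the disk:

```shell
# Scratch stand-in for a device; on the server: DEV=/dev/sdj1 (or /dev/md11).
DEV=/tmp/fake_superblock.img
printf 'XFSB' > "$DEV"    # a real XFS superblock also starts with these bytes

# Read only the first 4 bytes; non-destructive on a real device too.
magic=$(dd if="$DEV" bs=4 count=1 2>/dev/null)
if [ "$magic" = "XFSB" ]; then
    echo "XFS superblock magic present at offset 0"
else
    echo "no XFS magic at offset 0 (superblock overwritten or lost)"
fi
```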
LagazHagalaz Posted May 16, 2022

Okay, I disabled the drive and started the array (not in maintenance mode); here are the diagnostics from after that. I have the allocation method set to high-water, and this was the last disk in line, so would any personal data actually have been written to it?

laghagnas-diagnostics-20220516-0905.zip
JorgeB Posted May 16, 2022

At least the emulated disk has an XFS superblock:

May 16 06:03:06 LagHagNas kernel: XFS (md11): Invalid superblock magic number

But it's corrupt, so now try running xfs_repair on the emulated disk11.
LagazHagalaz Posted May 16, 2022

Just to clarify: the command would be xfs_repair -v /dev/md11, correct?
JorgeB Posted May 16, 2022

Yes. Run it after starting the array in maintenance mode, and if it asks for -L, use it.
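To put the whole exchange in order, here is a hedged sketch of the repair sequence on the emulated disk. The run helper only prints each command so nothing destructive executes as written; replace the echo with "$@" to run them for real, with the array started in maintenance mode so /dev/md11 exists but is not mounted:

```shell
# Print-only wrapper: swap `echo "+ $*"` for "$@" to actually execute.
run() { echo "+ $*"; }

run xfs_repair -n /dev/md11     # dry run first: reports damage, writes nothing
run xfs_repair -v /dev/md11     # the actual repair
# Only if xfs_repair refuses and explicitly asks for it; -L zeroes the
# metadata log, discarding any changes that were not yet replayed:
run xfs_repair -v -L /dev/md11
```

The dry run (-n, "no modify" mode) is worth the extra step: it shows the scale of the damage before anything is written, which matters here since the physical disk is being kept untouched as a recovery fallback.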