shabos Posted August 28, 2023

Hi. For some reason my drive was unmountable after a restart (I am regretting switching from ReiserFS!). The primary superblock is bad, and the search for the secondary superblock has been running for almost 12 hours on what is only a 2TB drive. It seems the filesystem was corrupted because I had to reboot while the drives were unmounting; I assume it was this drive that the unmount was stuck on.

Do I stop the operation and add -L to the parameters? How do I check whether the drive itself is causing the filesystem corruption? Any helpful suggestions would be appreciated.

Attachment: bitpartnas-syslog-20230828-2054.zip
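For anyone landing on this thread later, the usual escalation with xfs_repair is: read-only check first, plain repair second, and -L only as a last resort, because zeroing the log can discard recently written metadata. A sketch that just prints the commands in order rather than running them (the device name /dev/md6p1 for Disk 6 is an assumption; on Unraid you would normally repair the md device so parity stays in sync):

```shell
# Sketch only: prints the xfs_repair escalation order; it does not
# touch any disk. /dev/md6p1 is an assumed device name for Disk 6.
xfs_repair_plan() {
  local dev=$1
  printf 'xfs_repair -n %s\n' "$dev"  # 1) dry run: report problems, change nothing
  printf 'xfs_repair %s\n' "$dev"     # 2) normal repair (replays the journal if possible)
  printf 'xfs_repair -L %s\n' "$dev"  # 3) last resort: zeroes the journal, may lose metadata
}

xfs_repair_plan /dev/md6p1
```

Verify the device name against your own array before running any of these for real.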
shabos Posted August 28, 2023 (Author)

I notice that the drive in question (Disk 6) doesn't appear in the unmount log; this possibly caused the array to stall on unmount and forced me to reboot.

Aug 28 18:24:37 BitpartNas emhttpd: shcmd (172): /usr/sbin/zfs unmount -a
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (173): umount /mnt/user0
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (174): rmdir /mnt/user0
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (175): umount /mnt/user
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (176): rmdir /mnt/user
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (178): /usr/local/sbin/update_cron
Aug 28 18:24:37 BitpartNas emhttpd: Unmounting disks...
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (179): umount /mnt/disk1
Aug 28 18:24:37 BitpartNas kernel: XFS (md1p1): Unmounting Filesystem
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (180): rmdir /mnt/disk1
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (181): umount /mnt/disk2
Aug 28 18:24:37 BitpartNas kernel: XFS (md2p1): Unmounting Filesystem
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (182): rmdir /mnt/disk2
Aug 28 18:24:37 BitpartNas emhttpd: shcmd (183): umount /mnt/disk3
Aug 28 18:24:38 BitpartNas emhttpd: shcmd (184): rmdir /mnt/disk3
Aug 28 18:24:38 BitpartNas emhttpd: shcmd (185): umount /mnt/disk4
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (186): rmdir /mnt/disk4
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (187): umount /mnt/disk5
Aug 28 18:24:39 BitpartNas kernel: XFS (md5p1): Unmounting Filesystem
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (188): rmdir /mnt/disk5
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (189): umount /mnt/disk8
Aug 28 18:24:39 BitpartNas kernel: XFS (md8p1): Unmounting Filesystem
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (190): rmdir /mnt/disk8
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (191): umount /mnt/disk9
Aug 28 18:24:39 BitpartNas kernel: XFS (md9p1): Unmounting Filesystem
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (192): rmdir /mnt/disk9
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (193): umount /mnt/disk10
Aug 28 18:24:39 BitpartNas kernel: XFS (md10p1): Unmounting Filesystem
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (194): rmdir /mnt/disk10
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (195): umount /mnt/disk11
Aug 28 18:24:39 BitpartNas kernel: XFS (md11p1): Unmounting Filesystem
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (196): rmdir /mnt/disk11
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (197): umount /mnt/cache
Aug 28 18:24:39 BitpartNas emhttpd: shcmd (198): rmdir /mnt/cache
JorgeB Posted August 28, 2023

20 minutes ago, shabos said:
now the primary superblock is bad and searching for the secondary superblock has run almost 12 hours - and it's only a 2TB drive.

Are you using the GUI, or typing the command in the CLI? If the latter, post the command.

17 minutes ago, shabos said:
I notice on the unmount - the drive in question wasn't in the unmount script: (Disk 6) - this possibly caused the array to stall on unmount and forced me to reboot.

That suggests to me that the disk was already unmounted when that shutdown occurred.
shabos Posted August 28, 2023 (Author)

I'm using the GUI, with the -n parameter.
JorgeB Posted August 28, 2023

And you are sure that disk was formatted before? Post the output of blkid and fdisk -l /dev/sdl.
shabos Posted August 28, 2023 (Author)

It was definitely formatted and running; I used unBalance to fill it.
shabos Posted August 28, 2023 (Author)

13 minutes ago, JorgeB said:
And you are sure that disk was formatted before? Post the output of blkid and fdisk -l /dev/sdl

Can I run this while the repair is still going? Should I cancel the repair? (It still hasn't found the secondary superblock.)
shabos Posted August 28, 2023 (Author)

root@BitpartNas:~# blkid
/dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="B4EA-2880" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="04dd5721-01"
/dev/loop1: TYPE="squashfs"
/dev/sdf1: UUID="03d2824f-a980-43d5-8c1f-3eb12afdff6b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="4bbd886f-1b64-4713-b938-1fd5f7f7d00e"
/dev/md9p1: UUID="c90c83cb-1c67-431f-8fdb-f0efca047f4c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdd1: UUID="c90c83cb-1c67-431f-8fdb-f0efca047f4c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="75b100a4-1383-468c-9afa-574ef8384a62"
/dev/md2p1: UUID="391043f8-a362-42b4-9ca2-4d0a290f7b04" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdm1: UUID="c87871ca-05e8-4519-9179-447c1fb0de8c" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="6b60ec31-8d56-4d0b-a9be-e3cd44de10b1"
/dev/sdb1: UUID="d97cc953-ee9a-404f-83ba-c998aa13e771" UUID_SUB="063d30a7-a090-43fe-b6e3-7c2873febdec" BLOCK_SIZE="4096" TYPE="btrfs"
/dev/md5p1: UUID="e6a6b944-0e65-48a5-9caf-e181a95ec119" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdk1: UUID="e6a6b944-0e65-48a5-9caf-e181a95ec119" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="64267629-6e1c-4eb5-a5bc-885bc94d23dd"
/dev/md11p1: UUID="03d2824f-a980-43d5-8c1f-3eb12afdff6b" BLOCK_SIZE="512" TYPE="xfs"
/dev/md8p1: UUID="226ecde4-d538-4acb-a790-7931a52f70eb" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdi1: UUID="65411ed0-2cc7-46ee-822f-c7b8542ba8d2" BLOCK_SIZE="4096" TYPE="reiserfs"
/dev/md1p1: UUID="c87871ca-05e8-4519-9179-447c1fb0de8c" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdg1: UUID="391043f8-a362-42b4-9ca2-4d0a290f7b04" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="4551df66-7274-4a79-95f8-06c7d71b7584"
/dev/md4p1: UUID="65411ed0-2cc7-46ee-822f-c7b8542ba8d2" BLOCK_SIZE="4096" TYPE="reiserfs"
/dev/loop0: TYPE="squashfs"
/dev/sde1: UUID="e4e076d9-a029-4d1f-b232-58c28e5f986f" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="b12b60ba-9824-4bad-b01a-64c18740bef0"
/dev/md10p1: UUID="e4e076d9-a029-4d1f-b232-58c28e5f986f" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdc1: UUID="226ecde4-d538-4acb-a790-7931a52f70eb" BLOCK_SIZE="512" TYPE="xfs"
/dev/sdj1: UUID="0753e2e4-27f9-463f-bb93-34f446824e08" BLOCK_SIZE="4096" TYPE="reiserfs"
/dev/md3p1: UUID="0753e2e4-27f9-463f-bb93-34f446824e08" BLOCK_SIZE="4096" TYPE="reiserfs"
/dev/sdh1: PARTUUID="16d2ff3f-6584-46dd-9c12-2da4a8484628"

root@BitpartNas:~# fdisk -l /dev/sdl
Disk /dev/sdl: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DL003-9VT1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdl1          64 3907029167 3907029104  1.8T 83 Linux
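One detail worth calling out in the output above: every healthy partition reports a TYPE= field, while /dev/sdh1 shows only a PARTUUID and sdl1 does not appear at all, meaning blkid found no recognizable filesystem signature there. A quick way to surface such partitions, shown here against two sample lines copied from the output above (in live use you would pipe blkid itself into the filter):

```shell
# Filter blkid output for partitions lacking a TYPE= field, i.e. no
# recognizable filesystem signature. Sample input copied from the
# output above; live usage would be: blkid | awk -F: '!/TYPE=/ ...'
blkid_sample='/dev/sdf1: UUID="03d2824f-a980-43d5-8c1f-3eb12afdff6b" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="4bbd886f-1b64-4713-b938-1fd5f7f7d00e"
/dev/sdh1: PARTUUID="16d2ff3f-6584-46dd-9c12-2da4a8484628"'

printf '%s\n' "$blkid_sample" |
  awk -F: '!/TYPE=/ { print $1 " has no filesystem signature" }'
# prints: /dev/sdh1 has no filesystem signature
```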
JorgeB Posted August 29, 2023

I don't see any blkid output for sdl; did you abort?
shabos Posted August 29, 2023 (Author)

This is the outcome of a -vL operation: no resolution, it seems. I'm out of ideas. Did I really just lose 2TB of data because of an interrupted unmount?
JorgeB Posted August 29, 2023

As mentioned, the disk was already unmounted at the last shutdown. If xfs_repair cannot fix the filesystem, you can restore from a backup. If there's no backup, you can try a file-recovery utility like UFS Explorer; there's a free trial to see if it finds anything, but you must pay to actually recover data.
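Before pointing any recovery tool at a failing disk, it is generally safer to work from an image so a failed attempt can't make things worse. A sketch that only builds the command string rather than running it, since both the source device and the destination path are assumptions for this system (and it presumes GNU ddrescue is installed):

```shell
# Sketch only: builds the imaging command rather than executing it.
# /dev/sdl and the destination paths are assumptions; the destination
# must be on a different disk with >= 2TB free.
img_cmd="ddrescue /dev/sdl /mnt/disks/scratch/sdl.img /mnt/disks/scratch/sdl.map"
echo "$img_cmd"
```

UFS Explorer (or any other recovery tool) can then be pointed at sdl.img instead of the disk itself, and the map file lets an interrupted ddrescue run resume where it left off.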
shabos Posted August 29, 2023 (Author)

6 minutes ago, JorgeB said:
Like mentioned the disk was already unmounted at last shutdown, if xfs_repair cannot fix the filesystem you can restore from a backup, if there's no backup you try a file recovery util like UFS explorer, there's a free trial to see if it finds anything, must pay to actually recover data.

My point is that this filesystem is far less robust than ReiserFS. I've had multiple power failures, system interrupts, and so on over more than 10 years, and the moment I move to XFS, this is the second drive to do this in two weeks (the first after a power failure); they corrupt instantly. How is that remotely better? The point of Unraid is to make sure I don't lose data; this seems like a massive step backwards.

As for UFS Explorer: am I going to have to use that in Windows?
JorgeB Posted August 29, 2023

XFS is usually pretty robust, but not infallible; you should have backups of anything important. And yes, it's probably easier to use Windows for UFS Explorer.
itimpi Posted August 29, 2023

2 hours ago, shabos said:
My point is this filesystem is SO much less robust than reiserfs

I agree that ReiserFS is great in its ability to recover from severe corruption. However, it has not been maintained for some time, and support for it is scheduled to be completely removed from the Linux kernel relatively soon. It also cannot be used on drives larger than 16TB, and modern drives are exceeding that size.