
Singvaldsen

Members · 9 posts

  1. Unable to find a valid superblock, so I'm using Hetman Partition Recovery to save some of the files to an external drive. The next step is to format the unmountable drive, mount it, and copy the rescued files back into the array. I have spent way too much time on this, but I have learned a lot. Thanks for your help, JorgeB — it would have been a lot more difficult and time-consuming without you.
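     The copy-back step described above can be sketched in shell. The paths on a real server would be something like /mnt/disks/<rescue-drive> and /mnt/user/<share> (hypothetical names); the demo below uses throwaway mktemp directories so it is safe to run anywhere:

     ```shell
     # Demo of the copy-back step on throwaway directories. In practice SRC
     # would be the external drive holding the recovered files and DST a share
     # on the re-formatted array disk (both paths here are stand-ins).
     SRC=$(mktemp -d)   # stands in for the external rescue drive
     DST=$(mktemp -d)   # stands in for the array share

     echo "rescued file" > "$SRC/example.txt"

     # -a preserves ownership, permissions and timestamps; "$SRC/." copies the
     # directory's contents rather than the directory itself.
     cp -a "$SRC/." "$DST/"

     # Verify the copy before deleting or formatting anything on the source.
     diff -r "$SRC" "$DST" && echo "copy verified"
     ```

     Verifying with diff -r (or a checksum pass) before touching the source is cheap insurance when the source is the only surviving copy.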
  2. This is the log for the drive:

     Apr 20 19:02:39 Tower kernel: ata25: SATA max UDMA/133 abar m8192@0xfc480000 port 0xfc480a00 irq 79
     Apr 20 19:02:39 Tower kernel: ata25: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
     Apr 20 19:02:39 Tower kernel: ata25.00: ATA-10: ST8000DM004-2CX188, 0001, max UDMA/133
     Apr 20 19:02:39 Tower kernel: ata25.00: 15628053168 sectors, multi 16: LBA48 NCQ (depth 32), AA
     Apr 20 19:02:39 Tower kernel: ata25.00: configured for UDMA/133
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] 4096-byte physical blocks
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] Write Protect is off
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] Mode Sense: 00 3a 00 00
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] Preferred minimum I/O size 4096 bytes
     Apr 20 19:02:39 Tower kernel: sdh: sdh1
     Apr 20 19:02:39 Tower kernel: sd 25:0:0:0: [sdh] Attached SCSI disk
     Apr 20 19:03:10 Tower emhttpd: ST8000DM004-2CX188_ZR12SPSA (sdh) 512 15628053168
     Apr 20 19:03:11 Tower emhttpd: read SMART /dev/sdh
     Apr 20 19:04:16 Tower emhttpd: ST8000DM004-2CX188_ZR12SPSA (sdh) 512 15628053168
     Apr 20 19:04:16 Tower kernel: mdcmd (2): import 1 sdh 64 7814026532 0 ST8000DM004-2CX188_ZR12SPSA
     Apr 20 19:04:16 Tower kernel: md: import disk1: (sdh) ST8000DM004-2CX188_ZR12SPSA size: 7814026532
     Apr 20 19:04:16 Tower emhttpd: read SMART /dev/sdh
     Apr 20 20:11:27 Tower emhttpd: read SMART /dev/sdh
     Apr 20 20:11:28 Tower emhttpd: ST8000DM004-2CX188_ZR12SPSA (sdh) 512 15628053168
     Apr 20 20:11:28 Tower kernel: mdcmd (2): import 1 sdh 64 7814026532 0 ST8000DM004-2CX188_ZR12SPSA
     Apr 20 20:11:28 Tower kernel: md: import disk1: (sdh) ST8000DM004-2CX188_ZR12SPSA size: 7814026532
     Apr 20 20:11:28 Tower emhttpd: read SMART /dev/sdh
     Apr 20 22:27:05 Tower emhttpd: read SMART /dev/sdh
     Apr 20 22:27:17 Tower emhttpd: ST8000DM004-2CX188_ZR12SPSA (sdh) 512 15628053168
     Apr 20 22:27:17 Tower emhttpd: read SMART /dev/sdh
     Apr 20 22:30:29 Tower emhttpd: read SMART /dev/sdh
     Apr 20 22:30:30 Tower emhttpd: ST8000DM004-2CX188_ZR12SPSA (sdh) 512 15628053168
     Apr 20 22:30:30 Tower emhttpd: read SMART /dev/sdh
     Apr 20 22:30:53 Tower unassigned.devices: Disk with ID 'ST8000DM004-2CX188_ZR12SPSA (sdh)' is not set to auto mount.
     Apr 21 16:08:14 Tower unassigned.devices: Mounting partition 'sdh1' at mountpoint '/mnt/disks/ZR12SPSA'...
     Apr 21 16:08:14 Tower unassigned.devices: Mount cmd: /sbin/mount -t 'xfs' -o rw,noatime,nodiratime '/dev/sdh1' '/mnt/disks/ZR12SPSA'
     Apr 21 16:08:14 Tower kernel: XFS (sdh1): Mounting V5 Filesystem
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Starting recovery (logdev: internal)
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Metadata CRC error detected at xfs_refcountbt_read_verify+0x12/0x5a [xfs], xfs_refcountbt block 0x80000020
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Unmount and run xfs_repair
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): First 128 bytes of corrupted metadata buffer:
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): metadata I/O error in "xfs_btree_read_buf_block.constprop.0+0x7a/0xc7 [xfs]" at daddr 0x80000020 len 8 error 74
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Failed to recover leftover CoW staging extents, err -117.
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Filesystem has been shut down due to log error (0x2).
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Please unmount the filesystem and rectify the problem(s).
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Ending recovery (logdev: internal)
     Apr 21 16:08:15 Tower kernel: XFS (sdh1): Error -5 reserving per-AG metadata reserve pool.
     Apr 21 16:08:18 Tower unassigned.devices: Mount of 'sdh1' failed: 'mount: /mnt/disks/ZR12SPSA: can't read superblock on /dev/sdh1. dmesg(1) may have more information after failed mount system call.'
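     The kernel log above says "Unmount and run xfs_repair". A hedged sketch of that sequence, with the device name as a placeholder and a guard so it does nothing unless the placeholder is replaced with a real block device:

     ```shell
     # Sketch of the repair sequence the kernel log asks for. /dev/sdX1 is a
     # placeholder (replace with the affected partition, e.g. /dev/sdh1); the
     # guard makes this a no-op until that is done.
     DEV=/dev/sdX1

     if [ -b "$DEV" ]; then
         umount "$DEV" 2>/dev/null || true   # xfs_repair requires an unmounted filesystem
         xfs_repair -n "$DEV"                # -n: dry run, report problems without writing
         xfs_repair -v "$DEV"                # actual repair, verbose
         # Only if xfs_repair refuses to run because the log is dirty; zeroing
         # the log discards the newest metadata transactions:
         # xfs_repair -L "$DEV"
     else
         echo "$DEV is a placeholder, not a real block device; skipping"
     fi
     ```

     The -n dry run is a safe first look; -L is a last resort precisely because it throws away unreplayed log updates.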
  3. It will not mount with UD either. Do I need to change the UUID or something else?
  4. Not sure I understand the repair summary from running "xfs_repair -v /dev/sdi1". Is recovery software the only option I'm left with? Thanks for your help! You guys are awesome, would not have been able to get up and running with this without the forum community.
  5. Ran it from the command line instead, and got this:

     root@Tower:~# xfs_repair -v /dev/sdi1
     Phase 1 - find and verify superblock...
             - block cache size set to 2826080 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 664351 tail block 664351
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 6
             - agno = 2
             - agno = 5
             - agno = 3
             - agno = 4
             - agno = 7
             - agno = 1
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...

             XFS_REPAIR Summary    Thu Apr 20 22:28:27 2023

     Phase      Start           End             Duration
     Phase 1:   04/20 22:28:00  04/20 22:28:00
     Phase 2:   04/20 22:28:00  04/20 22:28:00
     Phase 3:   04/20 22:28:00  04/20 22:28:18  18 seconds
     Phase 4:   04/20 22:28:18  04/20 22:28:18
     Phase 5:   04/20 22:28:18  04/20 22:28:18
     Phase 6:   04/20 22:28:18  04/20 22:28:27  9 seconds
     Phase 7:   04/20 22:28:27  04/20 22:28:27

     Total run time: 27 seconds
     done

     Started the array again, and the issue prevails:
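     One possible reason the issue persists even though the repair above finished cleanly: Unraid accesses array disks through its md layer, so the usual guidance is to run the repair against the slot's md device (disk 1 appears as /dev/md1 in the blkid output below; newer Unraid releases name it /dev/md1p1) with the array started in maintenance mode, so parity is updated along with the repair. Repairing the raw /dev/sdi1 bypasses that layer. A hedged sketch, guarded by a hypothetical RUN_REPAIR flag so pasting it does nothing by accident:

     ```shell
     # Hedged sketch: repair the md device for the array slot, not the raw
     # /dev/sdX1. RUN_REPAIR is a hypothetical safety flag, not an Unraid
     # feature; the command only runs when it is explicitly set to 1 and the
     # device exists.
     DEV=/dev/md1   # disk 1's md device on this Unraid version

     if [ "${RUN_REPAIR:-0}" = "1" ] && [ -b "$DEV" ]; then
         xfs_repair -v "$DEV"   # array must be in maintenance mode
     else
         echo "RUN_REPAIR not set or $DEV missing; skipping"
     fi
     ```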
  6. Upgraded to v6.12-rc3. Looking for the superblock now:
  7. root@Tower:~# blkid
     /dev/sdb1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/sdf1: UUID="a4dbad2b-3c5c-4176-84f9-544e6aca9417" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="128897a9-95ab-4364-9cf8-8f79a3cef61a"
     /dev/nvme0n1p1: UUID="b70a94fd-fae9-4a3d-9df3-86926581b33d" UUID_SUB="69e8e3ca-f680-4894-8270-1fb71e2e6e40" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/sdd1: UUID="26f4779b-a4b1-4f8f-a155-22d335315def" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="e56cbf68-e10a-4e39-9b98-093165294ae8"
     /dev/sdm1: UUID="36051edd-62cf-45ac-95e4-42055e326dc4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="9a15989a-93de-4bdf-af1f-db2ab88dddf6"
     /dev/sdk1: UUID="f6de9002-1da3-4ce5-b285-5fd228cd5808" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e90888a9-2b51-4632-bbf4-4d98b6759812"
     /dev/sdi1: UUID="cdf95f3d-3f30-4526-be82-2eaad4972fa4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="2f9bea3f-3d77-4745-9647-98c0d447bbba"
     /dev/sdg1: UUID="20f925e2-e44f-4bab-a431-3876f2f58020" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="9df7b061-da75-4105-b5c6-bd866982ee21"
     /dev/loop0: TYPE="squashfs"
     /dev/sde1: UUID="2605216f-5547-488c-9469-9ccce8ed50f1" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="a8e2a8f6-6b66-4650-a78b-3b732b0343bc"
     /dev/sdl1: UUID="2df6b904-8e7e-4a8d-bfb3-c2f933f37bc1" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="8ea023d4-df63-429b-9704-8a7b62dd0ac4"
     /dev/sdj1: UUID="376f7d80-c1eb-4587-8d4b-d5e831821dc8" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="1ca81f66-7807-4e4a-9f4f-64a7e332a0d7"
     /dev/sdh1: UUID="9368cb5b-262c-4ebb-beef-f64be5fa6548" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="4c7321ad-d17f-4e90-a0d4-1370f88212a3"
     /dev/md8: UUID="26f4779b-a4b1-4f8f-a155-22d335315def" BLOCK_SIZE="4096" TYPE="xfs"
     /dev/md6: UUID="20f925e2-e44f-4bab-a431-3876f2f58020" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md4: UUID="376f7d80-c1eb-4587-8d4b-d5e831821dc8" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md2: UUID="f6de9002-1da3-4ce5-b285-5fd228cd5808" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md9: UUID="2605216f-5547-488c-9469-9ccce8ed50f1" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md7: UUID="9368cb5b-262c-4ebb-beef-f64be5fa6548" BLOCK_SIZE="4096" TYPE="xfs"
     /dev/sdc1: UUID="cdf95f3d-3f30-4526-be82-2eaad4972fa4" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="55c38daf-f0fd-4e7d-a6b2-347a1c8be4c1"
     /dev/md5: UUID="36051edd-62cf-45ac-95e4-42055e326dc4" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md3: UUID="2df6b904-8e7e-4a8d-bfb3-c2f933f37bc1" BLOCK_SIZE="512" TYPE="xfs"
     /dev/md1: UUID="cdf95f3d-3f30-4526-be82-2eaad4972fa4" BLOCK_SIZE="512" TYPE="xfs"
     /dev/loop2: UUID="4e67449a-598a-4abe-9c8b-a4fdb03ceaaf" UUID_SUB="3a3616ca-3692-4e1f-b8ad-6ce2c23cffcf" BLOCK_SIZE="4096" TYPE="btrfs"
     /dev/loop3: UUID="35307c1b-ed46-45e1-90a1-c7fa957f0d78" UUID_SUB="233f9d7a-a4c9-46c4-b25b-9c409094142a" BLOCK_SIZE="4096" TYPE="btrfs"
     root@Tower:~# fdisk -l /dev/sdi
     Disk /dev/sdi: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: ST8000DM004-2CX1
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 8938E43D-0C64-4B9C-A181-A957612B4764

     Device     Start         End     Sectors  Size Type
     /dev/sdi1     64 15628053134 15628053071  7.3T Linux filesystem
     root@Tower:~#
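     One detail worth noting in the blkid output: /dev/sdi1 (the old disk, now in UD) and /dev/sdc1 report the same filesystem UUID (cdf95f3d-3f30-4526-be82-2eaad4972fa4), which /dev/md1 mirrors — expected for a rebuilt byte-for-byte copy. XFS refuses to mount a filesystem whose UUID is already mounted, so this could be part of why UD cannot mount the old disk; `xfs_admin -U generate <device>` is the standard way to give a cloned XFS filesystem a fresh UUID (run it on the UD copy, never the array copy). A small pipeline to spot such duplicates, fed with abridged sample lines from the output above:

     ```shell
     # Find filesystem UUIDs reported by more than one real partition.
     # md devices mirror their member disk's UUID, so only /dev/sd* lines are
     # considered. Sample lines are abridged from the blkid output above.
     blkid_sample='/dev/sdf1: UUID="a4dbad2b-3c5c-4176-84f9-544e6aca9417" TYPE="xfs"
     /dev/sdi1: UUID="cdf95f3d-3f30-4526-be82-2eaad4972fa4" TYPE="xfs"
     /dev/sdc1: UUID="cdf95f3d-3f30-4526-be82-2eaad4972fa4" TYPE="xfs"
     /dev/md1: UUID="cdf95f3d-3f30-4526-be82-2eaad4972fa4" TYPE="xfs"'

     # sort + uniq -d prints only values that occur more than once.
     dupes=$(printf '%s\n' "$blkid_sample" \
         | grep '/dev/sd' \
         | grep -o 'UUID="[^"]*"' \
         | sort | uniq -d)
     echo "duplicate UUIDs: $dupes"
     ```

     On the live server the same pipeline works against `blkid` output directly instead of the sample variable.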
  8. I'm new to Unraid and set up my first server in January. I have a 2 TB NVMe cache drive and ten 8 TB spinning disks with XFS, one of which is the parity drive. A week ago I got "Unmountable: wrong or no file system" on disk 1 after running the parity check. The first thing I did was run the SMART self-test, which the disk passed. Then I put the array in maintenance mode and ran Check Filesystem Status from the GUI; it took hours, but it could not locate the primary or secondary superblock and exited. Then I did a data rebuild without the disk, but it did not recover my data. Then I put in a pre-cleared 8 TB drive and started the array again. It is rebuilding now, but it is still saying "Unmountable: wrong or no file system" while doing so, and I do not have high expectations. I have attached the diagnostics file. Unfortunately, due to not knowing any better, the server was rebooted before I extracted the file. I feel like I am running out of options to recover the data on the disk. I had backups of the most critical data, but it would still be nice to get it all back. I have not formatted the initial drive. I have tried to mount it in UD, but it will not mount. However, the disk has shown read activity since it was taken out of the array and became visible in UD: 5,000,000 reads so far, without being mounted... I would greatly appreciate any help or pointers in the right direction, as I have exhausted my options. tower-diagnostics-20230420-1632.zip