Reznap

Members
  • Posts: 14
Everything posted by Reznap

  1. That fixed it. Thank you! If you want the diags for anything, let me know.
  2. /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="2732-64F5" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/loop0: TYPE="squashfs"
     /dev/sdc1: UUID="db429a5d-4243-4626-854d-a4fd8515757d" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdg1: PARTUUID="fc394ee4-7101-4b4f-b841-76cf1b1861a2"

     Phase 1 - find and verify superblock...
             - block cache size set to 741352 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 1027262 tail block 1027262
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 2
             - agno = 1
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...

             XFS_REPAIR Summary    Tue Jan 31 08:17:04 2023

     Phase       Start           End             Duration
     Phase 1:    01/31 08:16:52  01/31 08:16:52
     Phase 2:    01/31 08:16:52  01/31 08:16:53  1 second
     Phase 3:    01/31 08:16:53  01/31 08:16:58  5 seconds
     Phase 4:    01/31 08:16:58  01/31 08:16:58
     Phase 5:    01/31 08:16:58  01/31 08:16:58
     Phase 6:    01/31 08:16:58  01/31 08:17:03  5 seconds
     Phase 7:    01/31 08:17:03  01/31 08:17:03

     Total run time: 11 seconds
     done
  3. Command line. I just did xfs_repair -n /dev/sde1 (and sdf1). I think I also did xfs_repair /dev/sde1 (and sdf1) and xfs_repair -l /dev/sde1 (and sdf1). A generic sketch of that check/repair sequence is included after this list.
  4. Yeah, I got lazy and did not want to rebuild parity three times, so I thought I could use the 'Faster' method in the wiki. My bad...
  5. Hi,

     I recently filled up my 6TB pool and decided to get some new hard drives: four new 18TB drives to replace my three 3TB drives.

     First I took out the current parity drive, replaced it with one of the new 18TB drives, and ran a parity sync. That finished and everything looked good. Shutdown.

     Then I installed the remaining three 18TB drives (I unplugged the cache drive, as I only have 6 SATA ports), loaded up UNRAID, and 'Disk-Cleared' the three new drives. That finished and everything looked good.

     Now I want to move the data from the two 3TB drives in the system to two of the new drives. Went to Tools -> New Config -> Main -> assigned the three 18TB drives to slots 1, 2, 3, assigned the two 3TB drives to slots 4, 5, and left parity unassigned. Start Array -> my two 3TB drives both say 'Unmountable: Unsupported or no file system'.

     Did some basic googling/searching and ran xfs_repair on them, and it shows nothing (see the read-only mount sketch after this list for another non-destructive check). Pulled one of the 18TB drives out and reinstalled the cache drive; the cache drive mounts fine. Both 3TB drives with all my data still say "Unmountable: Unsupported or no file system".

     What can I do to recover this? Thank you!

     skynet-diagnostics-20230130-1639.zip
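
A minimal sketch of the check/repair sequence described in post 3 above, assuming a placeholder partition name /dev/sdX1 (substitute your own device; on Unraid, disks that are assigned to the array are usually repaired through their /dev/mdX device with the array started in Maintenance Mode, so that parity stays in sync):

    # Placeholder device /dev/sdX1; confirm you have the right partition first.
    blkid /dev/sdX1

    # Dry run: -n only reports problems, it never writes to the disk.
    xfs_repair -n /dev/sdX1

    # Actual repair, once you are sure you are pointing at the right device.
    xfs_repair /dev/sdX1

    # If xfs_repair refuses to run because of a dirty log, -L zeroes the log;
    # this can discard the most recent metadata changes, so it is a last resort.
    xfs_repair -L /dev/sdX1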
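
For the "Unmountable: Unsupported or no file system" situation in post 5, a hedged sketch of a read-only look at such a disk before anything destructive is attempted; the device name and mount point here are examples only:

    # Mount the partition read-only; for XFS, norecovery skips log replay,
    # so nothing at all is written to the disk.
    mkdir -p /mnt/check
    mount -o ro,norecovery /dev/sdX1 /mnt/check

    # If this works, the data is still reachable and can be copied elsewhere.
    ls /mnt/check

    # Unmount again before starting the array or running xfs_repair.
    umount /mnt/check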