Posts posted by prettyhatem

  1. Okay, I ran it with -L. Output below (the command I used is sketched after the log):
     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    destroyed because the -L option was used.
            - scan filesystem freespace and inode maps...
    clearing needsrepair flag and regenerating metadata
    sb_fdblocks 297972367, counted 300647576
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
    inode 15764878235 - bad extent starting block number 4503567551346641, offset 0
    correcting nextents for inode 15764878235
    bad data fork in inode 15764878235
    cleared inode 15764878235
            - agno = 8
            - agno = 9
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 1
            - agno = 3
            - agno = 0
            - agno = 5
            - agno = 2
            - agno = 9
            - agno = 4
            - agno = 6
            - agno = 8
            - agno = 7
    entry "s_icejumper_attack_spike_02.uasset" at block 0 offset 3624 in directory inode 15764878092 references free inode 15764878235
    	clearing inode number in entry at offset 3624...
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
    bad hash table for directory inode 15764878092 (no data entry): rebuilding
    rebuilding directory inode 15764878092
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    Maximum metadata LSN (30:1702450) is ahead of log (1:2).
    Format log to cycle 33.
    done
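
     This is roughly the command that produced the log above (a sketch of my invocation, not copied from my shell history; on my Unraid version disk 5 maps to /dev/md5, but newer releases use /dev/md5p1, so the device path is an assumption):

    ```
    # Array started in maintenance mode, run from the Unraid console.
    # /dev/md5 is assumed to be disk 5's md device; adjust for your release.
    xfs_repair -L /dev/md5   # -L zeroes the log, accepting possible data loss
    ```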

     

  2. I let the parity check finish, then unmounted and remounted in maintenance mode. I ran xfs_repair and got this log:
     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.

    I am assuming I should follow the instructions? Start the array out of maintenance mode so the log gets replayed, then stop it and re-run the repair? Something like the sketch below is what I have in mind.
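
    (A sketch only; the device path and mount point are assumptions, and the Unraid GUI does the mounting itself when the array is started normally.)

    ```
    # Mounting the filesystem replays the XFS journal; unmount before repairing.
    # /dev/md5 (disk 5) and /mnt/test are assumptions for illustration.
    mkdir -p /mnt/test
    mount /dev/md5 /mnt/test   # a successful mount replays the log
    umount /mnt/test
    xfs_repair /dev/md5        # then re-run the repair, hopefully without -L
    ```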

  3. I had some odd things happening on my Unraid server. I had physically installed a new disk and was running a preclear on it; at about 90% it paused and could not progress. In the UI I would often get timeouts, and the Docker list wouldn't fully populate. I attempted to stop the array, but it looked like it stalled on stopping Docker. I tried to kill the Docker containers manually, but that didn't work, so at some point I decided to force-restart the server. It came back up, I started the array, and a parity check began since it was an unclean shutdown. Now I'm noticing that disk 5 shows "Unmountable: Unsupported or no file system". I have yet to add the new disk to the array, and I'm unsure how to proceed. Do I need to stop the parity check, unmount the drive, and run a filesystem check of some sort (sketched below)?

    EDIT: I am just now noticing that all of my Docker containers show "not available" under their Versions.

    Appreciate any advice!

    fileserver-diagnostics-20240304-1637.zip
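
     For what it's worth, the kind of non-destructive check I had in mind looks like this (a sketch; it assumes the array is started in maintenance mode and that disk 5 is /dev/md5, which may differ by release):

    ```
    # -n is a read-only dry run: it reports problems but changes nothing.
    xfs_repair -n /dev/md5
    ```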

  4. I think this is due to my cache: `zpool status -v` shows corruption. These SSDs might be dying (next steps I'm considering are sketched after the output).
    ```
      pool: cache
     state: ONLINE
    status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
    action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
       see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
    config:

        NAME        STATE     READ WRITE CKSUM
        cache       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb1    ONLINE       0     0 31.1K
            sde1    ONLINE       0     0 31.1K

    errors: Permanent errors have been detected in the following files:

            /mnt/cache/docker-xfs.img
    ```
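
     The next steps I'm considering look roughly like this (a sketch; "cache" is the pool name from the output above, and the smartctl device paths are assumptions based on the mirror members shown):

    ```
    # Re-verify the whole pool and check whether the SSDs are actually failing.
    zpool scrub cache       # re-read and checksum every block in the pool
    zpool status -v cache   # watch progress and the updated error list
    smartctl -a /dev/sdb    # SMART health of each mirror member
    smartctl -a /dev/sde
    zpool clear cache       # reset the error counters once the file is restored
    ```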
