Everything posted by prettyhatem

  1. Okay, I started the array back up and it looks like it is seeing the disk just fine now. Here is the new set of diags: fileserver-diagnostics-20240305-1327.zip
  2. Okay, I ran it with -L. Output:

     ```
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being
     destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
     clearing needsrepair flag and regenerating metadata
     sb_fdblocks 297972367, counted 300647576
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - agno = 6
             - agno = 7
     inode 15764878235 - bad extent starting block number 4503567551346641, offset 0
     correcting nextents for inode 15764878235
     bad data fork in inode 15764878235
     cleared inode 15764878235
             - agno = 8
             - agno = 9
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 1
             - agno = 3
             - agno = 0
             - agno = 5
             - agno = 2
             - agno = 9
             - agno = 4
             - agno = 6
             - agno = 8
             - agno = 7
     entry "s_icejumper_attack_spike_02.uasset" at block 0 offset 3624 in directory inode 15764878092 references free inode 15764878235
             clearing inode number in entry at offset 3624...
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
     bad hash table for directory inode 15764878092 (no data entry): rebuilding
     rebuilding directory inode 15764878092
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (30:1702450) is ahead of log (1:2).
     Format log to cycle 33.
     done
     ```
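
     For reference, this is the general shape of the repair invocation (a sketch, not a transcript of my exact commands; /dev/md5 is an assumption -- substitute whatever device Unraid reports for disk 5):

     ```
     # Run from a terminal with the array started in maintenance mode.
     # /dev/md5 is an assumed device name for disk 5 -- check the GUI for the real one.
     xfs_repair -n /dev/md5    # no-modify pass: only report what would be changed
     xfs_repair -L /dev/md5    # zero the dirty log and repair; this produced the output above
     ```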
  3. I let the parity check finish, then unmounted and remounted in maintenance mode. I ran xfs_repair and got this log:

     ```
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ERROR: The filesystem has valuable metadata changes in a log which needs to
     be replayed.  Mount the filesystem to replay the log, and unmount it before
     re-running xfs_repair.  If you are unable to mount the filesystem, then use
     the -L option to destroy the log and attempt a repair.
     Note that destroying the log may cause corruption -- please attempt a mount
     of the filesystem before doing this.
     ```

     I am assuming I should follow the instructions? Start the array out of maintenance mode, stop it, and re-run the repair?
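
     As I read it, the error is asking for something like the sequence below (a sketch only; on Unraid the mount/unmount normally happens by starting and then stopping the array rather than by hand, and /dev/md5 and /mnt/disk5 are assumptions for disk 5):

     ```
     # Sketch of the sequence the error message describes; device and mount point are assumptions.
     mount /dev/md5 /mnt/disk5    # mounting lets XFS replay its journal
     umount /mnt/disk5            # unmount once the log has been replayed
     xfs_repair /dev/md5          # then re-run the repair against a clean log
     ```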
  4. I had some odd things happening on my Unraid server. I had physically installed a new disk and was running a Preclear on it. At about 90% it paused and could not make any further progress. When looking at the UI, I would often get timeouts and it wouldn't fully populate the Docker list. I attempted to stop the array, but it looked like it stalled on stopping Docker. I attempted to kill the Docker containers manually, but that didn't work. At some point I decided I should just force-restart the server. It came back up and I started the array; a parity check started since it was an unclean shutdown. Now I am noticing disk 5 is showing "Unmountable: Unsupported or no file system". I have yet to add the new disk to the array, but now I am unsure how to proceed. Do I need to stop the parity check, unmount the drive, and do a filesystem check of some sort? EDIT: I am just now noticing that all of my Docker containers show "not available" under their Versions. Appreciate any advice! fileserver-diagnostics-20240304-1637.zip
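
     In the meantime, a sketch of the read-only checks I can run from a terminal without touching the array (the device name is an assumption for the drive behind disk 5):

     ```
     # Read-only checks only; /dev/sdX is a placeholder for disk 5's physical device.
     smartctl -a /dev/sdX           # SMART health of the drive showing as unmountable
     tail -n 200 /var/log/syslog    # look for XFS or controller errors around the forced reboot
     docker ps -a                   # see whether the Docker daemon is responding at all
     ```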
  5. I think this is due to my cache. Doing a `zpool status -v`, I see corruption; these SSDs might be dying.

     ```
       pool: cache
      state: ONLINE
     status: One or more devices has experienced an error resulting in data
             corruption.  Applications may be affected.
     action: Restore the file in question if possible.  Otherwise restore the
             entire pool from backup.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
     config:

             NAME          STATE     READ WRITE CKSUM
             cache         ONLINE       0     0     0
               mirror-0    ONLINE       0     0     0
                 sdb1      ONLINE       0     0 31.1K
                 sde1      ONLINE       0     0 31.1K

     errors: Permanent errors have been detected in the following files:

             /mnt/cache/docker-xfs.img
     ```
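
     A sketch of the checks I am planning to use to decide whether the SSDs themselves are failing or whether this was a one-off hit to the image file (sdb/sde are the mirror members from the status output above):

     ```
     # Re-read the whole pool; a scrub also repairs anything it can from the mirror copy.
     zpool scrub cache
     zpool status -v cache    # afterwards, watch the CKSUM counters and the scrub result
     # SMART data for each SSD in the mirror
     smartctl -a /dev/sdb
     smartctl -a /dev/sde
     ```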
  6. Okay, so I fixed the shfs issue. I had to recreate my cache because it seemed like a corrupt filesystem. I have restarted in Safe Mode, and now nginx is not crashing when starting the array! But I have now had two hard locks on the machine; I have included my diags. fileserver-diagnostics-20231026-1718.zip
  7. I just updated from 6.11 to 6.12.4, and after the initial reboot I could access the UI. After starting the Array, though, the UI stops working. I can SSH in and see that the array is up, shares are working, and Docker is working. To test, I restarted the server and this time started the Array with Docker disabled. Same results. I have included the diagnostics. fileserver-diagnostics-20231005-1039.zip