Crocs

Members
  • Posts: 33
  • Joined
  • Last visited

Everything posted by Crocs

  1. Never mind, reran and we're back. Thank you!!!
  2. Still failed.... motherlode-diagnostics-20230816-1319.zip
  3. I am on 6.9.2. I've heard others say to upgrade to 6.1.0 but wasn't sure.
  4. This is drive1's first run with the "-v" tag:

     Phase 1 - find and verify superblock...
       - block cache size set to 1415808 entries
     Phase 2 - using internal log
       - zero log...
     * ERROR: mismatched uuid in log
     *   SB : edfe3834-877a-426a-82ce-7b77aafb082e
     *   log: d68381a6-f261-4fd4-aca3-446415ce8102
     zero_log: head block 163732 tail block 163732
       - scan filesystem freespace and inode maps...
     sb_icount 64, counted 32
     sb_ifree 61, counted 29
     sb_fdblocks 3905948049, counted 3905948053
       - found root inode chunk
     Phase 3 - for each AG...
       - scan and clear agi unlinked lists...
       - process known inodes and perform inode discovery...
       - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
       - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
       - setting up duplicate extent list...
       - check for inodes claiming duplicate blocks...
       - agno = 0 - agno = 1 - agno = 2 - agno = 9 - agno = 5 - agno = 13 - agno = 4 - agno = 3 - agno = 7 - agno = 11 - agno = 8 - agno = 12 - agno = 10 - agno = 14 - agno = 6
     Phase 5 - rebuild AG headers and trees...
       - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
       - reset superblock...
     Phase 6 - check inode connectivity...
       - resetting contents of realtime bitmap and summary inodes
       - traversing filesystem ...
       - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
       - traversal finished ...
       - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     SB summary counter sanity check failed
     Metadata corruption detected at 0x47518b, xfs_sb block 0x0/0x200
     libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x200
     xfs_repair: Releasing dirty buffer to free list!
     xfs_repair: Refusing to write a corrupt buffer to the data device!
     xfs_repair: Lost a write to the data device!
     fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair.

     This is drive1's second run; I'm getting a similar output from drive2:

     Phase 1 - find and verify superblock...
       - block cache size set to 1415808 entries
     Phase 2 - using internal log
       - zero log...
     * ERROR: mismatched uuid in log
     *   SB : edfe3834-877a-426a-82ce-7b77aafb082e
     *   log: d68381a6-f261-4fd4-aca3-446415ce8102
     zero_log: head block 163732 tail block 163732
       - scan filesystem freespace and inode maps...
     sb_icount 64, counted 32
     sb_ifree 61, counted 29
     sb_fdblocks 3905948049, counted 3905948053
       - found root inode chunk
     Phase 3 - for each AG...
       - scan and clear agi unlinked lists...
       - process known inodes and perform inode discovery...
       - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
       - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
       - setting up duplicate extent list...
       - check for inodes claiming duplicate blocks...
       - agno = 0 - agno = 2 - agno = 4 - agno = 5 - agno = 3 - agno = 10 - agno = 11 - agno = 1 - agno = 7 - agno = 6 - agno = 8 - agno = 12 - agno = 13 - agno = 14 - agno = 9
     Phase 5 - rebuild AG headers and trees...
       - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
       - reset superblock...
     Phase 6 - check inode connectivity...
       - resetting contents of realtime bitmap and summary inodes
       - traversing filesystem ...
       - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
       - traversal finished ...
       - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     SB summary counter sanity check failed
     Metadata corruption detected at 0x47518b, xfs_sb block 0x0/0x200
     libxfs_bwrite: write verifier failed on xfs_sb bno 0x0/0x200
     xfs_repair: Releasing dirty buffer to free list!
     xfs_repair: Refusing to write a corrupt buffer to the data device!
     xfs_repair: Lost a write to the data device!
     fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair.
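     For reference, this is roughly how a repair pass like the one above gets re-run from the Unraid terminal. It's only a sketch: it assumes the array is started in Maintenance mode and that the affected disk is /dev/md1, so check the actual device name in the GUI first.

       # Dry run: report problems without writing anything
       xfs_repair -n /dev/md1
       # Verbose repair pass, as in the output above
       xfs_repair -v /dev/md1
       # If xfs_repair refuses to run because of a dirty or mismatched log, -L zeroes
       # the log (the most recent in-flight transactions may be lost):
       # xfs_repair -L /dev/md1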
  5. I'm in need of some big help here... I had a power outage that caused 2 of my 3 drives to show as "unmountable" after a reboot. I'm not sure what next steps to take to ensure I don't lose data. I have since ordered a UPS. motherlode-diagnostics-20230815-0927.zip
  6. Like the title says, I've been having issues on 6.11.5. I blew it off as bad hardware (I was planning a pretty sizable upgrade, so I dealt with it). After getting my new system up and running (new CPU, MB, RAM, PSU), the issue continues. The only thing I can really think of is that my cache drive is failing, but SMART shows it's fine. I've mirrored syslogs to flash, but nothing is jumping out at me. Wondering if any of you brainiacs can tell what's going on. tower-diagnostics-20230805-1713.zip syslog.txt
  7. I'm getting a 'command not found' when I try to run that. I'm excited for that update; I saw the info on GitHub! I'm working on a script right now, so the GUI wouldn't be helpful in this use case.
  8. Is there a way to check the status of the mover from the terminal, either in this plugin or natively? Not a progress percentage, just whether it's running or not?
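     (For anyone scripting this, a minimal sketch of one way to check follows. It assumes the stock mover script lives at /usr/local/sbin/mover; adjust the path if your install differs.)

       # Exits 0 if a mover process is currently running, non-zero otherwise
       if pgrep -f /usr/local/sbin/mover > /dev/null; then
           echo "mover is running"
       else
           echo "mover is not running"
       fi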
  9. https://discord.com/invite/AFwz8nE7BK Can you come to our Discord server so we can diagnose further?
  10. AS OF 3/30/2022 CHANGE YOUR TubeArchivist-ES CONTAINER REPO TO "bbilly1/tubearchivist-es:latest" FOR AUTOMATIC UPDATES GOING FORWARD
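      (If you'd rather confirm the change from the terminal, something like this should pull the new repo; whether it's picked up automatically depends on your template settings.)

        docker pull bbilly1/tubearchivist-es:latest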
  11. Can you share a screenshot of your ES and TA container setups?
  12. Hey!
      - Just do a rescan of the filesystem in TA and that will remove the video. From there you can redownload it in a different format.
      - Output naming is not a feature that TA supports yet. Since it was built as an "all-in-one" solution, it was designed from the ground up on the assumption that we'd never need this feature. As things change and people request different features, it's something we're looking into, but for the time being it's simply not possible.
      - If you go to the channel in TA, you can click "configure" and change the settings per channel.
      If you need any more immediate support, come on over to our Discord!
  13. Likely root cause: java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes, which looks like a permissions error. Make sure your UID and GID are set to 1000 and 0, and that your mount point (appdata) has the same permissions.
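      As a rough sketch of the fix from the Unraid terminal (the host path here is an assumption, so point it at whatever folder you actually map into /usr/share/elasticsearch/data):

        # Give UID 1000 / GID 0 ownership of the mapped ES data folder
        chown -R 1000:0 /mnt/user/appdata/tubearchivist-es/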
  14. That's pretty much how I have the updated template. I used '/mnt/user/appdata/warrior/' to map it to the appdata share vs directly to the cache drive.
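      (Purely for illustration, that mapping would look roughly like this as a docker run flag; the container-side path is a placeholder, so use whatever data directory the warrior image actually expects.)

        docker run -d --name warrior \
          -v /mnt/user/appdata/warrior/:/data/ \
          archiveteam/warrior-dockerfile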
  15. https://github.com/bbilly1/tubearchivist#redis-on-a-custom-port
  16. Your Redis variable is set up wrong. It should be just the IP address, '192.168.1.70', with no 'http://' and no '6379'. Let me know if that works!
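      (As a sketch, the container variables would end up looking something like this; the variable names assume the TubeArchivist template defaults.)

        REDIS_HOST=192.168.1.70   # just the IP, no scheme and no port
        REDIS_PORT=6379           # only needed if Redis runs on a non-default port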
  17. Good catch! I updated the template, you should be able to force recheck and get an update.
  18. Great question, ES has already addressed it here: https://discuss.elastic.co/t/apache-log4j2-remote-code-execution-rce-vulnerability-cve-2021-44228-esa-2021-31/291476 Basically, it was never vulnerable. However, we just pushed an update to 7.16.1, the latest stable release. Thank you for bringing that up
  19. Nope, all three dependencies are in CA. Check the notes in the TA container.
  20. Official Repos: https://hub.docker.com/r/archiveteam/warrior-dockerfile/ https://github.com/ArchiveTeam/warrior-dockerfile
  21. I posted screenshots above for the correct templates needed.
  22. I'm not seeing anything obvious either. The only thing I can think of is that you're using different IP addresses for each container. Not being the dev of the project, I can only assume that this might be your issue.
  23. Check the screenshots above. Do you have Redis and ES mapped to the correct volumes inside the TubeArchivist directory?
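      (For reference, a typical appdata layout would look something like the sketch below; the exact folder names and the Redis data path are assumptions, so match them to your own templates.)

        /mnt/user/appdata/tubearchivist/
        ├── es/      # mapped into the ES container at /usr/share/elasticsearch/data
        └── redis/   # mapped into the Redis container's data directory (usually /data)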