DuzAwe


Everything posted by DuzAwe

  1. Disk 3

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 1
             - agno = 5
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     Disk 5

     Phase 1 - find and verify superblock...
     bad primary superblock - bad CRC in superblock !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     verified secondary superblock...
     would write modified primary superblock
     Primary superblock would have been modified.
     Cannot proceed further in no_modify mode.
     Exiting now.
  2. It was a new disk. I have the old disk next to me.
  3. I was rebuilding a healthy system. Disk 5 was a 6TB -> 10TB upgrade started yesterday. The box froze today shortly before my post, and on reboot it said disk 3 was unmountable. As it started in normal mode before I got to the GUI, it must have restored some content to it. So I need to get disk 3 mounted again and then restart the rebuild of disk 5.
  4. I just noticed disk 5 is also unmountable, and it's the one I have been rebuilding.
  5. New fresh diag: thelibrary-diagnostics-20240118-1854.zip
  6. Sorry, panic taking over. Should I start in maintenance mode or normal?
  7. Hi, so I had a crash, and when I turned everything back on, disk 3 was dead. I should say I was rebuilding disk 5 during this. So maybe it is a disk dying, but it has had a clean bill of health in all my tests. I have run repair, but when it all starts back up it still says "device disabled, contents emulated". Help please.

     Repair -n

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan (but don't clear) agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 2
             - agno = 1
             - agno = 3
             - agno = 4
             - agno = 5
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.

     Repair

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - agno = 4
             - agno = 5
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 4
             - agno = 2
             - agno = 3
             - agno = 5
             - agno = 1
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done

     thelibrary-diagnostics-20240118-1837.zip
  8. For me it was Unraid & TDAR. So Unraid was just using higher-than-normal resources from the issues above, which was fine most of the time. When TDAR did anything, though, it would just die. All my other Dockers and plugins did nothing to the overall usage.
  9. @sunbear Still hunting. So far Plex and the ARR suite have a PASS from me. I have quite a few Dockers, so it will be a number of days before I have gone through them all. I have also ruled out all my plugins.

     @JorgeB When I click on the first drive in the ZFS array/cluster, it shows an incomplete interface like above. If I go to any other disk in the machine (Unraid array, BTRFS, and the other ZFS disks), I get the normal interface, i.e. scrub options, SMART info, the free-space setting. But for the first disk in the ZFS pool, all these options are missing.
  10. @JorgeB Should I spin up another thread for the missing GUI for my ZFS pool?
  11. So, if I don't have a crash today, is the course of action to add one thing back at a time, or something else?
  12. Safe Mode, no Docker:

      Jun 9 11:02:38 TheLibrary shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'ingress/Back Up' |& logger

      thelibrary-diagnostics-20230609-1110.zip
  13. Those are images I got from separate instances of lock-up. I just had another one and managed to get an image and a diag out. I have to roll back to rc6 at this point. thelibrary-diagnostics-20230608-1934.zip
  14. OK, so I had one lock-up without Docker or plugins; the GUI was dead, but SSH and network mounts worked. After the reboot, RAM started low but hit 80% quite quickly. Network mounts still worked. The control panel for ZFS was gone when I went into those drives. thelibrary-diagnostics-20230608-1651.zip
  15. Yes, though not as quickly. In safe mode on rc7, RAM is in the 60% range when I log in and seems to climb steadily over the hours, in tandem with CPU usage. On rc6, after 9 hours of usage, RAM sits in the 40% range and CPU usage is nominal. With NetData installed as a Docker, it reports completely different usage numbers from the Unraid GUI on rc7, and matches the GUI on rc6. When using top, I see dockerd and shfs at the top trading places almost exclusively on rc7; on rc6, first place is more fluid, as is to be expected. I had been running rc6 since almost its release and have had no issues with it to date. With rc7 I haven't had more than a few hours of uptime, as whatever is happening also kills all my Dockers and network mounts.
  16. I have 100% CPU usage and 80% RAM usage again. I can't keep my box up at all on rc7; it's maxing out every few hours and becoming unresponsive. I managed to get one diag out during my multiple reboots today. thelibrary-diagnostics-20230607-1104.zip
  17. Update: So I thought I had found the issue: Dynamix Cache Directories. I reinstalled everything and disabled Dynamix Cache Directories in hopes it may be patched at a later date. I woke up this morning to 100% CPU usage and 80% RAM usage, and I was unable to get anything out of the server. I had to reboot it via SSH again.
  18. Got a reboot, not a clean one, but it looks like things are behaving again. Guess I now play that whack-a-plugin game.
  19. I should have specified that I am SSHing into the box. The reboot crashed with SIGKILL.
  20. Is there a way to do this from the CLI? The GUI has become unusable with this issue.
  21. Jun 6 12:02:11 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 15670 exited on signal 9 (SIGKILL) after 456.455878 seconds from start
      Jun 6 12:02:15 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 28993 exited on signal 9 (SIGKILL) after 281.040782 seconds from start
      Jun 6 12:02:29 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 26795 exited on signal 9 (SIGKILL) after 16.541999 seconds from start
      Jun 6 12:02:30 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 30046 exited on signal 9 (SIGKILL) after 14.761019 seconds from start
      Jun 6 12:02:41 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 3637 exited on signal 9 (SIGKILL) after 12.463393 seconds from start
      Jun 6 12:02:44 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 3692 exited on signal 9 (SIGKILL) after 13.842282 seconds from start
      Jun 6 12:02:57 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 4181 exited on signal 9 (SIGKILL) after 14.610889 seconds from start
      Jun 6 12:02:59 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 5044 exited on signal 9 (SIGKILL) after 14.410527 seconds from start
      Jun 6 12:03:10 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 12666 exited on signal 9 (SIGKILL) after 12.416831 seconds from start
      Jun 6 12:03:13 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 12702 exited on signal 9 (SIGKILL) after 13.517078 seconds from start
      Jun 6 12:03:25 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 17475 exited on signal 9 (SIGKILL) after 14.049300 seconds from start
      Jun 6 12:03:38 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 18195 exited on signal 9 (SIGKILL) after 13.041803 seconds from start
      Jun 6 12:03:50 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 25690 exited on signal 9 (SIGKILL) after 12.074396 seconds from start
      Jun 6 12:04:08 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 32568 exited on signal 9 (SIGKILL) after 15.341012 seconds from start
      Jun 6 12:04:25 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 8194 exited on signal 9 (SIGKILL) after 16.351577 seconds from start
      Jun 6 12:04:39 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 13806 exited on signal 9 (SIGKILL) after 13.355236 seconds from start
      Jun 6 12:04:52 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 21202 exited on signal 9 (SIGKILL) after 12.777710 seconds from start
      Jun 6 12:05:11 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 27381 exited on signal 9 (SIGKILL) after 13.180648 seconds from start
      Jun 6 12:05:27 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 2566 exited on signal 9 (SIGKILL) after 14.681882 seconds from start
      Jun 6 12:05:40 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 9672 exited on signal 9 (SIGKILL) after 11.977200 seconds from start
      Jun 6 12:06:04 TheLibrary php-fpm[10134]: [WARNING] [pool www] child 11342 exited on signal 9 (SIGKILL) after 22.298786 seconds from start

      thelibrary-diagnostics-20230606-1220.zip
  22. @ich777 Howdy, is there a way to fire off a warning when a Steam login has expired? I found today, during some messing around, that my cron job had been failing because Steam was logged out. Also, should there be Epic settings in the template? Mine only has Steam and Battle.net.
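The xfs_repair transcripts pasted in posts 1 and 7 can be triaged mechanically: a run with `-n` is read-only (no-modify), so the saved report is safe to grep for the lines that decide the next step. A minimal sketch, assuming the report has been saved to a file; `triage_xfs_report` is a hypothetical helper, not an Unraid or xfsprogs tool:

```shell
#!/bin/sh
# Classify a saved `xfs_repair -n` report. The -n flag only inspects the
# filesystem, so nothing here touches a disk; we just grep the text.
# $1: path to a file holding the captured xfs_repair -n output.
triage_xfs_report() {
  if grep -q 'bad primary superblock' "$1"; then
    # Matches the Disk 5 report above: superblock damage was found.
    echo "superblock damage: rerun xfs_repair WITHOUT -n to apply the fix"
  elif grep -q 'No modify flag set' "$1"; then
    # Scan finished read-only; rerun without -n to let it make repairs.
    echo "scan completed read-only: rerun without -n to apply any fixes"
  else
    echo "unrecognised report"
  fi
}
```

On Unraid the actual repair (without `-n`) would normally be run from maintenance mode so the filesystem is not mounted while it is modified.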
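The php-fpm churn in post 21 (a stream of short-lived children exiting on signal 9) is the pattern often left behind when the kernel OOM killer reaps processes under memory pressure, which would fit the 100% CPU / 80% RAM symptoms described above. A rough sketch for summarising such a syslog extract; `sigkill_summary` is my own hypothetical helper:

```shell
#!/bin/sh
# Count php-fpm children killed by SIGKILL in a syslog file and report the
# shortest lifetime seen; many kills of short-lived children suggest
# OOM-killer pressure rather than normal worker recycling.
# $1: path to a syslog extract.
sigkill_summary() {
  grep 'exited on signal 9 (SIGKILL)' "$1" |
    awk '{ n++
           # The field after "after" is the child lifetime in seconds.
           for (i = 1; i <= NF; i++) if ($i == "after") t = $(i + 1)
           if (min == "" || t + 0 < min + 0) min = t }
         END { printf "%d SIGKILLs, shortest lifetime %ss\n", n, min }'
}
```

Cross-checking `dmesg` for OOM-killer messages around the same timestamps would confirm or rule out that theory.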