Posts posted by jackfalveyiv

  1. Getting some strange errors that I can't parse out in the logs.  My Unraid Fix Common Problems plugin has alerted me over the past few days that I was getting Out of Memory errors.  After looking into that on the forums, a user helped me figure out that Nginx was the culprit.  My proxy host entries look normal, and I haven't made any changes to the app in over a year.  When I took a quick look at the logs, I see a ton of '[emerg] bind() to ... failed (98: Address already in use)' messages.  I'm not sure where to start on this, hoping for some guidance, thanks.

    fallback_error.log proxy-host-11_access.log proxy-host-19_access.log fallback_access.log
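
    For what it's worth, a bind() failure like that usually means another process already owns the port.  A quick first check from the host, as a minimal sketch (the :80 below is a placeholder; substitute whichever port the [emerg] line actually names):

    # Show which process is already listening on the contested port
    ss -tlnp | grep ':80 '
    # Cross-check which containers publish that port
    docker ps --format '{{.Names}}: {{.Ports}}' | grep 80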

  2. Looks like that's my nginx docker.  Months back I needed to replace my flash drive, and I've had some weird problems around different dockers at different times.  In most cases (Tdarr, Radarr/Sonarr, Plex) I had to build new dockers but plug in the old configs.  I didn't do that with Nginx.  Is it possible that I need to?

  3. I'm experiencing some really odd behavior in the past few days.  My server (6.12.3, updated over a month ago) is crashing at random intervals.  I haven't been able to find a common denominator yet and I'm hoping someone has a clue as to where I can begin looking.  I have not tried to reformat the cache yet.  On one reboot the cache couldn't be found, yet on another reboot it came up just fine.

    trescommas-diagnostics-20231006-2242.zip
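
    If the pool is intermittently disappearing, it can help to capture what the kernel sees on a good boot versus a bad one.  A minimal sketch, assuming a btrfs cache pool (which the btrfs errors later in the thread suggest):

    # List the btrfs filesystems the kernel can currently see
    btrfs filesystem show
    # Look for the cache device dropping off the bus in recent kernel messages
    dmesg | grep -iE 'btrfs|ata|nvme' | tail -50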

  4. 3 hours ago, jackfalveyiv said:

    Some additional context in the attached screenshot.  I see what looks like an IP address conflict, but I can't see one in my docker allocations, either for port or IP.

    Screen Shot 2023-09-04 at 10.56.26 AM.png

    Just following up in case anyone has an issue like mine.  I don't have an explanation for why this worked, but I wiped out the nginx docker and installed another instance, pointing to the same appdata directory, and things are working just as they had been before.  If anyone has upgraded the OS, or downgraded, and runs into this, try my fix and see if that helps.
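
    For anyone trying the same fix, it boils down to removing only the container and starting a fresh one against the old data directory.  A minimal sketch; the container name, image tag, ports, and appdata paths are assumptions, so match them to your own template:

    # Remove only the container; the config in appdata is untouched
    docker rm -f nginx-proxy-manager
    # Recreate against the existing appdata so proxy hosts and certs carry over
    docker run -d --name nginx-proxy-manager \
      -p 80:80 -p 443:443 -p 81:81 \
      -v /mnt/user/appdata/nginx-proxy-manager/data:/data \
      -v /mnt/user/appdata/nginx-proxy-manager/letsencrypt:/etc/letsencrypt \
      jc21/nginx-proxy-manager:latest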

  5. 23 hours ago, jackfalveyiv said:

    Ran into an issue with Nginx after the Unraid upgrade to 6.12.4.  When I try to browse to the local installation, I get 'ERR_CONNECTION_REFUSED' in all browsers.  It's the only docker giving me this issue, and I cannot get it to come back online.  I tried restoring a backup, but it was unsuccessful.  I've attached a screenshot of my config and a screenshot of the docker log.  Any and all help would be appreciated.

    Screen Shot 2023-09-03 at 11.48.05 AM.png

    Screen Shot 2023-09-03 at 11.49.33 AM.png

    Some additional context in the attached screenshot.  I see what looks like an IP address conflict, but I can't see one in my docker allocations, either for port or IP.

    Screen Shot 2023-09-04 at 10.56.26 AM.png

  6. Ran into an issue with Nginx after the Unraid upgrade to 6.12.4.  When I try to browse to the local installation, I get 'ERR_CONNECTION_REFUSED' in all browsers.  It's the only docker giving me this issue, and I cannot get it to come back online.  I tried restoring a backup, but it was unsuccessful.  I've attached a screenshot of my config and a screenshot of the docker log.  Any and all help would be appreciated.

    Screen Shot 2023-09-03 at 11.48.05 AM.png

    Screen Shot 2023-09-03 at 11.49.33 AM.png
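
    When every browser reports ERR_CONNECTION_REFUSED, the first thing worth checking is whether anything is listening on the admin port at all.  A minimal sketch; 81 is Nginx Proxy Manager's usual admin port, and the container name is an assumption:

    # Is the admin port open on the host?
    ss -tln | grep ':81 '
    # Is the container actually running, and what do its last log lines say?
    docker ps -a --filter name=nginx-proxy-manager
    docker logs --tail 50 nginx-proxy-manager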

  7. 22 minutes ago, blaine07 said:

    The other day JC21 mentioned he was working on/considering making it a variable.  I guess at some point the old certificates pile up, slow it down, and it times out.  It was discussed that pruning them helped.

    Maybe here: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2713

    or here: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2708

    poke around though; I know I just saw others fussing about the same thing recently 😀

    Thanks for the tip, I'll see what I can see...
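
    If renewals are timing out, a rough way to gauge certificate pileup is to count what certbot has accumulated inside the container's data directory.  A minimal sketch; the appdata path is an assumption, so adjust it to your own volume mapping:

    # Count archived certificate files; a very large number suggests pileup
    find /mnt/user/appdata/nginx-proxy-manager/letsencrypt/archive -type f | wc -l
    # See which certificates take up the most space
    du -sh /mnt/user/appdata/nginx-proxy-manager/letsencrypt/archive/* | sort -rh | head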

  8. I was just made aware by a user that my server wasn't accessible, so I went and took a look and found the attached screenshot of my Docker tab.  I immediately deleted, then recreated my docker image, but that has not changed the status of this page.  I'm currently rebooting my machine to see if that makes a difference, but I'm not sure where to start on this.  I did have a queue of files processing, but I was using the server less than an hour ago without any indication of a problem.  Diagnostics also attached, any help appreciated.

    Screen Shot 2023-03-25 at 3.20.51 PM.png

    trescommas-diagnostics-20230325-1521.zip
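
    If the Docker tab comes up empty like this, it's worth confirming the image file and daemon state before rebuilding yet again.  A minimal sketch; the image path is the Unraid default and may differ on your system:

    # Confirm the docker image file exists and hasn't filled up
    ls -lh /mnt/user/system/docker/docker.img
    # Check whether the docker daemon is actually up and responding
    docker info | head -20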

  9. Quite a week... replaced the cache, then ended up with read errors on one of my array disks.  I eventually had to start in Maintenance Mode and run a filesystem check with the -L parameter to get things up and running again.  Mods have suggested that my cables might be the issue, so I've got replacement SATA and power cables arriving tomorrow.  I have the system back up now, and I'm seeing more nginx-related errors; curious what these are indicating.

    Screen Shot 2023-03-09 at 5.27.04 PM.png
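
    For reference, the repair sequence from Maintenance Mode looks roughly like this.  A minimal sketch; /dev/md1 is a placeholder for the affected array device, and note that -L throws away the metadata log, as the output in the later posts shows:

    # Dry run first: report problems without modifying anything
    xfs_repair -nv /dev/md1
    # Last resort if the log can't be replayed: zero it and repair
    xfs_repair -L /dev/md1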

  10. My system is back up and running.  To summarize: when migrating data off the cache for an upgrade and then back again, it appears my System share was still on disk3 when I started the docker service.  That caused the btrfs errors that eventually crashed the disk and made it unmountable.  Thanks JorgeB and itimpi for your suggestions and for getting me to the correct solution.
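
    For anyone hitting the same thing, you can verify where the System share actually lives before re-enabling the docker service.  A minimal sketch using standard shell tools:

    # The System share should normally live only on the cache pool
    ls -ld /mnt/disk*/system /mnt/cache/system 2>/dev/null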

  11. Thank you.  Here's the output from running with the -L option:

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    destroyed because the -L option was used.
            - scan filesystem freespace and inode maps...
    clearing needsrepair flag and regenerating metadata
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
            - agno = 16
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 5
            - agno = 8
            - agno = 13
            - agno = 6
            - agno = 7
            - agno = 1
            - agno = 10
            - agno = 11
            - agno = 14
            - agno = 12
            - agno = 16
            - agno = 15
            - agno = 3
            - agno = 4
            - agno = 9
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    Maximum metadata LSN (4:1198044) is ahead of log (1:2).
    Format log to cycle 7.
    done

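    Since that run reported moving disconnected inodes to lost+found, it's worth looking there for recovered files afterwards.  A minimal sketch; disk1 is a placeholder for whichever disk was repaired:

    # Recovered but unnamed files end up here after the repair
    ls -la /mnt/disk1/lost+found
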
  12. Booted into Maintenance Mode, ran Check Filesystem Status with -nv, and got the following:

    Phase 1 - find and verify superblock...
            - block cache size set to 1404320 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 1197993 tail block 1197987
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
            - agno = 16
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 1
            - agno = 2
            - agno = 5
            - agno = 3
            - agno = 9
            - agno = 15
            - agno = 4
            - agno = 13
            - agno = 7
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 0
            - agno = 14
            - agno = 16
            - agno = 6
            - agno = 8
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
            - agno = 16
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    Maximum metadata LSN (4:1198031) is ahead of log (4:1197993).
    Would format log to cycle 7.
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Wed Mar  8 15:27:15 2023
    
    Phase		Start		End		Duration
    Phase 1:	03/08 15:27:06	03/08 15:27:06
    Phase 2:	03/08 15:27:06	03/08 15:27:07	1 second
    Phase 3:	03/08 15:27:07	03/08 15:27:11	4 seconds
    Phase 4:	03/08 15:27:11	03/08 15:27:11
    Phase 5:	Skipped
    Phase 6:	03/08 15:27:11	03/08 15:27:15	4 seconds
    Phase 7:	03/08 15:27:15	03/08 15:27:15
    
    Total run time: 9 seconds
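
    The -n output above points at the standard next step: mount the filesystem once so the journal replays, and only fall back to -L if the mount fails.  A minimal sketch; /dev/md1 and /mnt/disk1 are placeholders for the affected device and mount point:

    # Mounting replays the metadata log, which often clears the reported issues
    mount /dev/md1 /mnt/disk1
    umount /mnt/disk1
    # Re-run the dry-run check; if the mount itself fails, xfs_repair -L is the fallback
    xfs_repair -nv /dev/md1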