fmp4m

Everything posted by fmp4m

  1. Ok - I found a MUCH easier way. After changing goaccess.conf to read:

     time-format %T
     date-format %d/%b/%Y
     log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"
     log-file /opt/log/proxy_logs.log

     simply add the following line to the "Advanced" tab of each proxy host in NGINX Proxy Manager - Official (if you already have advanced config there, add the line at the VERY top):

     access_log /data/logs/proxy_logs.log proxy;

     Now they all log to the same file in the same format. Just add the line to all proxy hosts, and remember to add it to any new ones.
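     For illustration, this is roughly what the Advanced tab contents could look like after the change; the client_max_body_size line is only a placeholder for whatever custom directives you may already have there, and /data/logs inside the NPM container is presumably the same host folder that the GoAccess container sees as /opt/log:

     access_log /data/logs/proxy_logs.log proxy;

     # any custom directives you already had go below the access_log line, for example:
     client_max_body_size 512m;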
  2. Nevermind, I got it. In goaccess.conf, comment out the existing time/date/log format lines and add these:

     time-format %T
     date-format %d/%b/%Y
     log-format [%d:%t %^] %^ %^ %s - %m %^ %v "%U" [Client %h] [Length %b] [Gzip %^] [Sent-to %^] "%u" "%R"

     Then add your list of proxy-host log files under the log file setting, like so (note this is my list and will not match yours; find the files in your NGINX Proxy Manager - Official appdata logs and add each one you want to track):

     log-file /opt/log/proxy-host-12_access.log
     log-file /opt/log/proxy-host-13_access.log
     log-file /opt/log/proxy-host-14_access.log
     log-file /opt/log/proxy-host-15_access.log
     log-file /opt/log/proxy-host-3_access.log
     log-file /opt/log/proxy-host-4_access.log
     log-file /opt/log/proxy-host-5_access.log
     log-file /opt/log/proxy-host-6_access.log
     log-file /opt/log/proxy-host-8_access.log
     log-file /opt/log/proxy-host-9_access.log
  3. Here is the log format that NGINX Proxy Manager - Official uses.

     Proxy hosts:

     '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"'

     Standard:

     '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"'

     Someone who knows the GoAccess variables will need to convert the "proxy" one.
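     For anyone attempting that conversion, here is a rough mapping between the nginx variables above and the standard GoAccess format specifiers (worth double-checking against the GoAccess man page for your version):

     $time_local       -> %d:%t  (with matching date-format and time-format)
     $status           -> %s
     $request_method   -> %m
     $host             -> %v   (virtual host)
     $request_uri      -> %U   (request URL)
     $remote_addr      -> %h   (client IP)
     $body_bytes_sent  -> %b   (response size)
     $http_user_agent  -> %u
     $http_referer     -> %R
     fields GoAccess does not track ($upstream_cache_status, $scheme, $gzip_ratio, $server, ...) -> %^  (ignore)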
  4. This is a great thought - I shall try that! Thanks!
  5. I don't need to add additional rules; the log_format is invalid for Argo Tunnel, and I can not modify the log_format via the Advanced tab. I was able to find the files in the console shell for the container (changes made there will persist until I update the Docker image). The debian-buster-slim base is nice.
  6. Where are the nginx.conf and tun.domain.com.ssl.conf files for NginxProxyManager-Official? I would like to modify the log_format for it, but can not find them.
  7. I spent time with this and got several things to work, but it seems that with the Nginx Proxy Manager - Official container, the log format differs between fallback_access.log and the proxy host logs. Adding them all will not work: because they are formatted differently, you can parse either all the proxy host logs OR fallback_access.log, and whichever set does not match will be missing info. As for the permissions, use a folder or share that is not inside NginxProxyManager's/GoAccess's appdata and it will be able to read correctly. I made a share called Logs that I use for various logging and mapped it to /mnt/user/Logs/NPM/ (see the path-mapping sketch below).
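     As a rough illustration of what that mapping can look like in the two Docker templates (the container-side paths are assumptions based on the paths used in the other posts here; adjust to your own templates):

     NPM container:      /mnt/user/Logs/NPM/ (host)  ->  /data/logs (container, where the access_log directive writes)
     GoAccess container: /mnt/user/Logs/NPM/ (host)  ->  /opt/log   (container, where the log-file directives point)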
  8. I have been up for 24 hours on Version: 6.10.0-rc2g without the issue so far. Will report any changes. Update: this still occurs in rc2g.
  9. I have the same issue as above. However, if you fix https://raw.githubusercontent.com/xthursdayx/docker-templates/master/xthursdayx/whoogle-search.xml it wants https://raw.githubusercontent.com/FoxxMD/unraid-docker-templates/master/foxxmd/whoogle-search.xml, and if you fix that it wants the other again, so it is stuck in a loop.
  10. The problem still exists in Version: 6.10.0-rc2f and is still resolved by issuing the conntrack max command.
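      For reference, the "conntrack max command" referred to here is presumably something along these lines (the 131072 value comes from a later post; adjust to whatever value you are using):

      sysctl -w net.netfilter.nf_conntrack_max=131072
      # equivalent to:
      echo 131072 > /proc/sys/net/netfilter/nf_conntrack_max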
  11. Within Settings > Docker, with the Docker service disabled/stopped and Advanced View ticked, I believe you can set the default network there.
  12. I am having a problem with Argo Tunnel and NPM. Any client connecting to my environment through HTTPS gets logged as the Docker network IP and not the CF-Connecting-IP. Has anyone gotten the real IP to come through Argo Tunnel into NPM? It's a config issue in NPM that I can not ascertain.
  13. Note to all who come here for help with Argo Tunnel config: you should not use "noTLSVerify: true" in your config.yaml for anything other than troubleshooting. It is less safe to leave it that way. If it resolves your issue while troubleshooting, the issue is fixable in a secure way; don't stop there and keep it just because it works.

      Tips:

      originServerName: domain.com
      ^ rarely works correctly; instead use:
      originServerName: subdomain.domain.com
      ^ a name that has a VALID CNAME record pointed to the root of the domain ("@").

      My example config:

      tunnel: XXX
      credentials-file: XXX.json
      ingress:
        - service: https://proxydockerip:18443
          originRequest:
            originServerName: service.domain.ext

      proxydockerip can be the container name if you are using a custom Docker network, or the IP of the container that serves as your reverse proxy, like SWAG or NPM. service.domain.ext is a valid CNAME of "service" pointed to "@" in the DNS of the domain. This allows cloudflared / CF Argo Tunnel to validate correctly.
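      For completeness, a slightly fuller sketch of the same idea with per-hostname routing and the catch-all rule cloudflared expects as the last ingress entry (the tunnel ID, hostname, and IP are placeholders):

      tunnel: XXX
      credentials-file: XXX.json
      ingress:
        - hostname: service.domain.ext
          service: https://proxydockerip:18443
          originRequest:
            originServerName: service.domain.ext
        # final catch-all rule required by cloudflared:
        - service: http_status:404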
  14. SMART checks drive health - not data health. You have corruption and will need to repair it.
  15. sdb is usually the cache drive, and sdb1 is usually its first partition (the one the filesystem lives on).
  16. @mkono87 you have XFS corruption; hopefully @JorgeB can assist with that. That's the last entry before the crash:

      Sep 7 14:22:55 NAS kernel: XFS (sdb1): Metadata corruption detected at xfs_dinode_verify+0xa7/0x567 [xfs], inode 0xe997421 dinode
      Sep 7 14:22:55 NAS kernel: XFS (sdb1): Unmount and run xfs_repair
      Sep 7 14:22:55 NAS kernel: XFS (sdb1): First 128 bytes of corrupted metadata buffer:
      Sep 7 14:22:55 NAS kernel: 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 63 00 00 00 64  IN.........c...d
      Sep 7 14:22:55 NAS kernel: 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
      Sep 7 14:22:55 NAS kernel: 00000020: f0 be 68 03 81 88 ff ff 60 d6 04 ab 27 03 36 41  ..h.....`...'.6A
      Sep 7 14:22:55 NAS kernel: 00000030: 60 d6 04 ab 27 03 36 41 00 00 00 00 00 08 de c1  `...'.6A........
      Sep 7 14:22:55 NAS kernel: 00000040: 00 00 00 00 00 00 00 8e 00 00 00 00 00 00 00 01  ................
      Sep 7 14:22:55 NAS kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 5f 57 70 8c  ............_Wp.
      Sep 7 14:22:55 NAS kernel: 00000060: ff ff ff ff be bc 40 17 00 00 00 00 00 00 00 06  ......@.........
      Sep 7 14:22:55 NAS kernel: 00000070: 00 00 1d dd 00 00 f5 f6 00 00 00 00 00 00 00 00  ................

      The identical metadata corruption message (same inode 0xe997421, same buffer contents) repeats at 16:13:34 and 17:25:13.
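      For anyone landing here with the same message, a rough sketch of what the repair typically looks like against the device the kernel names (sdb1 here), run only with the filesystem unmounted (array stopped, or started in maintenance mode); treat this as a generic illustration, not specific advice for this system:

      xfs_repair -n /dev/sdb1    # dry run: report problems, change nothing
      xfs_repair /dev/sdb1       # actual repair
      # if it complains about a dirty log, mount the filesystem once to replay it (xfs_repair -L only as a last resort)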
  17. Hi @danioj, I wanted to check in and see if you still had zero traces.
  18. We are awaiting the full syslog; however, you are running v1 of the ASRock Rack motherboard BIOS, from 2015. You need to update to at least BIOS 2.6, as that fixed a lot of known issues. 2.7 is current, and I know of no reason not to update to 2.7, so I suggest it.
  19. Thank you for detailing this. I am looking for more information on how this solves the issue in the first place and how it behaves for others. Hopefully someone more knowledgeable than I am can chime in and expand.
  20. This error can be ignored if it's functioning correctly. It's due to the formatting of the upstream in the code. @selexin will have to resolve that in a new push, as there is a code issue.
  21. Almost every reference to setting it that I found via Google search came back with 131072. The start of my spiral was after reading this: https://github.com/kubernetes-sigs/kind/issues/2240 Your link seems to be a more in-depth fix that could be better than the temporary fix I was lucky enough to get working.
  22. The field would be: Movies;TV Shows (no quotes).
  23. Without re-issuing the command after a reboot, have you had the call trace? The reason I ask is that the number will change back after a reboot, as designed, so verifying it with cat /proc/sys/net/netfilter/nf_conntrack_max or similar is not valid.
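      To illustrate the difference between checking the live value and making it stick: the first command below only shows what the kernel is using right now, while re-applying the setting at every boot needs something like the second line (on Unraid, the /boot/config/go file is one place to do that; the exact value is an assumption, use your own):

      cat /proc/sys/net/netfilter/nf_conntrack_max
      echo 'sysctl -w net.netfilter.nf_conntrack_max=131072' >> /boot/config/go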