DieFalse

Everything posted by DieFalse

  1. I have the same issue as above; however, if you fix https://raw.githubusercontent.com/xthursdayx/docker-templates/master/xthursdayx/whoogle-search.xml it then wants https://raw.githubusercontent.com/FoxxMD/unraid-docker-templates/master/foxxmd/whoogle-search.xml, and if you fix that it wants the first one again, so it is stuck in a cyclic loop.
  2. The problem still exists in Version 6.10.0-rc2f and is still resolved by issuing the conntrack max command.
  3. Within Settings > Docker, with the Docker service disabled/stopped and advanced view ticked, I believe you can set the default network there.
  4. I am having a problem with Argo Tunnel and NPM. Any client connecting to my environment through HTTPS gets logged as the Docker network IP and not the CF-Connecting-IP. Has anyone gotten the real IP to come through Argo Tunnel into NPM? It's a config issue in NPM that I cannot pin down.
  5. Note to all who come here for help with Argo Tunnel config: you should not use "noTLSVerify: true" for anything other than troubleshooting in your config.yaml. It is less safe to leave it that way. If setting it resolves your issue while troubleshooting, the setup can still be made secure; don't stop there and keep it just because it works. Tips: originServerName: domain.com rarely works correctly; instead use originServerName: subdomain.domain.com, where the subdomain has a VALID CNAME record pointed to the root of the domain ("@"). In my example config (laid out properly below): tunnel: XXX, credentials-file: XXX.json, ingress: - service: https://proxydockerip:18443, originRequest: originServerName: service.domain.ext. Here proxydockerip can be the container name if you are using a custom Docker network, or the IP of the container that serves as your reverse proxy, like SWAG or NPM, and service.domain.ext is a valid CNAME of "service" pointed to "@" in the DNS of "domain.ext". This allows cloudflared / CF Argo Tunnel to validate correctly.
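     The same example config.yaml, reconstructed as YAML from the post above; the tunnel ID, credentials file, proxy address, and hostname are the post's placeholders, not real values:

     ```yaml
     # cloudflared (Argo Tunnel) config.yaml -- values are placeholders per the post above.
     tunnel: XXX
     credentials-file: XXX.json

     ingress:
       # Point cloudflared at the reverse proxy (SWAG/NPM) container name or IP.
       - service: https://proxydockerip:18443
         originRequest:
           # Validate the origin certificate against a subdomain that has a valid
           # CNAME to the domain root ("@"), instead of using noTLSVerify: true.
           originServerName: service.domain.ext
     ```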
  6. SMART checks drive health, not data health. You have corruption and will need to repair it.
  7. @mkono87 you have XFS corruption; hopefully @JorgeB can assist with that. This is the last entry before the crash (the identical corruption report repeats again at Sep 7 16:13:34 and Sep 7 17:25:13):
     Sep 7 14:22:55 NAS kernel: XFS (sdb1): Metadata corruption detected at xfs_dinode_verify+0xa7/0x567 [xfs], inode 0xe997421 dinode
     Sep 7 14:22:55 NAS kernel: XFS (sdb1): Unmount and run xfs_repair
     Sep 7 14:22:55 NAS kernel: XFS (sdb1): First 128 bytes of corrupted metadata buffer:
     Sep 7 14:22:55 NAS kernel: 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 63 00 00 00 64  IN.........c...d
     Sep 7 14:22:55 NAS kernel: 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
     Sep 7 14:22:55 NAS kernel: 00000020: f0 be 68 03 81 88 ff ff 60 d6 04 ab 27 03 36 41  ..h.....`...'.6A
     Sep 7 14:22:55 NAS kernel: 00000030: 60 d6 04 ab 27 03 36 41 00 00 00 00 00 08 de c1  `...'.6A........
     Sep 7 14:22:55 NAS kernel: 00000040: 00 00 00 00 00 00 00 8e 00 00 00 00 00 00 00 01  ................
     Sep 7 14:22:55 NAS kernel: 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 5f 57 70 8c  ............_Wp.
     Sep 7 14:22:55 NAS kernel: 00000060: ff ff ff ff be bc 40 17 00 00 00 00 00 00 00 06  ......@.........
     Sep 7 14:22:55 NAS kernel: 00000070: 00 00 1d dd 00 00 f5 f6 00 00 00 00 00 00 00 00  ................
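     As the log says, the fix is to unmount and run xfs_repair. A rough sketch of that procedure follows; the device name is taken from the log above (sdb1) and may not match your setup, and on an Unraid array disk you would normally run this against the corresponding /dev/mdX device with the array started in maintenance mode so parity stays valid:

     ```sh
     # Make sure the filesystem is not mounted before repairing it.
     umount /dev/sdb1

     # Dry run first: -n only reports what would be fixed, changes nothing.
     xfs_repair -n /dev/sdb1

     # Actual repair once the dry-run output looks reasonable.
     xfs_repair /dev/sdb1

     # If xfs_repair complains about a dirty log, mounting and cleanly unmounting
     # the filesystem first is preferred; -L (zero the log) is a last resort and
     # can lose recently written data.
     ```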
  8. Hi @danioj, I wanted to check in and see if you still have zero call traces.
  9. We are awaiting the full syslog; however, you are running v1 of the AsRockRack motherboard BIOS from 2015. You need to update to at least BIOS 2.6, as that fixed a lot of known issues. 2.7 is current, and I know of no reason not to go straight to 2.7, so I suggest it.
  10. Thank you for detailing this. I am looking for more information on why this solves the issue in the first place and how it works out for others. Hopefully someone more knowledgeable than I am can chime in and expand.
  11. This error can be ignored if it's functioning correctly. It's due to the formatting of the upstream in the code. @selexin will have to resolve that in a new push, as it is a code issue.
  12. Almost every example of setting this that I found on Google used 131072. The start of my spiral was reading https://github.com/kubernetes-sigs/kind/issues/2240. Your link seems to be a more in-depth fix and could be better than my temporary fix, which I was lucky enough to have work.
  13. The field would be: Movies;TV Shows (no quotes).
  14. Have you had the call trace after a reboot, without re-issuing the command? The reason I ask is that the number will change, as designed, so verifying it with cat /proc/sys/net/netfilter/nf_conntrack_max or similar is not valid.
  15. I can state that I set it once, it survived many reboots, and it has now survived the upgrade to 6.10-rc2d.
  16. You should not need to. It is a one-and-done command UNLESS something in the system sets the conntrack value too high, which shouldn't happen and hasn't since kernel 5.12.2: https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.12.2
      Since the netfilter conntrack change is working as of now, the NIC limitation was a false flag. I don't know what NICs you are using, but some consumer-level ones do not like multiple MAC addresses and fail with as few as two. Creating a virtual VLAN, a br0 IP, etc. creates new virtual MAC addresses, and the card can only handle so many. Example of an enterprise card: "Many Mellanox adapters are capable of exposing up to 127 virtual instances." Consumer card: "Realtek 1Gb NICs are often limited to 6-12 virtual instances."
      Looking at your config, you have, I believe, 7 instances on one card and 1 on the other, which is handled by an Intel® i210AT dual-port 1Gb integrated NIC limited to 5 vectors (instances) per port, so it is technically over the limit. HOWEVER, you're not assigning IPs to the VLANs, and I believe this stops them from being true virtual instances, so theoretically it should be fine. You can count the virtual interfaces and MAC addresses yourself, as shown below.
      Give it some more time, but please advise if you experience any call trace, and if so, post diagnostics with the syslog. I don't expect you to have any, though.
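     A quick way to see how many virtual interfaces (and therefore MAC addresses) are hanging off each physical NIC; the interface names eth0/br0 are typical Unraid examples, not values taken from the poster's diagnostics:

     ```sh
     # List every network interface with its state and MAC address in brief form.
     ip -br link show

     # Show only the VLAN/bridge sub-interfaces built on top of eth0 (name assumed).
     ip -br link show | grep -E '^(eth0\.|br0)'

     # Count how many distinct MAC addresses the system currently exposes.
     ip -br link show | awk '{print $3}' | sort -u | wc -l
     ```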
  17. OK, awaiting your reply on testing. I am 100% seeing netfilter as the call trace cause. Your experience with ipvlan is expected, and you can overcome this by properly building your Docker network: creating custom networks and some other config. Having "host access" enabled will cause issues and is generally not advised; it would take you some time to correct your config so it doesn't need it. But that's a different thread.
  18. I reviewed your logs and you are experiencing call traces due to your network adapter; it appears you are somehow reaching the limit of your NIC. If the fix above doesn't work, splitting the load between a couple of NICs may, or even upgrading your existing NIC or its firmware. I didn't review the logs enough to identify the system's hardware, as it's late, so I will look again tomorrow and let you know if I can see anything deeper.
  19. On Aug 20th I posted the above because debugging seemed to point to nVidia; however, after in-depth troubleshooting it was determined that netfilter was causing the call traces. I was asked to try "ipvlan" instead of "macvlan"; this made no change, so I reverted back to macvlan.
      :: Placeholder for details outlining the issue, original values, etc. ::
      :: At work, so limited on what I can pull; will edit to add later. ::
      I have since, after reviewing other similar call traces, found a reference to setting the conntrack max in an effort to resolve them. Just over 36 hours ago I made the following change: "sysctl net/netfilter/nf_conntrack_max=131072" in the terminal, and verified it with "cat /proc/sys/net/netfilter/nf_conntrack_max" showing the new value of 131072 (the commands are repeated below). I have not had a single call trace since.
      TL;DR: setting sysctl net/netfilter/nf_conntrack_max=131072 stopped my call traces. If anyone knows how to help me gather what's needed to see why this stopped the call traces, and to prevent them from happening to others, please assist.
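     For reference, the commands from the post as they would be run in an Unraid terminal. Note that sysctl changes normally reset at boot (the posts above report mixed behavior on this); re-applying the line automatically, for example from /boot/config/go, is an assumption about how one might persist it, not something the post prescribes:

     ```sh
     # Raise the conntrack table limit (slash and dot notations are equivalent).
     sysctl net/netfilter/nf_conntrack_max=131072
     # or: sysctl -w net.netfilter.nf_conntrack_max=131072

     # Verify that the new value took effect.
     cat /proc/sys/net/netfilter/nf_conntrack_max

     # Assumed persistence option on Unraid: append the sysctl line to /boot/config/go
     # so it is re-applied at every boot.
     ```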
  20. OK, I have spent the better part of 2 days trying to implement one single change that seems to make sense: docker network create insertnamehere, then edit a container to put it on that network. Sounds simple, right? Nope. For some reason I get "docker: Error response from daemon: network insertnamehere not found." Ummm... OK... maybe I missed something:
      root@GSA:~# docker network create insertnamehere
      a414f1fd4REDACTED415e8d330
      root@GSA:~# docker network ls
      NETWORK ID      NAME              DRIVER    SCOPE
      ecREDACTED48    name1redacted     bridge    local
      a4REDACTED54    insertnamehere    bridge    local
      2eREDACTEDcb    br0               macvlan   local
      04REDACTEDb0    bridge            bridge    local
      ffREDACTEDa6    host              host      local
      7eREDACTED08    none              null      local
      Nope, doesn't seem like I did. Why won't it work? Host access is disabled and preserve networks is set to no. (The basic command sequence I'm attempting is sketched below.)
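     For context, this is the generic Docker CLI sequence for creating a user-defined bridge network and attaching a container to it; the container name and image (mycontainer, nginx) are placeholders, and on Unraid the same result is usually achieved by selecting the custom network from the container template's Network Type dropdown rather than from the CLI:

     ```sh
     # Create a user-defined bridge network.
     docker network create insertnamehere

     # Option 1: start a new container directly on that network
     # (container name and image are placeholders).
     docker run -d --name mycontainer --network insertnamehere nginx

     # Option 2: attach an already-running container to the network.
     docker network connect insertnamehere mycontainer

     # Confirm which containers are attached.
     docker network inspect insertnamehere
     ```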
  21. OK, a bug: it changes all Docker icons to the same as the folder icon, e.g. Cloudflared/Redis now have my Network icon instead of their own when collapsed; when expanded they have their own icons. If the folder icon overrules the Docker application icon, it should strip it and only show the text.
  22. Tracking this down with Discord assistance, it was determined that the server was blocking/routing incorrectly due to jumbo frames on the NICs. When the MTU is higher than 1500 on the NICs, no web access across the network is possible; yes, jumbo frames are correctly configured on the switches and router, and jumbo frames worked on <6.9.2. When the MTU is set to 1500, the WebUI loads correctly. (A quick way to check and reset the MTU is sketched below.)
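     If anyone wants to verify the same thing, these are the standard iproute2 commands for checking and temporarily changing an interface's MTU; eth0 is an assumed interface name, and on Unraid the persistent value is set in Settings > Network Settings rather than on the command line:

     ```sh
     # Show the current MTU for each interface (look for "mtu 9000" vs "mtu 1500").
     ip link show

     # Temporarily drop an interface back to the standard MTU (eth0 is an assumption;
     # this does not survive a reboot -- change it in Network Settings to make it stick).
     ip link set dev eth0 mtu 1500
     ```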