mikeofoslo

Members
  • Posts: 8
  • Joined
  • Last visited

mikeofoslo's Achievements

Noob (1/14)

Reputation: 2
Community Answers: 1

  1. I can now confirm that I have had the server running 6.12.3 with IPv6 disabled for: Uptime 1 month 4 days 15 hours 43 minutes.
  2. Then I figure the two lines below need to be commented out with a # in /etc/rc.d/rc.nginx (a sketch of the end result follows this list):
     # echo "${t}listen [::1]:$PORT; # lo"
     # echo "${t}listen [::1]:$PORTSSL; # lo"
     (I'm a Linux n00b and want to be sure.)
  3. Is /etc/nginx/conf.d/servers.conf the correct place to remove the ::1 reference (see the sketch after this list)?
     # Always accept http requests from localhost
     # ex: http://localhost
     # ex: http://127.0.0.1
     # ex: http://[::1]
     #
     server {
         listen 127.0.0.1:80; # lo
         listen 127.0.0.1:443; # lo
         listen [::1]:80; # lo
         listen [::1]:443; # lo
  4. Update: The nginx crashes and open socket errors have stopped. No crash for 24 h.
     Changes made in Unraid:
     - set interface eth0 to IPv4 only (a quick verification is sketched after this list)
     Changes made in Win10:
     - Chrome updated to Version 114.0.5735.199 (Official Build) (64-bit)
     Most of my crashes happened while using Chrome. I fell back to Edge for some days, but it also crashed while in use, just not as fast as with Chrome.
     Change made in router:
     - Disabled IPv6 -> resulted in DNS trouble that made me rebuild the AiMesh from scratch (running an AC5300 as main, an AC88U and an AC68U as nodes).
     24 h has passed since the router rebuild (reinstall) and the Unraid server WebGUI is still running. Fingers crossed!
  5. Same thing happened with my server. Renamed the vfio-pci.cfg and it booted (the rename is sketched after this list).
  6. I'm having the same problems with the nginx crashing. Upgraded to 6.12, then it started. Updated to 6.12.1, but the same error comes back. The WebGUI is down; everything else seems to be running. (A syslog filter for spotting these entries is sketched after this list.)
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25930 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25930
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25931 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25931
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25932 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25932
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25933 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25933
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25941 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25941
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25964 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25964
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 25986 exited on signal 6
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: shared memory zone "memstore" was locked by 25986
     Jun 27 12:31:28 NEST nginx: 2023/06/27 12:31:28 [alert] 7261#7261: worker process 26001 exited on signal 6
     Jun 27 12:46:49 NEST nginx: 2023/06/27 12:46:49 [alert] 5701#5701: *40 open socket #22 left in connection 12
     Jun 27 12:46:49 NEST nginx: 2023/06/27 12:46:49 [alert] 5701#5701: *34 open socket #23 left in connection 13
     Jun 27 12:46:49 NEST nginx: 2023/06/27 12:46:49 [alert] 5701#5701: *53 open socket #29 left in connection 19
     Jun 27 12:46:49 NEST nginx: 2023/06/27 12:46:49 [alert] 5701#5701: aborting
     Jun 27 12:46:51 NEST nginx: 2023/06/27 12:46:51 [alert] 7083#7083: *287 open socket #3 left in connection 10
     Jun 27 12:46:51 NEST nginx: 2023/06/27 12:46:51 [alert] 7083#7083: *289 open socket #4 left in connection 11
     Jun 27 12:46:51 NEST nginx: 2023/06/27 12:46:51 [alert] 7083#7083: *291 open socket #15 left in connection 12
     Jun 27 12:46:51 NEST nginx: 2023/06/27 12:46:51 [alert] 7083#7083: *293 open socket #24 left in connection 13
     Has anyone managed to downgrade without the error coming back?
     nest-diagnostics-20230627-1301.zip
  7. SOLVED:
     - enabled IPv6 on the Unraid server and on my router
     Tip: try checking the Frigate log (see the sketch after this list). I had removed the mqtt section from settings; that caused it to stop loading.
  8. I have had a setup running for over 2 weeks, testing and figuring out how to set up my system. I have had Frigate running in Docker for most of the time with a PCIe Coral TPU chip. After a reboot of the system last night the Frigate docker refuses to start. The Frigate container restarts and will not come online. I already had it running and was just doing some tweaking of the Frigate config when it stopped working. My log is filling up with:
     May 30 09:26:13 NEST avahi-daemon[7954]: Withdrawing address record for fe80::f458:52ff:fe07:8805 on veth8dc7042.
     May 30 09:26:14 NEST kernel: docker0: port 1(veth2564b00) entered blocking state
     May 30 09:26:14 NEST kernel: docker0: port 1(veth2564b00) entered disabled state
     May 30 09:26:14 NEST kernel: device veth2564b00 entered promiscuous mode
     May 30 09:26:14 NEST kernel: docker0: port 1(veth2564b00) entered blocking state
     May 30 09:26:14 NEST kernel: docker0: port 1(veth2564b00) entered forwarding state
     May 30 09:26:14 NEST kernel: docker0: port 1(veth2564b00) entered disabled state
     May 30 09:26:16 NEST kernel: eth0: renamed from vethbd13dac
     May 30 09:26:16 NEST kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2564b00: link becomes ready
     May 30 09:26:16 NEST kernel: docker0: port 1(veth2564b00) entered blocking state
     May 30 09:26:16 NEST kernel: docker0: port 1(veth2564b00) entered forwarding state
     Enclosing the logs; hope someone has a minute to review them and give me a pointer (a check of the container's restart state is sketched after this list).
     nest-diagnostics-20230530-0912.zip
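
Sketch for post 2 - a minimal illustration, assuming the two echo lines quoted in that post are the only IPv6 loopback lines emitted by /etc/rc.d/rc.nginx; the idea is simply to prefix them with a # so they are no longer written into the generated config (the surrounding script context is not reproduced here):

    # /etc/rc.d/rc.nginx (excerpt, assumed context)
    # Prefixing the two IPv6 loopback listen lines with '#' stops them from
    # being emitted into the generated nginx config:
    #echo "${t}listen [::1]:$PORT; # lo"
    #echo "${t}listen [::1]:$PORTSSL; # lo"

Since the root filesystem is rebuilt in RAM on every boot, an edit like this would need to be re-applied after a reboot (for example from the go file on the flash drive) to stick.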
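Sketch for post 3 - /etc/nginx/conf.d/servers.conf appears to be generated by /etc/rc.d/rc.nginx, so a direct edit is only useful as a temporary test; an assumed sed one-liner that comments out the ::1 listen lines in place, followed by a plain nginx reload:

    # Comment out the IPv6 loopback listen lines in the generated config,
    # then have nginx re-read its config files (this does not regenerate them).
    sed -i 's/^\(\s*listen \[::1\]:.*\)$/#\1/' /etc/nginx/conf.d/servers.conf
    nginx -s reload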
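Sketch for post 4 - two quick checks, assuming the interface is eth0, to confirm the IPv4-only setting actually took effect on the Unraid side:

    ip -6 addr show dev eth0                        # should print no IPv6 addresses
    cat /proc/sys/net/ipv6/conf/eth0/disable_ipv6   # 1 means IPv6 is disabled on eth0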
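Sketch for post 5 - the rename, assuming the file sits in the usual Unraid location on the flash drive; adjust the path if yours differs:

    # Keep a backup copy rather than deleting the file outright, then reboot
    # so the server comes up without the stale VFIO bindings.
    mv /boot/config/vfio-pci.cfg /boot/config/vfio-pci.cfg.bak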
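Sketch for post 6 - a syslog filter to confirm the same worker-abort and leaked-socket pattern, for example before and after a downgrade attempt:

    grep -E 'nginx.*(exited on signal 6|left in connection|was locked by)' /var/log/syslog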
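Sketch for post 7 - reading the Frigate log from the Unraid shell; the container name "frigate" is an assumption, use whatever name the Docker tab shows:

    docker logs --tail 100 frigate   # last 100 lines of the container log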
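Sketch for post 8 - when a container sits in a restart loop, its state and restart count can be read directly (again assuming the container is named "frigate"):

    docker inspect --format '{{.State.Status}} restarts={{.RestartCount}}' frigate
    docker logs --since 10m frigate   # only the output from the last 10 minutes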