Everything posted by CorneliousJD

  1. Swapped back to macvlan mode with a dedicated docker VLAN for the few containers that need static IPs. So far the issue has not resurfaced. Bug report created here.
  2. Just wanted to note that I reverted back to macvlan with a separate docker VLAN for the few containers that need static IPs set. So far the issue has not resurfaced. Bug report created here, if anyone is interested in chasing this further.
  3. Note: at the time of writing, 6.10.3 is current, and this has happened on all 6.10.x releases so far. On 6.9 I previously had crashing issues at least once every 72 hours when using docker containers with static IPs, and to combat that at the time I created a docker-specific VLAN for the few containers that needed them.

Fast forward to 6.10.x and the release of ipvlan mode, which was supposed to help people with this issue. I removed my VLAN and swapped docker to ipvlan mode, and everything works great... right up until my weekly CA appdata backup runs (via Squid's excellent plugin, of course). The backup finishes and restarts the docker containers, but then my entire system loses outside access to the internet, and I have no idea why. Network/internet access stays down until I reboot, or, if left alone long enough, it eventually restores itself after 5-10 hours or so. I was able to seemingly track this down to the docker service itself, since stopping the docker service restored network connectivity immediately. Very weird... I wasn't able to troubleshoot/chase this further, as I rely on this server far too much for my day-to-day life at this point. I have since reverted back to macvlan mode with a docker-specific VLAN, and so far the issue has not recurred. (A quick way to confirm which driver a network is using is sketched below.)

Here are two threads where I reported the problems but wasn't able to get anywhere with them. I figure submitting this as an official bug report may help get it some attention, as it's certainly a weird issue that I couldn't narrow down. I have also marked this as urgent because it's a pretty MAJOR bug to have everything lose internet access.
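For anyone comparing setups, double-checking which driver a custom Docker network is actually running is a one-liner (a sketch; "br0" is the typical Unraid custom network name and is an assumption here):

    # Show the driver (macvlan vs. ipvlan) of the custom network
    docker network inspect br0 --format '{{.Driver}}'
    # Or list all networks and their drivers for a broader view
    docker network ls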
  4. Bumping this, hoping someone has something for me to at least try while troubleshooting, as it's an odd one to say the least.
  5. During a ping test I stopped docker so I could check some network configs and see whether any changes would resolve the issue, but it looks like just stopping the docker service itself corrected it (roughly what that test looked like is sketched below). EDIT: Here are my docker settings. Nothing unusual here that would cause this, and again, it only seems to happen when I run a CA appdata backup, which stops all containers, backs them up, and then starts them again.
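Roughly what that console test looked like (a sketch; /etc/rc.d/rc.docker as the Unraid service script path is an assumption here):

    ping -c 4 8.8.8.8           # fails while the issue is active
    /etc/rc.d/rc.docker stop    # stop the Docker service (assumed Unraid rc script path)
    ping -c 4 8.8.8.8           # connectivity comes back immediately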
  6. The issue is that after a CA appdata backup, my system loses Internet access for quite some time (usually 6-12 hours) before coming back; it's very weird. The CA appdata backup itself only takes about an hour. I noticed errors connecting to GitHub to check for updates, errors connecting to Docker Hub to check for docker updates, and so on. It has been a weird issue to track down.

I am on 6.10.3, and this issue only started with the 6.10.x branch. I previously had a docker VLAN to combat the macvlan docker issue, and I dropped it in favor of the new ipvlan mode to prevent crashing. No crashes so far, but now I'm running into this really bizarre issue; no idea if the two are related. Everything is totally fine until I run the CA appdata backup, at which point things go offline and I'm unable to connect to the Internet from the server, or from some containers either. Very strange indeed. Any help is appreciated. Note: I turned on checking for app updates after an appdata backup, and it now hangs there too.

While the server is in this state I am unable to ping known-good addresses, such as 8.8.8.8 or 9.9.9.9; I just get:

From 10.0.0.10 icmp_seq=1 Destination Host Unreachable

Here's the relevant system log info from invoking a CA appdata backup (a few further diagnostics I can run are sketched after the log):

Jun 27 11:45:41 Server CA Backup/Restore: Stopping Vikunja-API
Jun 27 11:46:41 Server kernel: docker0: port 11(veth56fb8cf) entered disabled state
Jun 27 11:46:41 Server kernel: vethd5c5710: renamed from eth0
Jun 27 11:46:41 Server avahi-daemon[9164]: Interface veth56fb8cf.IPv6 no longer relevant for mDNS.
Jun 27 11:46:41 Server avahi-daemon[9164]: Leaving mDNS multicast group on interface veth56fb8cf.IPv6 with address fe80::c41a:f2ff:fea7:160c.
Jun 27 11:46:41 Server kernel: docker0: port 11(veth56fb8cf) entered disabled state
Jun 27 11:46:41 Server kernel: device veth56fb8cf left promiscuous mode
Jun 27 11:46:41 Server kernel: docker0: port 11(veth56fb8cf) entered disabled state
Jun 27 11:46:41 Server avahi-daemon[9164]: Withdrawing address record for fe80::c41a:f2ff:fea7:160c on veth56fb8cf.
Jun 27 11:46:41 Server CA Backup/Restore: docker stop -t 60 Vikunja-API
Jun 27 11:46:41 Server CA Backup/Restore: Stopping WizNote
Jun 27 11:47:42 Server kernel: docker0: port 12(vethce1e5eb) entered disabled state
Jun 27 11:47:42 Server kernel: veth2a8fc33: renamed from eth0
Jun 27 11:47:42 Server avahi-daemon[9164]: Interface vethce1e5eb.IPv6 no longer relevant for mDNS.
Jun 27 11:47:42 Server avahi-daemon[9164]: Leaving mDNS multicast group on interface vethce1e5eb.IPv6 with address fe80::4002:8ff:fec8:c984.
Jun 27 11:47:42 Server kernel: docker0: port 12(vethce1e5eb) entered disabled state
Jun 27 11:47:42 Server kernel: device vethce1e5eb left promiscuous mode
Jun 27 11:47:42 Server kernel: docker0: port 12(vethce1e5eb) entered disabled state
Jun 27 11:47:42 Server avahi-daemon[9164]: Withdrawing address record for fe80::4002:8ff:fec8:c984 on vethce1e5eb.
Jun 27 11:47:42 Server CA Backup/Restore: docker stop -t 60 WizNote
Jun 27 11:47:42 Server CA Backup/Restore: Backing up USB Flash drive config folder to
Jun 27 11:47:42 Server CA Backup/Restore: Using command: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/backups/unRAID/flash/" > /dev/null 2>&1
Jun 27 11:47:51 Server CA Backup/Restore: Changing permissions on backup
Jun 27 11:47:53 Server CA Backup/Restore: Backing up libvirt.img to /mnt/user/backups/unRAID/libvirt/
Jun 27 11:47:53 Server CA Backup/Restore: Using Command: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/backups/unRAID/libvirt/" > /dev/null 2>&1
Jun 27 11:47:53 Server CA Backup/Restore: Changing permissions on backup
Jun 27 11:47:53 Server CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/backups/unRAID/appdata/[email protected]
Jun 27 11:47:53 Server CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/backups/unRAID/appdata/[email protected]/CA_backup.tar' --exclude "plex/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder" --exclude "plex/Library/Application Support/Plex Media Server/Cache/Transcode" --exclude "sonarr/MediaCover" --exclude "sonarr-uhd/MediaCover" --exclude "radarr/MediaCover" --exclude "radarr-uhd/MediaCover" --exclude "lidarr/MediaCover" --exclude "tautulli/cache" --exclude "joplinapp" * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
Jun 27 12:07:48 Server emhttpd: spinning down /dev/sdk
Jun 27 12:09:15 Server emhttpd: spinning down /dev/sdn
Jun 27 12:10:14 Server emhttpd: spinning down /dev/sdh
Jun 27 12:18:23 Server nginx: 2022/06/27 12:18:23 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:18:23 Server nginx: 2022/06/27 12:18:23 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:18:26 Server nginx: 2022/06/27 12:18:26 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:18:26 Server nginx: 2022/06/27 12:18:26 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"

[The same four-line recv()/OCSP error group repeats every six minutes, at 12:24, 12:30, 12:36, 12:42, and 12:48.]

Jun 27 12:52:39 Server CA Backup/Restore: Backup Complete
Jun 27 12:52:39 Server Docker Auto Update: Community Applications Docker Autoupdate running
Jun 27 12:52:39 Server Docker Auto Update: Checking for available updates
Jun 27 12:54:21 Server nginx: 2022/06/27 12:54:21 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:54:21 Server nginx: 2022/06/27 12:54:21 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:54:24 Server nginx: 2022/06/27 12:54:24 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:54:24 Server nginx: 2022/06/27 12:54:24 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
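If anyone wants to suggest diagnostics: these are the kinds of checks I can run from the console while the server is in this state (a sketch using standard iproute2 tools):

    ping -c 4 8.8.8.8     # currently returns "Destination Host Unreachable" from 10.0.0.10
    ip route show         # is the default route still present?
    ip neigh show         # is the gateway's ARP/neighbor entry stale or FAILED?
    ip -br addr show      # quick view of interface addresses and link state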
  7. I stand corrected on the manual run: I turned on checking for app updates after a backup, and it hangs there too now. Here's the relevant system log info from invoking a CA appdata backup. I am unable to ping known-good addresses, such as 8.8.8.8 or 9.9.9.9; I just get:

From 10.0.0.10 icmp_seq=1 Destination Host Unreachable

EDIT: I decided to make a new thread for this, as it seems to be a pretty weird, possibly deeper and/or system-related issue, and I don't want to muck up Squid's thread here. New thread here, for anyone who has any suggestions to help me with this.

[The system log excerpt that followed is identical to the one quoted in the previous post: the container stops at 11:45-11:47, the flash/libvirt/appdata backup commands, then the repeating nginx "No route to host"/OCSP errors from 12:18 through 12:54.]
  8. Sadly this did not help; I'm still experiencing odd issues. I got a Fix Common Problems plugin notification that the server couldn't reach github.com, and I'm still having issues with containers not having appropriate network access for hours after the CA appdata backup completes. It only happens when running on the schedule at 3AM on Sunday mornings; if I run it manually, everything works fine. The only other thing that runs at 3AM is mover (which shouldn't actually be moving anything, as my appdata shares all prefer the cache and my VM data is cache-only). I'm at a loss for what to even look at here to begin troubleshooting. @Squid - I hate to tag you and bother you, but I'm out of ideas here. Only after the 6.10.x upgrade and the switch from macvlan to ipvlan am I seeing this issue, and it only ever occurs on Sundays after the CA appdata backup kicks off on its schedule. A few posts above from me have more details. Are there any other reports of this?
  9. I decided to do some more testing and removed all of the VLAN setups I had made before the ipvlan option (replacing macvlan) became available in 6.10.x. I removed the VLAN from any remaining containers, removed all traces of it from the network config, and manually ran a CA appdata backup, and this time everything came back up correctly. I'll see how it goes next weekend, fingers crossed!
  10. Fourth week in a row, same issue. The system log starts repeating that it's trying to back up my flash drive with the My Servers plugin, over and over, and it doesn't seem to have a connection. Whatever is happening seems to affect my entire server's network access somehow, and I can't track down why. It only ever happens when I use CA Backup: everything is fine until Sunday at 3AM when it kicks off, and after that the network doesn't seem to want to come back properly until I reboot. I'm at a total loss for what this could be and how to check/prove/fix it.
  11. Interesting, I haven't touched newperms on anything at all.
  12. Bumping with my own post to see if it gets any attention. I would love to try something to see if it fixes this before the scheduled run this weekend. Any advice is appreciated at this point; I'm not sure what to do.
  13. FYI for anyone in the future: I figured this out. I needed to change the /appdata/linkace/logs folder to RW/RW/RW permissions. I had to turn on APP_DEBUG=true in the .env to figure this out, but the debug console pointed me to the issue (the steps are sketched below).
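Concretely, the fix was along these lines (a sketch; the standard /mnt/user/appdata mount for the host path is an assumption):

    # RW/RW/RW on the LinkAce logs folder so the app can write to it
    chmod -R 777 /mnt/user/appdata/linkace/logs

    # And temporarily, in /mnt/user/appdata/linkace/.env, to surface the real error:
    #   APP_DEBUG=true
    # (set it back to false once you've found the problem)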
  14. I'm not sure what the heck I am doing wrong, but I cannot get LinkAce up and running. The logs also really don't show much. It's just a basic/default config using the MariaDB container, but the database never gets populated with any data. My .env is below (secrets redacted); a sketch of how I'd expect the database and user to be created follows it, in case someone spots a mismatch.

## LINKACE CONFIGURATION

## Basic app configuration
COMPOSE_PROJECT_NAME=LinkAce

# The app key is generated later, please leave it blank
APP_KEY=base64:redacted=

## Configuration of the database connection
## Attention: Those settings are configured during the web setup, please do not modify them now.

# Set the database driver (mysql, pgsql, sqlsrv)
DB_CONNECTION=mysql
# Set the host of your database here
DB_HOST=10.0.0.10
# Set the port of your database here
DB_PORT=3306
# Set the database name here
DB_DATABASE=linkace
# Set both username and password of the user accessing the database
DB_USERNAME=linkace
DB_PASSWORD=redacted

## Redis cache configuration
# Set the Redis connection here if you want to use it
#REDIS_HOST=redis
#REDIS_PASSWORD=ChangeThisToASecurePassword!
#REDIS_PORT=6379
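For comparison, the database and user from that .env would normally be created in MariaDB with something like this (a sketch; "mariadb" is a placeholder container name, and REDACTED stands in for the real passwords):

    docker exec -i mariadb mysql -uroot -pREDACTED <<'SQL'
    CREATE DATABASE IF NOT EXISTS linkace;
    CREATE USER IF NOT EXISTS 'linkace'@'%' IDENTIFIED BY 'REDACTED';
    GRANT ALL PRIVILEGES ON linkace.* TO 'linkace'@'%';
    FLUSH PRIVILEGES;
    SQL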
  15. So this is the third week in a row: my docker containers went down at 3AM on Sunday morning, and I didn't get notice that the website served by my reverse proxy was back up until 1PM, so it was down for 10 hours. Not sure what to do or where to look. I have a few screenshots, which may or may not be helpful, of containers that don't have proper network access after a CA appdata backup and a container restart. Notably, my CloudflareDDNS container doesn't start up properly, and Home Assistant can't actually communicate with its sensors (some quick status/log checks are sketched below). It seems like a networking issue somehow after the 6.10.x updates; all I've done is change to ipvlan instead of macvlan, as suggested to avoid the crashes (which I was certainly experiencing on macvlan without a separate docker VLAN). Any assistance or advice here is appreciated, or even just what to check for in the logs to find out what is happening. Thanks in advance.
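Some quick checks to see which containers came back unhealthy after the restart (a sketch; "CloudflareDDNS" is my container's name and may differ in your setup):

    # Which containers restarted, and when
    docker ps --format '{{.Names}}\t{{.Status}}'
    # Why a specific container is misbehaving after the restart
    docker logs --tail 50 CloudflareDDNS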
  16. This is beyond the scope of Unraid-specific support; you'll likely have better luck posting on the project's page/GitHub for assistance with this one, sorry!
  17. I removed the memory limit on that container and it seems to have resolved this issue. Fingers crossed!
  18. Thanks, I just tried setting up a brand-new container of AMP as well and see the same thing. @MitchTalmadge, are you able to see what might be up with deploying brand-new containers with the :latest image? It no longer seems to be working; I can't get the web UI to load. Thanks in advance! Note to @xXMrZombie20Xx: if Mitch doesn't reply here within a few days, it might be best to post these logs on his GitHub for the project here: https://github.com/MitchTalmadge/AMP-dockerized/issues
  19. Did you have another automated backup that worked correctly? I've only had two fire off and both have caused the same issue so far.
  20. Two weeks in a row now I have experienced a really strange issue. CA backups run just fine and my tarball is created at the size I expect, but after the backup completes my containers restart and many of them end up without any real Internet access. It is very weird: my reverse proxy stops serving my hosts outside my home, and pihole stops being able to fetch DNS responses from outside DNS servers (like 8.8.8.8). Both times so far I have fixed the odd issues by just rebooting the server, but I feel like something is wrong at this point, since it's happened two weeks in a row. This did NOT ever happen on 6.9.x, but it does now on 6.10.x. Any help or advice is appreciated; my config is attached.
  21. Thanks for the info. I actually no longer use that Nextcloud share, so I will just be getting rid of it entirely; problem solved with the odd permissions, then. Regarding the out-of-memory error: if it's possible that it's caused by a docker container, then I think it is the 1GB limit I set on Nginx Proxy Manager. I am seeing this in one of the lines of the logs:

memory: usage 1048576kB, limit 1048576kB, failcnt 12250

That kB count equals exactly the 1GB limit (1048576 kB / 1024 = 1024 MB = 1 GB) which I had set on this container. I have changed it to 2GB for now (the change is sketched below), but I may remove the limit entirely, as I have 128GB of RAM, so plenty to spare. The template came this way from CA with the 1GB limit, so I left it, but I may be using it a bit more heavily than most others...
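For anyone else adjusting this: the cap comes from Docker's --memory flag (on Unraid it's typically set in the container template's Extra Parameters), and it can also be changed on a running container (a sketch; "NginxProxyManager" is a placeholder container name):

    # Raise the memory cap from 1g to 2g; --memory-swap must be >= --memory
    docker update --memory=2g --memory-swap=2g NginxProxyManager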
  22. Here's my setup. I must have moved it originally thinking that the system share was always going to be cache-only or something? Here's the ls -al /mnt/user/system/docker output as well.
  23. Here are the screenshots; the User Share list also matches the output of ls -al /mnt/user. Side note: could this error be caused by a docker container running out of memory? I did limit a container to 1GB of memory recently, and that could correlate to when I started seeing the issue (a way to check is sketched below).
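To tie the OOM notifications back to a specific process or container, the kernel logs which task it killed; something like this would show it (a sketch):

    # Recent kernel OOM-killer activity
    grep -iE 'oom|out of memory' /var/log/syslog | tail -n 20
    # Current memory use per container vs. its configured limit
    docker stats --no-stream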
  24. Bump: I rebooted after this, and now, a little over one day later (about 30-35 hours), I am getting the same "Out Of Memory errors detected on your server" notification.