[Plugin] CA Appdata Backup / Restore v2


Squid

Recommended Posts

  • 2 weeks later...

Two weeks in a row now I have experienced a really strange issue.

 

CA Backup runs just fine and my tarball is created at the size I expect, but after the backup completes and my containers restart, many of them end up without any real Internet access, which is very weird.

 

My reverse proxy stops serving my hosts outside my home, and Pi-hole stops being able to forward DNS requests to outside DNS servers (like 8.8.8.8). It's very strange.

 

I have fixed all the odd issues both times so far by just rebooting the server; however, I feel like something is wrong at this point, since it has happened two weeks in a row.

 

It did NOT ever happen on 6.9.X, but it does now on 6.10.X.

 

Any help or advice appreciated, my config is attached.

 

[screenshot attached]

Edited by CorneliousJD
Link to comment
17 hours ago, CorneliousJD said:

CA Backup runs just fine and my tarball is created at the size I expect, but after the backup completes and my containers restart, many of them end up without any real Internet access, which is very weird.

 

My reverse proxy stops serving my hosts outside my home, and Pi-hole stops being able to forward DNS requests to outside DNS servers (like 8.8.8.8). It's very strange.

 

I have fixed all the odd issues both times so far by just rebooting the server; however, I feel like something is wrong at this point, since it has happened two weeks in a row.

 

I had something similar happen with 6.10.2, where Community Applications could not detect whether installed plugins were new or current. The last popup message in the UI was about "CA Appdata Backup" running and finishing.

 

I also had a command window open at the time and could not issue a "newperms" command. It said command not found. 

 

A quick reboot and everything is working as it should. 

 

I did not spend much time investigating and sadly did not keep the syslog. I did not see anything obviously wrong from a quick df before the reboot.

 

[screenshot attached]

 

Link to comment
5 hours ago, BRiT said:

 

I had something similar happen with 6.10.2, where Community Applications could not detect whether installed plugins were new or current. The last popup message in the UI was about "CA Appdata Backup" running and finishing.

 

I also had a command window open at the time and could not issue a "newperms" command. It said command not found. 

 

A quick reboot and everything is working as it should. 

 

I did not spend much time investigating and sadly did not keep the syslog. I did not see anything obviously wrong from a quick df before the reboot.

 

 

Did you have another automated backup that worked correctly? I've only had two fire off and both have caused the same issue so far. 

Link to comment
59 minutes ago, CorneliousJD said:

 

Did you have another automated backup that worked correctly? I've only had two fire off and both have caused the same issue so far. 

 

Like you, I had USB and VM XML backups configured too. I had the backup set to weekly on Monday, so it ran this morning. I think this was the first time it kicked off while I was on something newer than 6.8.3. I upgraded to 6.10 after a bit and then to 6.10.2, but did not really use the system between those updates and reboots enough to know whether it was in a troubled state.

 

I should have grabbed more logs and done more research, but I didn't have time and hoped a quick reboot would resolve the condition.

 

Here's the backup directory showing it ran:

 

[screenshot attached]

 

 

For now, I no longer have the plugin installed. I'm not doing Docker development on the system anymore, so I don't need appdata backed up as frequently, and the backup from this morning will suffice for a while.

Link to comment

When I use CA Backup and Restore, it is able to shut down the containers, but when it restarts them, the containers that have network_mode: "service:wireguard" do not start. I believe this happens because those containers try to start before the WireGuard container does. Is there some way, via a CA Backup script, to fix this issue?
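Something like the following post-backup script might work: start the WireGuard container first, give the tunnel a moment, then restart the dependents. This is only a sketch, and every container name in it is an example.

#!/bin/bash
# Sketch of a post-backup script: start the WireGuard container first,
# then restart the containers that share its network namespace.
# All container names are examples; substitute your own.
docker start wireguard

# Give WireGuard a moment to bring its tunnel up before dependents attach.
sleep 15

for c in qbittorrent jackett; do
    docker restart "$c"
done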

Edited by Metle
Tried Compose Tweak, Didn't Work
Link to comment
On 6/5/2022 at 10:19 PM, CorneliousJD said:

Two weeks in a row now I have experienced a really strange issue.

 

CA Backup runs just fine and my tarball is created at the size I expect, but after the backup completes and my containers restart, many of them end up without any real Internet access, which is very weird.

 

My reverse proxy stops serving my hosts outside my home, and Pi-hole stops being able to forward DNS requests to outside DNS servers (like 8.8.8.8). It's very strange.

 

I have fixed all the odd issues both times so far by just rebooting the server; however, I feel like something is wrong at this point, since it has happened two weeks in a row.

 

It did NOT ever happen on 6.9.X, but it does now on 6.10.X.

 

Any help or advice appreciated, my config is attached.


So this is the 3rd week in a row: my Docker containers went down at 3 AM on Sunday morning, and I didn't get notice that the website served by my reverse proxy was back up until 1 PM, so it was down for 10 hours.

 

Not sure what to do or where to look.

 

I have a few screenshots, which may or may not be helpful, of containers that don't have proper access to things after a CA appdata backup and the container restarts.

 

Notably, my CloudflareDDNS container doesn't start up properly, and Home Assistant can't actually communicate with sensors.

 

Seems like a networking issue somehow after the 6.10.x updates.

All I've done is change to ipvlan instead of macvlan, as suggested, to avoid crashes (which I was certainly experiencing on macvlan without a separate Docker VLAN set up).

 

Any assistance or advice here is appreciated, as is any pointer to what I could check in the logs to find out what is happening.

 

Thanks in advance. 

2022-06-12 13_25_00-Overview – Home Assistant.png

2022-06-12 13_26_05-docker logs -f -n 80 HomeAssistant (Server).png

2022-06-12 13_31_07-docker logs -f -n 80 Organizr (Server).png

Link to comment
On 6/5/2022 at 10:19 PM, CorneliousJD said:

Two weeks in a row now I have experienced a really strange issue.

 

CA Backup runs just fine and my tarball is created at the size I expect, but after the backup completes and my containers restart, many of them end up without any real Internet access, which is very weird.

 

My reverse proxy stops serving my hosts outside my home, and Pi-hole stops being able to forward DNS requests to outside DNS servers (like 8.8.8.8). It's very strange.

 

I have fixed all the odd issues both times so far by just rebooting the server; however, I feel like something is wrong at this point, since it has happened two weeks in a row.

 

It did NOT ever happen on 6.9.X, but it does now on 6.10.X.

 

Any help or advice appreciated, my config is attached.

 

[screenshot attached]

 

Bumping with my own post to see if it gets any attention. I would love to try something to see if it fixes the issue before the scheduled run this weekend. Any advice is appreciated at this point; I'm not sure what to do.

Link to comment
On 6/16/2022 at 7:44 PM, CorneliousJD said:

 

Bumping with my own post to see if it gets any attention. I would love to try something to see if it fixes the issue before the scheduled run this weekend. Any advice is appreciated at this point; I'm not sure what to do.

 

I figured out that some of my grief was caused by using "newperms ." at the command prompt, which should set new permissions for the current working directory; I had always done that when managing media content on Unraid 6.8.3 or earlier. Now, issuing that command instead completely clobbers the permissions of several scripts under /usr/local/emhttp/plugins/dynamix/scripts/, which leads to all sorts of bad behavior from the UI.
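For anyone else who hits this, the safe habit is to always pass newperms an explicit path instead of relying on the working directory (the share path below is only an example):

# Risky: acts on whatever the current directory happens to be.
newperms .

# Safer: name the exact share or folder you intend to reset (example path).
newperms /mnt/user/Media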

Link to comment
8 hours ago, BRiT said:

 

I figured out that some of my grief was caused by using "newperms ." at the command prompt, which should set new permissions for the current working directory; I had always done that when managing media content on Unraid 6.8.3 or earlier. Now, issuing that command instead completely clobbers the permissions of several scripts under /usr/local/emhttp/plugins/dynamix/scripts/, which leads to all sorts of bad behavior from the UI.

 

Interesting, I haven't touched newperms at all.

Link to comment
On 6/12/2022 at 1:32 PM, CorneliousJD said:


So this is the 3rd week in a row: my Docker containers went down at 3 AM on Sunday morning, and I didn't get notice that the website served by my reverse proxy was back up until 1 PM, so it was down for 10 hours.

 

Not sure what to do or where to look.

 

I have a few screenshots, which may or may not be helpful, of containers that don't have proper access to things after a CA appdata backup and the container restarts.

 

Notably, my CloudflareDDNS container doesn't start up properly, and Home Assistant can't actually communicate with sensors.

 

Seems like a networking issue somehow after the 6.10.x updates.

All I've done is change to ipvlan instead of macvlan, as suggested, to avoid crashes (which I was certainly experiencing on macvlan without a separate Docker VLAN set up).

 

Any assistance or advice here is appreciated, as is any pointer to what I could check in the logs to find out what is happening.

 

Thanks in advance. 

(attachments omitted from reply)

4th week in a row and same issue.

 

The system log itself starts repeating that it's trying to back up my flash with the My Servers plugin over and over, and it doesn't seem to have a connection.

 

Whatever is happening seems to be affecting my entire server's network access somehow, and I can't track down why. It only ever happens when I use CA Backup: everything is fine until Sunday at 3 AM when it kicks off, and then the server doesn't come back properly until I reboot.

 

I'm at a total loss for what this could be and how to check/prove/fix/etc.

Link to comment
1 hour ago, CorneliousJD said:

4th week in a row and same issue.

 

The system log itself starts repeating that it's trying to back up my flash with the My Servers plugin over and over, and it doesn't seem to have a connection.

 

Whatever is happening seems to be affecting my entire server's network access somehow, and I can't track down why. It only ever happens when I use CA Backup: everything is fine until Sunday at 3 AM when it kicks off, and then the server doesn't come back properly until I reboot.

 

I'm at a total loss for what this could be and how to check/prove/fix/etc.

 

I decided to do some more testing and removed all the VLAN setups I had made before the ipvlan Docker option (replacing macvlan) became available in 6.10.x.

 

I removed it from any remaining containers, removed all VLAN traces from the network config, and manually ran a CA appdata backup, and everything came back up correctly this time. I'll see how it goes next weekend, fingers crossed!

Link to comment
On 6/18/2021 at 10:03 PM, itimpi said:


The diagnostics seem to indicate the array is running fine, so there is no obvious reason you should get that error message :(

 

The only thing I can think of to suggest is a reboot, though there is no good reason to think that will fix things, other than that it is often good practice just to confirm the error is persistent.

 

maybe someone else will have an idea :) 

Hey, I know why it can't be backed up. I have always used disk shares instead of user shares because they are more in line with my own usage habits. But this plugin looks for the /user directory under /mnt; if no user directory exists, it reports that the array is not running and exits the backup. After I created /mnt/user manually, I could back up my appdata to /mnt/disk3/backup. Can you modify the plugin to accommodate someone who is not using user shares? 😢 @Squid
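For anyone else in this situation, the manual workaround above amounts to creating the directory the plugin checks for. Note that it lives on the RAM-backed root filesystem, so it likely needs to be recreated after each reboot:

# Workaround described above, assuming user shares are disabled:
# create the path the plugin checks for before it will run.
mkdir -p /mnt/user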

Edited by limonchoms
Link to comment

Hi all,

 

Is it possible to restore just one app from a backup? I have a good backup from last week, and I wish to bring back just one app, OMBI, as the last OMBI update broke my install.

 

I am trying to avoid messing up my Plex watched DB. The last time I restored from a backup, I had to go into Plex and manually mark as watched all the TV shows and movies I had watched in the two weeks after that backup's date.
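The plugin restores the whole archive, but a single app's folder can usually be pulled out of the tar by hand. A sketch, with example paths following the plugin's default layout (stop the container first, and note the archive is CA_backup.tar.gz if compression is enabled):

# Stop the container whose appdata is being restored (name is an example).
docker stop ombi

# Extract only the ombi/ directory from the backup archive into appdata.
# Archive path is an example; point it at your actual dated backup folder.
cd /mnt/user/appdata
tar -xvf '/mnt/user/backups/unRAID/appdata/[email protected]/CA_backup.tar' ombi/

docker start ombi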

Link to comment
On 3/28/2022 at 1:49 PM, Squid said:

The assumption the plugin makes is that everything is a bash script (because most people would store it on the flash drive), so it executes it with /bin/bash.

 

Create another script (.sh) that then calls it

 

(Or, if that is a typo (ph), make sure you're using Linux-style line endings and not DOS ones; Config File Editor or Notepad++ will be able to convert them for you.)

Oh wow, that was stupid of me. Yeah, I just made it a bash script and it worked. Thanks!
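For reference, the wrapper Squid describes can be as small as this; the inner script's path and interpreter here are hypothetical, the point being that the .sh itself is something /bin/bash can run:

#!/bin/bash
# Wrapper the plugin executes with /bin/bash; it hands off to the real
# script's own interpreter (PHP here, purely as an example path).
php /boot/config/custom/post_backup.php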

Link to comment
On 6/19/2022 at 9:00 PM, CorneliousJD said:

 

I decided to do some more testing and removed all the VLAN setups I had made before the ipvlan Docker option (replacing macvlan) became available in 6.10.x.

 

I removed it from any remaining containers, removed all VLAN traces from the network config, and manually ran a CA appdata backup, and everything came back up correctly this time. I'll see how it goes next weekend, fingers crossed!

 

Sadly this did not help; I'm still experiencing odd issues. I got a Fix Common Problems plugin notification that the server couldn't reach github.com, and I'm still having issues with containers not having appropriate network access for hours after the CA appdata backup completes.

 

It only happens when running on the schedule at 3 AM on Sunday mornings; if I run it manually, everything works fine.

The only thing that runs at 3 AM besides that is the mover (which shouldn't actually be moving anything, as my appdata is all set to prefer the cache and my VM data is cache-only).

 

I'm at a loss for what to even look at here to begin troubleshooting.

@Squid - I hate to tag you and bother you but I'm out of ideas here.

Only after the 6.10.x upgrade and the switch from macvlan to ipvlan am I seeing this issue, and it only ever occurs on Sundays after the CA appdata backup kicks off on its schedule. A few posts above from me have more details. Are there any other reports of this?

Link to comment
1 hour ago, CorneliousJD said:

 

Sadly this did not help; I'm still experiencing odd issues. I got a Fix Common Problems plugin notification that the server couldn't reach github.com, and I'm still having issues with containers not having appropriate network access for hours after the CA appdata backup completes.

 

It only happens when running on the schedule at 3 AM on Sunday mornings; if I run it manually, everything works fine.

The only thing that runs at 3 AM besides that is the mover (which shouldn't actually be moving anything, as my appdata is all set to prefer the cache and my VM data is cache-only).

 

I'm at a loss for what to even look at here to begin troubleshooting.

@Squid - I hate to tag you and bother you but I'm out of ideas here.

Only after the 6.10.x upgrade and the switch from macvlan to ipvlan am I seeing this issue, and it only ever occurs on Sundays after the CA appdata backup kicks off on its schedule. A few posts above from me have more details. Are there any other reports of this?

 

I stand corrected on the manual run: I turned on checking for app updates after a backup, and it hangs there now too. Below is the relevant system log info from invoking a CA appdata backup.

 

I am unable to ping known-good addresses such as 8.8.8.8 or 9.9.9.9; I just get:

From 10.0.0.10 icmp_seq=1 Destination Host Unreachable
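For what it's worth, "Destination Host Unreachable" reported from the host's own address usually means the kernel cannot resolve a next hop at layer 2, which would be consistent with an ipvlan/macvlan problem. Some standard checks (the gateway address below is an example):

ip route show        # is the default route still present, via the expected gateway?
ip -br addr          # do br0/eth0 still hold the expected addresses?
ping -c 3 10.0.0.1   # can the gateway itself be reached? (example gateway IP)
ip neigh show        # look for FAILED/INCOMPLETE entries for the gateway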

 

EDIT: I decided to make a new thread for this, as it seems to be a pretty weird, possibly deeper and/or system-related issue, and I don't want to muck up Squid's thread here.

 

New thread here for anyone who has any suggestions to help me with this.

 

Jun 27 11:45:41 Server CA Backup/Restore: Stopping Vikunja-API
Jun 27 11:46:41 Server kernel: docker0: port 11(veth56fb8cf) entered disabled state
Jun 27 11:46:41 Server kernel: vethd5c5710: renamed from eth0
Jun 27 11:46:41 Server avahi-daemon[9164]: Interface veth56fb8cf.IPv6 no longer relevant for mDNS.
Jun 27 11:46:41 Server avahi-daemon[9164]: Leaving mDNS multicast group on interface veth56fb8cf.IPv6 with address fe80::c41a:f2ff:fea7:160c.
Jun 27 11:46:41 Server kernel: docker0: port 11(veth56fb8cf) entered disabled state
Jun 27 11:46:41 Server kernel: device veth56fb8cf left promiscuous mode
Jun 27 11:46:41 Server kernel: docker0: port 11(veth56fb8cf) entered disabled state
Jun 27 11:46:41 Server avahi-daemon[9164]: Withdrawing address record for fe80::c41a:f2ff:fea7:160c on veth56fb8cf.
Jun 27 11:46:41 Server CA Backup/Restore: docker stop -t 60 Vikunja-API
Jun 27 11:46:41 Server CA Backup/Restore: Stopping WizNote
Jun 27 11:47:42 Server kernel: docker0: port 12(vethce1e5eb) entered disabled state
Jun 27 11:47:42 Server kernel: veth2a8fc33: renamed from eth0
Jun 27 11:47:42 Server avahi-daemon[9164]: Interface vethce1e5eb.IPv6 no longer relevant for mDNS.
Jun 27 11:47:42 Server avahi-daemon[9164]: Leaving mDNS multicast group on interface vethce1e5eb.IPv6 with address fe80::4002:8ff:fec8:c984.
Jun 27 11:47:42 Server kernel: docker0: port 12(vethce1e5eb) entered disabled state
Jun 27 11:47:42 Server kernel: device vethce1e5eb left promiscuous mode
Jun 27 11:47:42 Server kernel: docker0: port 12(vethce1e5eb) entered disabled state
Jun 27 11:47:42 Server avahi-daemon[9164]: Withdrawing address record for fe80::4002:8ff:fec8:c984 on vethce1e5eb.
Jun 27 11:47:42 Server CA Backup/Restore: docker stop -t 60 WizNote
Jun 27 11:47:42 Server CA Backup/Restore: Backing up USB Flash drive config folder to 
Jun 27 11:47:42 Server CA Backup/Restore: Using command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/backups/unRAID/flash/" > /dev/null 2>&1
Jun 27 11:47:51 Server CA Backup/Restore: Changing permissions on backup
Jun 27 11:47:53 Server CA Backup/Restore: Backing up libvirt.img to /mnt/user/backups/unRAID/libvirt/
Jun 27 11:47:53 Server CA Backup/Restore: Using Command: /usr/bin/rsync  -avXHq --delete  --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/backups/unRAID/libvirt/" > /dev/null 2>&1
Jun 27 11:47:53 Server CA Backup/Restore: Changing permissions on backup
Jun 27 11:47:53 Server CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/backups/unRAID/appdata/[email protected]
Jun 27 11:47:53 Server CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/backups/unRAID/appdata/[email protected]/CA_backup.tar'  --exclude "plex/Library/Application Support/Plex Media Server/Cache/PhotoTranscoder" --exclude "plex/Library/Application Support/Plex Media Server/Cache/Transcode" --exclude "sonarr/MediaCover" --exclude "sonarr-uhd/MediaCover" --exclude "radarr/MediaCover" --exclude "radarr-uhd/MediaCover" --exclude "lidarr/MediaCover" --exclude "tautulli/cache" --exclude "joplinapp"  * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress
Jun 27 12:07:48 Server emhttpd: spinning down /dev/sdk
Jun 27 12:09:15 Server emhttpd: spinning down /dev/sdn
Jun 27 12:10:14 Server emhttpd: spinning down /dev/sdh
Jun 27 12:18:23 Server nginx: 2022/06/27 12:18:23 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:18:23 Server nginx: 2022/06/27 12:18:23 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:18:26 Server nginx: 2022/06/27 12:18:26 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:18:26 Server nginx: 2022/06/27 12:18:26 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:24:22 Server nginx: 2022/06/27 12:24:22 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:24:22 Server nginx: 2022/06/27 12:24:22 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:24:25 Server nginx: 2022/06/27 12:24:25 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:24:25 Server nginx: 2022/06/27 12:24:25 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:30:23 Server nginx: 2022/06/27 12:30:23 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:30:23 Server nginx: 2022/06/27 12:30:23 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:30:26 Server nginx: 2022/06/27 12:30:26 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:30:26 Server nginx: 2022/06/27 12:30:26 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:36:23 Server nginx: 2022/06/27 12:36:23 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:36:23 Server nginx: 2022/06/27 12:36:23 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:36:26 Server nginx: 2022/06/27 12:36:26 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:36:26 Server nginx: 2022/06/27 12:36:26 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:42:22 Server nginx: 2022/06/27 12:42:22 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:42:22 Server nginx: 2022/06/27 12:42:22 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:42:25 Server nginx: 2022/06/27 12:42:25 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:42:25 Server nginx: 2022/06/27 12:42:25 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:48:23 Server nginx: 2022/06/27 12:48:23 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:48:23 Server nginx: 2022/06/27 12:48:23 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:48:26 Server nginx: 2022/06/27 12:48:26 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:48:26 Server nginx: 2022/06/27 12:48:26 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:52:39 Server CA Backup/Restore: Backup Complete
Jun 27 12:52:39 Server Docker Auto Update: Community Applications Docker Autoupdate running
Jun 27 12:52:39 Server Docker Auto Update: Checking for available updates
Jun 27 12:54:21 Server nginx: 2022/06/27 12:54:21 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:54:21 Server nginx: 2022/06/27 12:54:21 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.168:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:54:24 Server nginx: 2022/06/27 12:54:24 [error] 4045#4045: recv() failed (113: No route to host) while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"
Jun 27 12:54:24 Server nginx: 2022/06/27 12:54:24 [error] 4045#4045: OCSP responder prematurely closed connection while requesting certificate status, responder: r3.o.lencr.org, peer: 76.73.236.177:80, certificate: "/boot/config/ssl/certs/certificate_bundle.pem"

 

Edited by CorneliousJD
Link to comment

I've just noticed that this plugin stopped doing my weekly backups approximately 2 months ago. Any suggestions? I'm running 6.9.1 and was just verifying my backup status before upgrading to 6.10.x. Running it manually displays the output below.

 

Edit: It hasn't been backing up since May 2021. Ugh.

 

 

Backup / Restore Status: Not Running
Backing Up appData from /mnt/cache/appdata/ to /mnt/user/backups/appdatabackup/[email protected]
Executing tar: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" "/mnt/user/system/libvirt/libvirt.img" "/mnt/user/backups/libvertbackup/" > /dev/null 2>&1
2022/07/03 07:23:06 [10626] building file list
2022/07/03 07:23:06 [10626] sent 75 bytes received 19 bytes 188.00 bytes/sec
2022/07/03 07:23:06 [10626] total size is 1,073,741,824 speedup is 11,422,785.36
Verifying Backup
Using command: cd '/mnt/cache/appdata/' && /usr/bin/tar --diff -C '/mnt/cache/appdata/' -af '/mnt/user/backups/appdatabackup/[email protected]/CA_backup.tar.gz' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress
Searching for updates to docker applications
Backup/Restore Complete. tar Return Value: 0
Backup / Restore Completed

 

Edited by stultus
Link to comment
11 minutes ago, Squid said:

Is there anything in /mnt/cache/appdata? It looks like there's nothing to actually back up.

I uninstalled the plugin and then reinstalled it, which forced me to remap the path. Now I'm backing up /mnt/user/appdata and it's humming along again!

Link to comment
On 6/27/2022 at 1:01 PM, CorneliousJD said:

 

I stand corrected on the manual run: I turned on checking for app updates after a backup, and it hangs there now too. Below is the relevant system log info from invoking a CA appdata backup.

 

I am unable to ping known-good addresses such as 8.8.8.8 or 9.9.9.9; I just get:

From 10.0.0.10 icmp_seq=1 Destination Host Unreachable

 

EDIT: I decided to make a new thread for this, as it seems to be a pretty weird, possibly deeper and/or system-related issue, and I don't want to muck up Squid's thread here.

 

New thread here for anyone who has any suggestions to help me on this.

 

 

Just wanted to note that I reverted to macvlan and a separate Docker VLAN for the few containers that need static IPs set.

 

So far the issue has not resurfaced.

Bug report created here, if anyone is interested in chasing this further.

 

 

Link to comment
  • 2 weeks later...

Just to jump in on this issue: I recently updated to 6.10.3, and today I went to do a manual backup using the plugin, but no backup was initiated. I reverted to 6.10.2 and all is working fine. Before reverting, I tried using macvlan instead of ipvlan, but that made no difference. There is definitely an issue between the plugin and 6.10.3. For now I'll just stay on 6.10.2.

Link to comment

I was wondering: is there any reason why you can't restore only specific containers, rather than having to restore everything? From what I see, you can exclude containers from the backup, but it seems this should be the reverse: it would be nice to back up everything and then, on restore, be able to choose which ones to bring back. I've always wondered why this isn't part of the Unraid core. Is there some type of limitation that doesn't allow us to have this kind of feature?

Edited by Encarnacao
Link to comment
  • Squid locked this topic
This topic is now closed to further replies.