[Plugin] CA Appdata Backup / Restore - Deprecated


Squid

Recommended Posts

3 minutes ago, themaxxz said:

There is no rush.

 

I had a quick look at this myself, and wrapping $excluded in basename() appears to work, at least when clicking the 'Backup now' button.

 

In /usr/local/emhttp/plugins/ca.backup/scripts/backup.php:

 

$rsyncExcluded .= '--exclude "'.basename($excluded).'" ';

 

2017.09.23a has been released; it fixes everything. 

 

NB:  basename() won't work if the user manually adds subfolders.  Instead, I remove $source from the excluded path.
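 

For anyone following along, the difference comes down to how rsync matches --exclude patterns. A rough illustration from the shell, with made-up paths rather than the plugin's actual values:

# Example paths only - not the plugin's real configuration
SRC="/mnt/user/appdata/"
DST="/mnt/user/Backup/AppData/"
EXCLUDED="/mnt/user/appdata/plex/Library"   # a subfolder a user might enter

# 1) Passing the absolute path straight through excludes nothing: patterns
#    with a leading "/" are anchored at the root of the transfer (i.e. $SRC),
#    so rsync never finds a match.
rsync -avXH --exclude "$EXCLUDED" "$SRC" "$DST"

# 2) basename reduces it to "Library", which matches any folder named
#    Library anywhere in the tree - fine for top-level entries, too greedy
#    for manually added subfolders.
rsync -avXH --exclude "$(basename "$EXCLUDED")" "$SRC" "$DST"

# 3) Stripping the source prefix leaves "plex/Library", which rsync matches
#    against the path relative to the transfer root - the folder the user meant.
rsync -avXH --exclude "${EXCLUDED#$SRC}" "$SRC" "$DST"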

Edited by Squid
Link to comment

Does the CA Backup/Restore plugin respect the settings of CA Docker Autostart Manager when bringing containers back up after a scheduled backup runs as well? Also, when it shuts down containers before running the backup, any chance it does this in reverse order? Not only would I like to be sure certain containers come up first, but also that they shut down last. A good example is MariaDB: it needs to be running before the things that use it (of course), but I also don't want it to shut down first and be unavailable while other containers do any last writes/flushes before they go down.

 

Also, tangentially related: after a scheduled backup runs, which containers does it bring back up? Only those set to autostart, everything, or only those that were previously running? I have some containers I'm just playing with, so they don't run all the time, and I'm wondering whether they would get started after a backup completes.

Link to comment
4 hours ago, deusxanime said:

Does the CA Backup/Restore plugin respect the settings of CA Docker Autostart Manager

Yes

4 hours ago, deusxanime said:

Also, when it shuts down containers before running the backup, any chance it does this in reverse?

No.  Alphabetical

4 hours ago, deusxanime said:

what containers does it bring back up?

All the ones that were running

 

Link to comment

Is it normal for the system to be completely unresponsive during a backup?  Right now the main unRAID UI will not load, ssh'ing into the system means 10-15 seconds of waiting before I see a prompt, sysload is 13/13/13, all commands from the shell are slow or hang, 'ps -aux > /mnt/user/bob/ps.aux.10.2.17' has been running for over ten minutes, the server is not visible on the network, and files and shares are unavailable...

 

htop won't load at all, top displays the first screen then freezes (no updates), sysload is now 15+ across the board, and rsync (x3) and unraidd are the processes using the most CPU.  I can't even get a read on I/O; the system is running like an 8088.

 

--

 

root@ffs2:~# ps -Ao user,uid,comm,pid,pcpu,tty --sort=-pcpu | head -n 6
USER       UID COMMAND           PID %CPU TT
root         0 unraidd         18972  1.4 ?
root         0 mdrecoveryd     18608  0.5 ?
root         0 rsync           27779  0.5 ?
root         0 rsync           27780  0.5 ?
root         0 rsync           27781  0.4 ?
root@ffs2:~#
 

--

 

login as: root
[email protected]'s password:
Last login: Mon Oct  2 06:09:10 2017 from 192.168.0.51
Linux 4.9.30-unRAID.
root@ffs2:~# iostat -c
Linux 4.9.30-unRAID (ffs2)      10/02/2017      _x86_64_        (4 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.12    0.39    0.93    0.54    0.00   97.03

root@ffs2:~# iostat -d
Linux 4.9.30-unRAID (ffs2)      10/02/2017      _x86_64_        (4 CPU)

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
loop0             1.13         4.49       270.81   18255720 1101094532
sda               0.01         0.47         0.01    1910116      38352
sdb               4.22      1930.20         0.02 7848087954      86480
sdc               4.44      1966.14        15.41 7994186038   62667384
sdd              10.05      2046.40      1938.02 8320543902 7879879916
sde               4.22      1929.01         0.04 7843253710     169012
sdf               5.76      2933.25        14.64 11926427510   59515132
sdg               4.30        43.41      2370.89  176521033 9639881435
sdh               0.88        71.96       176.82  292564905  718934752
sdi               8.94      4231.88       382.08 17206559638 1553528652
sdj               6.37      3132.00       241.21 12734516378  980759664
sdk               8.51      3868.15         0.00 15727639522       9776
sdl               4.28      1937.88         0.02 7879293706      94884
sdm               4.23      1932.66         2.17 7858055862    8835484
sdn               4.20      1929.04         0.01 7843352342      32708
sdo               3.12      1443.96         0.02 5871056118      76604
sdp               8.40      3862.24         0.00 15703619642      19768
sdq               8.42      3855.84         0.01 15677601186      30960
sdr               9.67      4082.29        98.62 16598321798  400998084
md1               1.01       121.30         9.89  493217926   40221989
md2               0.11        11.95         0.00   48603403      13734
md3               0.17        17.86         0.00   72634829       4883
md4               1.19       133.39        98.62  542338732  400976865
md5               0.04         2.29         2.17    9308474    8825273
md6               0.02         0.84         0.01    3408013      24923
md7               0.08         5.55         0.01   22577745      23366
md8               0.01         0.29         0.02    1160884      67654
md9               0.09         9.66         0.02   39291547      85629
md10              0.03         1.99         0.02    8095153      74658
md11              0.23        22.53        15.41   91616252   62655428
md12              0.04         0.78         0.04    3188537     155432
md13              0.49         1.65       241.16    6707814  980539053
md14              0.49        29.31        14.59  119189684   59339054

root@ffs2:~# uptime
 06:31:54 up 47 days,  1:26,  1 user,  load average: 17.00, 17.00, 16.57
root@ffs2:~#
 

 

--

 

 

now 'iostat -x 1' is reporting this:

 

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.25    0.00    0.00   99.75

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
loop0             0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sde               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdf               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdg               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdh               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdi               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdj               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdk               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdl               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdm               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdn               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdo               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdp               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdq               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdr               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md1               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md2               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md3               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md4               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md5               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md6               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md7               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md8               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md9               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md10              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md11              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md12              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md13              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md14              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00

 

 

but system is still unresponsive.

 

 

 

 

Edited by tucansam
Link to comment

There is some discussion from about a month ago with some users having issues when deleting backups.

 

What is the best way to delete old backups?  Can I do it from Windows?  Should I do it with an ssh session to my root ID on my unRAID server?  Or is there another recommended way to do this?

Edited by wayner
Link to comment
On 10/2/2017 at 9:22 AM, wayner said:

There is some discussion from about a month ago with some users having issues when deleting backups.

 

What is the best way to delete old backups?  Can I do it from Windows?  Should I do it with an ssh session to my root ID on my unRAID server?  Or is there another recommended way to do this?

 

I have been trying to delete old backups as well, to no avail. I have had to do two unclean shutdowns to get the system to respond again. The delete starts OK, but after a few minutes it just freezes my system. I have tried it in Windows, but holy moly, the Plex docker by itself is over 30 gigs, with folder after folder of junk. SSH and rmdir do not work, and Krusader does not work either.

I am going to move everything but my backup folder off the drive I have configured and pull the drive out, shrink the array and reformat that drive. 

Just too many folders taking up way too much space. And with a weekly backup running since July, it is getting out of hand.

 

If anyone reads this, I recommend not backing up your Plex folder. All the other dockers seem to delete OK.

Link to comment

I, too, am trying to recover from having my Plex folder backed up.  I've stopped running backups due to my system becoming unresponsive.  Right now I'm just trying to delete the old backups first so I can get back to a state where I can run backups again (minus Plex, of course).

 

I'm probably also going to have to go the route where I move everything but the backups off a disk, pull the disk, and shrink the array.  The backup directory just can't be killed with 'rm -rf' or 'find -delete' without freezing my system and forcing an unclean shutdown.

Link to comment
On 10/2/2017 at 8:38 AM, tucansam said:

Is it normal for the system to be completely unresponsive during a backup?  Right now the main unRAID UI will not load, ssh'ing into the system means 10-15 seconds of waiting before I see a prompt, sysload is 13/13/13, all commands from the shell are slow or hang, 'ps -aux > /mnt/user/bob/ps.aux.10.2.17' has been running for over ten minutes, the server is not visible on the network, and files and shares are unavailable...

 

htop won't load at all, top displays the first screen then freezes (no updates), sysload is now 15+ across the board, and rsync (x3) and unraidd are the processes using the most CPU.  I can't even get a read on I/O; the system is running like an 8088.

 

(...)
 

but system is still unresponsive.

 

I woke up to the same issue this morning (I have it scheduled to back up at 3 AM on the 5th of every month). All other times it only takes a couple of hours to complete, but this morning the dashboard wouldn't load and the docker containers were never started back up. I looked at the backup folder and I'm not sure anything was backed up last night.

 

Had to cut power and boot it back up just to get the dashboard responsive again, and I'm trying to do a manual backup now.

Link to comment
  • 2 weeks later...

My unRaid just ran a scheduled backup, and it didn't update the dockers. I have CA Auto Update Applications installed. Are there any specific settings that have to be checked other than "Update Applications On Restart? Yes"?

 

EDIT: I found this line in the log: 

Oct 17 10:06:23 Tower Docker Auto Update: No settings file found

Which leads to the question: do I manually create this settings file, or does it get created once the correct settings are applied?

 

Quote

Oct 17 10:00:01 Tower root: #######################################
Oct 17 10:00:01 Tower root: Community Applications appData Backup
Oct 17 10:00:01 Tower root: Applications will be unavailable during
Oct 17 10:00:01 Tower root: this process.  They will automatically
Oct 17 10:00:01 Tower root: be restarted upon completion.
Oct 17 10:00:01 Tower root: #######################################
Oct 17 10:00:01 Tower sSMTP[25886]: Creating SSL connection to host
Oct 17 10:00:01 Tower sSMTP[25886]: SSL connection using AES128-SHA256
Oct 17 10:00:03 Tower sSMTP[25886]: Sent mail for ***@**** (221 ****) uid=0 username=root outbytes=638
Oct 17 10:00:03 Tower root: Stopping couchpotato
Oct 17 10:00:07 Tower kernel: docker0: port 1(veth114a706) entered disabled state
Oct 17 10:00:07 Tower kernel: vethf510cb7: renamed from eth0
Oct 17 10:00:07 Tower kernel: docker0: port 1(veth114a706) entered disabled state
Oct 17 10:00:07 Tower kernel: device veth114a706 left promiscuous mode
Oct 17 10:00:07 Tower kernel: docker0: port 1(veth114a706) entered disabled state
Oct 17 10:00:07 Tower root: docker stop -t 120 couchpotato
Oct 17 10:00:07 Tower root: Stopping Krusader
Oct 17 10:00:08 Tower kernel: vethf1f1fcb: renamed from eth0
Oct 17 10:00:08 Tower kernel: docker0: port 2(vethc80b678) entered disabled state
Oct 17 10:00:08 Tower kernel: docker0: port 2(vethc80b678) entered disabled state
Oct 17 10:00:08 Tower kernel: device vethc80b678 left promiscuous mode
Oct 17 10:00:08 Tower kernel: docker0: port 2(vethc80b678) entered disabled state
Oct 17 10:00:08 Tower root: docker stop -t 120 Krusader
Oct 17 10:00:08 Tower root: Stopping libresonic
Oct 17 10:00:11 Tower kernel: vethb46c2d3: renamed from eth0
Oct 17 10:00:11 Tower kernel: docker0: port 3(veth19fbdcb) entered disabled state
Oct 17 10:00:11 Tower kernel: docker0: port 3(veth19fbdcb) entered disabled state
Oct 17 10:00:11 Tower kernel: device veth19fbdcb left promiscuous mode
Oct 17 10:00:11 Tower kernel: docker0: port 3(veth19fbdcb) entered disabled state
Oct 17 10:00:11 Tower root: docker stop -t 120 libresonic
Oct 17 10:00:11 Tower root: Stopping mylar
Oct 17 10:00:15 Tower kernel: veth7a55ded: renamed from eth0
Oct 17 10:00:15 Tower kernel: docker0: port 4(vethcf3fe72) entered disabled state
Oct 17 10:00:15 Tower kernel: docker0: port 4(vethcf3fe72) entered disabled state
Oct 17 10:00:15 Tower kernel: device vethcf3fe72 left promiscuous mode
Oct 17 10:00:15 Tower kernel: docker0: port 4(vethcf3fe72) entered disabled state
Oct 17 10:00:15 Tower root: docker stop -t 120 mylar
Oct 17 10:00:15 Tower root: Stopping plex
Oct 17 10:00:18 Tower root: docker stop -t 120 plex
Oct 17 10:00:18 Tower root: Stopping plexpy
Oct 17 10:00:22 Tower kernel: veth04cb545: renamed from eth0
Oct 17 10:00:22 Tower kernel: docker0: port 5(veth87671cf) entered disabled state
Oct 17 10:00:22 Tower kernel: docker0: port 5(veth87671cf) entered disabled state
Oct 17 10:00:22 Tower kernel: device veth87671cf left promiscuous mode
Oct 17 10:00:22 Tower kernel: docker0: port 5(veth87671cf) entered disabled state
Oct 17 10:00:22 Tower root: docker stop -t 120 plexpy
Oct 17 10:00:22 Tower root: Stopping rutorrent
Oct 17 10:00:25 Tower kernel: docker0: port 6(vethb610355) entered disabled state
Oct 17 10:00:25 Tower kernel: vethc116493: renamed from eth0
Oct 17 10:00:25 Tower kernel: docker0: port 6(vethb610355) entered disabled state
Oct 17 10:00:25 Tower kernel: device vethb610355 left promiscuous mode
Oct 17 10:00:25 Tower kernel: docker0: port 6(vethb610355) entered disabled state
Oct 17 10:00:26 Tower root: docker stop -t 120 rutorrent
Oct 17 10:00:26 Tower root: Stopping sonarr
Oct 17 10:00:29 Tower kernel: docker0: port 7(veth9970127) entered disabled state
Oct 17 10:00:29 Tower kernel: veth54fd2fd: renamed from eth0
Oct 17 10:00:29 Tower kernel: docker0: port 7(veth9970127) entered disabled state
Oct 17 10:00:29 Tower kernel: device veth9970127 left promiscuous mode
Oct 17 10:00:29 Tower kernel: docker0: port 7(veth9970127) entered disabled state
Oct 17 10:00:29 Tower root: docker stop -t 120 sonarr
Oct 17 10:00:29 Tower root: Stopping ubooquity
Oct 17 10:00:33 Tower kernel: docker0: port 8(veth4dd6e33) entered disabled state
Oct 17 10:00:33 Tower kernel: vethb6d21f4: renamed from eth0
Oct 17 10:00:33 Tower kernel: docker0: port 8(veth4dd6e33) entered disabled state
Oct 17 10:00:33 Tower kernel: device veth4dd6e33 left promiscuous mode
Oct 17 10:00:33 Tower kernel: docker0: port 8(veth4dd6e33) entered disabled state
Oct 17 10:00:33 Tower root: docker stop -t 120 ubooquity
Oct 17 10:00:33 Tower root: Deleting Old USB Backup
Oct 17 10:00:33 Tower root: Backing up USB Flash drive config folder to /mnt/user/Backup/unRaid/USB/
Oct 17 10:00:33 Tower root: Using command: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup.datastore/appdata_backup.log" /boot/ "/mnt/user/Backup/unRaid/USB/" > /dev/null 2>&1
Oct 17 10:00:35 Tower ntpd[1855]: Deleting interface #5 docker0, 172.17.0.1#123, interface stats: received=0, sent=0, dropped=0, active_time=595404 secs
Oct 17 10:00:56 Tower root: Backing up libvirt.img to /mnt/user/Backup/unRaid/VM_XML/
Oct 17 10:00:59 Tower root: Backing Up appData from /mnt/user/appdata/ to /mnt/user/Backup/unRaid/AppData/[email protected]
Oct 17 10:00:59 Tower root: Using command: /usr/bin/rsync -avXHq --delete --exclude "/mnt/cache/docker.img"  --log-file="/var/lib/docker/unraid/ca.backup.datastore/appdata_backup.log" "/mnt/user/appdata/" "/mnt/user/Backup/unRaid/AppData/[email protected]" > /dev/null 2>&1
Oct 17 10:02:21 Tower kernel: unregister_netdevice: waiting for lo to become free. Usage count = 1
Oct 17 10:02:32 Tower kernel: unregister_netdevice: waiting for lo to become free. Usage count = 1
Oct 17 10:06:05 Tower root: Backup Complete
Oct 17 10:06:05 Tower Docker Auto Update: Community Applications Docker Autoupdate running
Oct 17 10:06:05 Tower Docker Auto Update: Checking for available updates
Oct 17 10:06:23 Tower Docker Auto Update: No settings file found
Oct 17 10:06:23 Tower root: Restarting couchpotato
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered disabled state
Oct 17 10:06:23 Tower kernel: device vetheab4039 entered promiscuous mode
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered forwarding state
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered disabled state
Oct 17 10:06:23 Tower kernel: eth0: renamed from vethf848440
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 1(vetheab4039) entered forwarding state
Oct 17 10:06:23 Tower root: Restarting Krusader
Oct 17 10:06:23 Tower kernel: docker0: port 2(veth9652c65) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 2(veth9652c65) entered disabled state
Oct 17 10:06:23 Tower kernel: device veth9652c65 entered promiscuous mode
Oct 17 10:06:23 Tower kernel: docker0: port 2(veth9652c65) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 2(veth9652c65) entered forwarding state
Oct 17 10:06:23 Tower kernel: eth0: renamed from veth4101158
Oct 17 10:06:23 Tower root: Restarting libresonic
Oct 17 10:06:23 Tower kernel: docker0: port 3(veth87c0859) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 3(veth87c0859) entered disabled state
Oct 17 10:06:23 Tower kernel: device veth87c0859 entered promiscuous mode
Oct 17 10:06:23 Tower kernel: docker0: port 3(veth87c0859) entered blocking state
Oct 17 10:06:23 Tower kernel: docker0: port 3(veth87c0859) entered forwarding state
Oct 17 10:06:24 Tower kernel: eth0: renamed from veth92cac45
Oct 17 10:06:24 Tower root: Restarting mylar
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered blocking state
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered disabled state
Oct 17 10:06:24 Tower kernel: device veth32aba3e entered promiscuous mode
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered blocking state
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered forwarding state
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered disabled state
Oct 17 10:06:24 Tower kernel: eth0: renamed from veth7687b1e
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered blocking state
Oct 17 10:06:24 Tower kernel: docker0: port 4(veth32aba3e) entered forwarding state
Oct 17 10:06:24 Tower root: Restarting plex
Oct 17 10:06:24 Tower root: Restarting plexpy
Oct 17 10:06:24 Tower kernel: docker0: port 5(veth8c3dbbf) entered blocking state
Oct 17 10:06:24 Tower kernel: docker0: port 5(veth8c3dbbf) entered disabled state
Oct 17 10:06:24 Tower kernel: device veth8c3dbbf entered promiscuous mode
Oct 17 10:06:24 Tower kernel: docker0: port 5(veth8c3dbbf) entered blocking state
Oct 17 10:06:24 Tower kernel: docker0: port 5(veth8c3dbbf) entered forwarding state
Oct 17 10:06:25 Tower kernel: eth0: renamed from veth65c8ef2
Oct 17 10:06:25 Tower root: Restarting rutorrent
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered blocking state
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered disabled state
Oct 17 10:06:25 Tower kernel: device veth80c74c2 entered promiscuous mode
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered blocking state
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered forwarding state
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered disabled state
Oct 17 10:06:25 Tower kernel: eth0: renamed from veth27d0b83
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered blocking state
Oct 17 10:06:25 Tower kernel: docker0: port 6(veth80c74c2) entered forwarding state
Oct 17 10:06:25 Tower root: Restarting sonarr
Oct 17 10:06:25 Tower kernel: docker0: port 7(vethca1008c) entered blocking state
Oct 17 10:06:25 Tower kernel: docker0: port 7(vethca1008c) entered disabled state
Oct 17 10:06:25 Tower kernel: device vethca1008c entered promiscuous mode
Oct 17 10:06:25 Tower kernel: docker0: port 7(vethca1008c) entered blocking state
Oct 17 10:06:25 Tower kernel: docker0: port 7(vethca1008c) entered forwarding state
Oct 17 10:06:26 Tower kernel: eth0: renamed from veth5ab2e62
Oct 17 10:06:26 Tower root: Restarting ubooquity
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered blocking state
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered disabled state
Oct 17 10:06:26 Tower kernel: device veth1547595 entered promiscuous mode
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered blocking state
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered forwarding state
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered disabled state
Oct 17 10:06:26 Tower kernel: eth0: renamed from vetha24828b
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered blocking state
Oct 17 10:06:26 Tower kernel: docker0: port 8(veth1547595) entered forwarding state
Oct 17 10:06:26 Tower root: #######################
Oct 17 10:06:26 Tower root: appData Backup complete
Oct 17 10:06:26 Tower root: #######################
Oct 17 10:06:26 Tower root: Rsync log to flash drive disabled
Oct 17 10:06:26 Tower sSMTP[7507]: Creating SSL connection to host
Oct 17 10:06:26 Tower sSMTP[7507]: SSL connection using AES128-SHA256
Oct 17 10:06:28 Tower ntpd[1855]: Listen normally on 6 docker0 172.17.0.1:123
Oct 17 10:06:28 Tower ntpd[1855]: new interface(s) found: waking up resolver
Oct 17 10:06:28 Tower sSMTP[7507]: Sent mail for ***** (221 ****) uid=0 username=root outbytes=627
Oct 17 10:06:29 Tower root: Backup / Restore Completed

 

Edited by Hogwind
Link to comment
My unRaid just ran a scheduled backup, and it didn't update the dockers. I have CA Auto Update Applications installed. Are there any specific settings that have to be checked other than "Update Applications On Restart? Yes"?
 
EDIT: I found this line in the log: 
Oct 17 10:06:23 Tower Docker Auto Update: No settings file found

Which leads to the question: do I manually create this settings file, or does it get created once the correct settings are applied?
 

(...)

 
You need to go to the Auto Update settings and set it up.
Link to comment

Auto Update Settings: Docker, Update Check Frequency: Disabled. Then either set it to update all applications, or set it to No and cherry-pick what you want updated.

 

Backup Settings: Update Applications on Restart: Yes

 

Edit:  Even though it basically comes with those as defaults, because the cfg file doesn't exist, Backup assumes you don't want it to run.  Make a change, any change, in the settings, change it back, and apply.  The cfg file will then get written.

Edited by Squid
  • Like 1
Link to comment
18 hours ago, Squid said:

Auto Update Settings: Docker, Update Check Frequency: Disabled. Then either set it to update all applications, or set it to No and cherry-pick what you want updated.

 

Backup Settings: Update Applications on Restart: Yes

 

Edit:  Even though it basically comes with those as defaults, because the cfg file doesn't exist, Backup assumes you don't want it to run.  Make a change, any change, in the settings, change it back, and apply.  The cfg file will then get written.

Thanks, I'll try that. :)

Edited by Hogwind
Link to comment
On 02/10/2017 at 2:38 PM, tucansam said:

Is it normal for the system to be completely unresponsive during a backup?  Right now the main unRAID UI will not load, ssh'ing into the system means 10-15 seconds of waiting before I see a prompt, sysload is 13/13/13, all commands from the shell are slow or hang, 'ps -aux > /mnt/user/bob/ps.aux.10.2.17' has been running for over ten minutes, the server is not visible on the network, and files and shares are unavailable...

 

htop won't load at all, top displays the first screen then freezes (no updates), sysload is now 15+ across the board, and rsync (x3) and unraidd are the processes using the most CPU.  I can't even get a read on I/O; the system is running like an 8088.

 

(...)
 

but system is still unresponsive.

 

Did you find a solution to this? I've got the same symptoms, load going through the roof, unresponsive http UI, poor performance on IO. How did you get iostat installed?

 

I've since tried to delete the original backups, but that resulted in a full freeze like @Harro and @cowboytronic described. Is the only way out to format the affected drives?

Link to comment
19 hours ago, Fredrick said:

 

Did you find a solution to this? I've got the same symptoms, load going through the roof, unresponsive http UI, poor performance on IO. How did you get iostat installed?

 

I've since tried to delete the original backups, but that resulted in a full freeze like @Harro and @cowboytronic described. Is the only way out to format the affected drives?

 

Gonna answer myself here. 

 

I booted into safe mode and removed the previous backup using Midnight Commander. It gives you actual progress information, so it's easier to see if things are stuck. Each Plex backup was between 250,000 and 320,000 files/directories (with 15,000 items in my library...), so deleting each one took a lot of time. This seems very inefficient of Plex, but that is also outside the scope of this discussion, I guess. 

 

Also note that this has nothing to do with this plugin; it is a result of how Plex stores cache/metadata/media for its library.
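 

If you want to gauge how bad your own Plex folder is before the next run, a quick file count gives a rough idea (the path below is the usual appdata location; adjust it to yours):

# Count the files under the Plex appdata folder (adjust the path to your setup)
find "/mnt/user/appdata/plex" -type f | wc -l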

Link to comment

Guys, I've been researching, and the problem is a combination of how the rm command works and the fact that XFS is dog slow at deleting an insane number of files.  (Also, I am still unable to actually replicate the problem... probably because my Plex appdata only contains ~500,000 files instead of millions.)

 

This is one suggestion I have to get rid of the folder.  Note that this is definitely in the danger zone if you mistype the path.  I'm going to test an update to the plugin using this approach instead, so you might want to hold off until then.

 

mkdir /tmp/empty
rsync -avXH --delete /tmp/empty/ /mnt/user/Backups

Substitute /mnt/user/Backups with your appropriate appdata backup share.  Failure to properly escape any spaces, etc. could result in data loss, though.  You've been warned.

 

 

Apparently rsync'ing an empty folder and having it do the deleting is far more efficient than the Linux rm command I've been using.
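 

If you want to sanity-check the target before letting --delete loose, rsync's dry-run flag will list what would be removed without actually touching anything (same example share as above):

mkdir -p /tmp/empty

# Preview the deletions first; -n / --dry-run changes nothing on disk
rsync -avXH --delete --dry-run /tmp/empty/ /mnt/user/Backups | head -n 50

# If the listing looks right, run it for real
rsync -avXH --delete /tmp/empty/ /mnt/user/Backups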

Edited by Squid
  • Like 1
Link to comment
9 hours ago, Squid said:

Guys, I've been researching, and the problem is a combination of how the rm command works and the fact that XFS is dog slow at deleting an insane number of files.  (Also, I am still unable to actually replicate the problem... probably because my Plex appdata only contains ~500,000 files instead of millions.)

 

(...)

 

Using rsync is a cool workaround. Thanks for the info; I'll have to give that a try when I get my backups going!

Link to comment
12 minutes ago, deusxanime said:

 

Using rsync is a cool workaround. Thanks for the info; I'll have to give that a try when I get my backups going!

I'm testing the update right now.  It's still going to be slow, as it's an inherent problem with XFS and massive deletions (Linux has no way to delete a directory and all of its contents in a single operation; every file has to be unlinked individually), but so long as the system doesn't have a heart attack, we should be good.

 

Link to comment

This is where we stand.

 

rsync vs rm is not the solution.

 

Managed to replicate this after sitting there and doing 5 hours of full backups, renaming all of the sets, and then watching them delete (and watching a syslog tail / htop / iotop; it's been a fun Saturday >:( )

 

Issue is one of the following:

  • XFS et al. update the journal constantly as metadata changes.  During a deletion there are many, many metadata updates per file (more than when simply creating the file).  This constant updating of the journal / log bogs the system right down, and eventually it basically just halts.  I'm sure that if you sit there long enough it may eventually recover.  There are many, many reports of this via Google, along with some workarounds, none of which are particularly applicable to unRaid.  Completely out of my control.
  • Running out of memory.  During the deletion, memory usage continually climbs in proportion to the number of files.  At one point my system was at ~90% memory in use.  Completely out of my control.

 

I have a plan that will still allow dated backups and not be a problem to delete (namely, switching over to a zip or tar archive), but it's definitely not going to happen in the immediate future, and it may also come with its own caveats (performance in creating the archive?).

 

In the meantime, an update to the plugin is available.  Beyond a simple fix for a display aberration introduced by 6.4.0-rc10b, the change is a red note on the Use Dated Backups option advising that you may have problems with it.

 

If you're stuck with old dated sets, your best option would be to go through with Krusader or something and gradually delete the subfolders from your Plex backup (the main ones to separate from each other would be ...../TV Shows, ../Movies, and whatever ../Music is called).
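 

If you'd rather script that subfolder-at-a-time approach from the shell instead of using Krusader, here is a rough sketch using the same empty-directory rsync trick (the backup path is a placeholder - point it at your own old dated set):

#!/bin/bash
# Sketch only: delete a huge dated backup one top-level subfolder at a time
# so the system never has to unlink everything in a single operation.
BACKUP="/mnt/user/Backup/unRaid/AppData/OLD_DATED_SET"   # placeholder path

mkdir -p /tmp/empty

# Work through the top-level folders one at a time; point BACKUP at the
# Plex folder itself if you want to break the job up even further.
find "$BACKUP" -mindepth 1 -maxdepth 1 -type d | while read -r dir; do
    echo "Removing $dir ..."
    rsync -a --delete /tmp/empty/ "$dir/"
    rmdir "$dir"
done

# Whatever remains at the top level is small enough for a plain rm
rm -rf "$BACKUP"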

 

Edited by Squid
Link to comment

From what I've seen on the Plex forums, tar'ing is a recommended way to back it up, because of exactly the issue you are running into with millions of files. You only end up with a single file to delete/transfer, which can greatly improve times. By doing that, though, you don't have the option of doing incremental syncs, but the trade-off seems like it would probably be worth it rather than hanging your entire NAS! 

 

Not sure whether it makes sense to restrict that to Plex only, or to make it universal for all the appdata backups (maybe one tar for each folder in there?).
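 

For anyone who wants to experiment with the tar idea before the plugin changes, a rough sketch (not the plugin's implementation; the paths are examples based on the log earlier in the thread):

#!/bin/bash
# Archive each appdata folder into a single .tar.gz so a backup run is one
# file per container instead of hundreds of thousands of small files.
SRC="/mnt/user/appdata"
DST="/mnt/user/Backup/unRaid/AppData"    # example destination share
STAMP=$(date +%Y-%m-%d)

mkdir -p "$DST/$STAMP"

for dir in "$SRC"/*/; do
    name=$(basename "$dir")
    tar -czf "$DST/$STAMP/$name.tar.gz" -C "$SRC" "$name"
done

# Deleting an old run is then a handful of unlinks instead of millions:
# rm -rf "$DST/OLD_STAMP"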

Link to comment
