evakq8r Posted January 18, 2023

Hi, I've started receiving warnings that my log files are nearing 100% and have been trying to work out why. If I manually look at the docker.log.1 file, it shows a repeating 'starting signal loop' from moby, but I'm not sure why it's started within the last couple of weeks or whether it's related.

time="2023-01-19T01:11:08.810433865+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:11:08.810622094+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:11:08.810682107+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:11:08.811308407+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=40475 runtime=io.containerd.runc.v2
time="2023-01-19T01:11:08.885314367+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:11:08.885523569+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:11:08.885564943+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:11:08.886076138+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=40521 runtime=io.containerd.runc.v2
time="2023-01-19T01:12:11.044193548+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:12:11.044408590+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:12:11.044466804+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:12:11.044917385+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=1512 runtime=io.containerd.runc.v2
time="2023-01-19T01:12:11.550734620+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:12:11.550943282+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:12:11.550986842+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:12:11.551531510+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=1727 runtime=io.containerd.runc.v2
time="2023-01-19T01:13:13.201550504+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:13:13.201763393+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:13:13.201804197+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:13:13.202338359+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=13786 runtime=io.containerd.runc.v2
time="2023-01-19T01:13:14.762442280+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:13:14.762640405+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:13:14.762679552+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:13:14.763138656+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=14205 runtime=io.containerd.runc.v2
time="2023-01-19T01:14:15.347267221+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:14:15.347479993+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:14:15.347522144+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:14:15.348121927+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e4236caee8994d9f103782165ec3884e41400ce9094ae8aaa9917617ff313da pid=24400 runtime=io.containerd.runc.v2
time="2023-01-19T01:14:16.900834894+10:30" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
time="2023-01-19T01:14:16.900988734+10:30" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
time="2023-01-19T01:14:16.901031182+10:30" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
time="2023-01-19T01:14:16.901436282+10:30" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8e390fd60ea6e65820c5e4e5ab570c6c9998db4997f39e95d55e98d319bd2119 pid=24747 runtime=io.containerd.runc.v2

Diagnostics attached. Any help would be appreciated. Thank you.
notyourunraid-diagnostics-20230119-0126.zip
trurl Posted January 18, 2023

Why do you have a 200G docker.img? Have you had problems filling it? 20G is often more than enough. The usual cause of filling docker.img is an application writing to a path that isn't mapped.

Also, your appdata share has files on the array.

Syslog is being spammed with connections to port 23. Any idea what that is about?
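A quick way to check for that from a terminal — the SIZE column in docker ps -s is each container's writable layer, i.e. data written inside docker.img rather than to a mapped path:

docker ps -s      # per-container writable-layer size
docker system df  # overall image/container/volume usage inside docker.img

Anything showing many GB in docker ps -s is almost certainly writing to an unmapped path.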
evakq8r (Author) Posted January 18, 2023

16 minutes ago, trurl said:
Why do you have a 200G docker.img? Have you had problems filling it? 20G is often more than enough. The usual cause of filling docker.img is an application writing to a path that isn't mapped.

Good question. I don't recall when I made that change, but it's been like that for quite some time. If I were to change it down to 20G, would that impact the existing Docker containers I have, or would it just rebuild?

17 minutes ago, trurl said:
Also, your appdata share has files on the array.

I see. It may have been remnants of moving everything to the array when I changed cache drives. I'll look at that later today.

18 minutes ago, trurl said:
Syslog is being spammed with connections to port 23. Any idea what that is about?

Where are you seeing this? The promiscuous/blocked logs? They only refer to port 2 as far as I can see. As for what is spamming port 23, I haven't the foggiest, as I don't use telnet for anything, only SSH. Are you able to see any more info as to where the connection attempts are coming from?
trurl Posted January 19, 2023

9 hours ago, evakq8r said:
The promiscuous/blocked logs?

Jan 19 01:20:30 NotYourUnraid kernel: br-1a61be4b1d02: port 23(veth93ec9e6) entered blocking state
Jan 19 01:20:30 NotYourUnraid kernel: br-1a61be4b1d02: port 23(veth93ec9e6) entered forwarding state

I don't know what this is about, but it seems to be the main thing filling your syslog. Your docker.log in diagnostics is empty; perhaps it rotated and the new one didn't have any space.
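If you want to confirm how much of the syslog those lines account for, a simple count should do it:

grep -c 'entered blocking state' /var/log/syslog
grep -c 'entered forwarding state' /var/log/syslog

Something repeatedly creating and tearing down veth interfaces would produce exactly this pattern.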
evakq8r (Author) Posted January 19, 2023

7 hours ago, trurl said:
I don't know what this is about, but it seems to be the main thing filling your syslog.

Unfortunately I don't know what it's about either. The logs are now indicating port 32 rather than 23:

Jan 19 19:28:47 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered blocking state
Jan 19 19:28:47 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered forwarding state
Jan 19 19:28:48 NotYourUnraid kernel: br-1a61be4b1d02: port 32(veth5105afc) entered disabled state

I've also just seen this error flash up a few times:

Jan 19 19:28:44 NotYourUnraid kernel: Page cache invalidation failure on direct I/O. Possible data corruption due to collision with buffered I/O!

The adapter names suggest virtual ethernet adapters and/or bridge adapters. What would cause spontaneous connection attempts like this? I just don't know where to start...
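For anyone following along: my understanding is that each vethXXXX is the host-side end of a container's network pair, so in theory it can be matched to a container by interface index. A rough sketch (untested on my end; veth name taken from my log):

ip -o link show veth5105afc
# prints something like "32: veth5105afc@if31: ..."; the leading 32 is the host-side ifindex
for c in $(docker ps --format '{{.Names}}'); do
  echo "$c $(docker exec "$c" cat /sys/class/net/eth0/iflink 2>/dev/null)"
done
# the container whose eth0 iflink matches the veth's own index (32 here) is the one behind it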
Squid Posted January 19, 2023

Is there a container that's restarting itself every minute? Looking at the containers' uptime on the Docker tab should show one with very low uptime, even after double-checking again half an hour later.
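If it's easier than watching the web UI, the same check can be done from a terminal; the STATUS column shows uptime:

docker ps --format 'table {{.Names}}\t{{.Status}}'

Run it a couple of times a few minutes apart — anything that stays at "Up X seconds" or keeps resetting is your culprit.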
evakq8r (Author) Posted January 19, 2023

8 minutes ago, Squid said:
Is there a container that's restarting itself every minute? Looking at the containers' uptime on the Docker tab should show one with very low uptime, even after double-checking again half an hour later.

Eureka! There were 2 containers spontaneously rebooting every minute: Authelia and Cachet-URL-Monitor. The former was from a failed migration (apparently it thinks my DB is corrupt, so that'll be fun to sort out), and the latter because the Cachet container was off and it couldn't communicate. Since stopping both containers, the promiscuous logs have finally stopped. Thank you for your (and @trurl's) help!

Is there a way to drop the percentage of the log file without a full reboot?
trurl Posted January 19, 2023

You could try deleting old logs in /var/log, but you might also need to restart logging. I don't remember how to do that.
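That said, one approach that should avoid needing to restart logging at all: rotated files can simply be deleted, but the active syslog is held open by the logging daemon, so rm on it won't free the space — truncating it in place should, assuming the daemon writes in append mode (rsyslog normally does):

rm -f /var/log/syslog.1 /var/log/syslog.2   # rotated, no longer held open
truncate -s 0 /var/log/syslog               # active file: empty it in place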
evakq8r (Author) Posted January 19, 2023

11 minutes ago, trurl said:
You could try deleting old logs in /var/log, but you might also need to restart logging. I don't remember how to do that.

I had tried deleting a 10MB docker.log.1 file, but that didn't reduce the space at all. That said, I deleted 2 older syslog files (.1 and .2, at 3MB apiece), and it dropped the log file a little. What I don't get is the disparity between the space in use and what space is free:

df -h /var/log
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           128M  118M   11M  92% /var/log

ls -sh /var/log
total 4.5M
 32K Xorg.0.log           0 cron             1.4M file.activity.log    0 nginx/       0 removed_scripts@              0 swtpm/
4.0K apcupsd.events       0 debug            8.0K lastlog              0 packages@    0 removed_uninstall_scripts@  2.9M syslog
4.0K apcupsd.events.1  4.0K dhcplog             0 libvirt/             0 pkgtools/    0 samba/                        0 vfio-pci
4.0K apcupsd.events.2  4.0K diskinfo.log     172K maillog              0 plugins/     0 scripts@                    4.0K wg-quick.log
4.0K apcupsd.events.3   92K dmesg               0 mcelog               0 preclear/    0 secure                       12K wtmp
4.0K apcupsd.events.4     0 docker.log          0 messages             0 pwfail/      0 setup@                        0 btmp
4.0K faillog              0 nfsd/               0 removed_packages@    0 spooler

Whereabouts is the extra space being used?
Kilrah Posted January 19, 2023

You can install ncdu from NerdTools and see it in a more useful way. Samba logs can sometimes grow quite a bit. If the files are in use, they might not be cleared until closed, hence the comment above...
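To check whether deleted-but-still-open files are what's holding the space, lsof can list open files that have already been unlinked (link count 0). Since /var/log is its own tmpfs mount, something like this should cover it:

lsof +L1 /var/log

Restarting whatever process still holds a deleted file is what actually releases the space.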
evakq8r (Author) Posted January 19, 2023

3 minutes ago, Kilrah said:
You can install ncdu from NerdTools and see it in a more useful way. Samba logs can sometimes grow quite a bit. If the files are in use, they might not be cleared until closed, hence the comment above...

Thanks for the suggestion. I'll try that later today. I'm trying to avoid a full reboot, as the PowerEdge server I have Unraid on always fails to reboot successfully (some iDRAC error, I can't recall offhand). It needs to be powered off at the server itself, then turned back on, followed by a 15-minute wait.
evakq8r (Author) Posted January 19, 2023

7 hours ago, Kilrah said:
You can install ncdu from NerdTools and see it in a more useful way. Samba logs can sometimes grow quite a bit. If the files are in use, they might not be cleared until closed, hence the comment above...

Samba logs are definitely contributing, but not to the extent I was hoping for.
evakq8r (Author) Posted January 22, 2023 (Solution)

I ended up having to reboot anyway, as I had other issues to deal with (space issues). The reboot fixed the log file size (now 3%).