/var/log is getting full (currently 57 % used)



Woke up to a Fix common problems notification.

 

Quote

/var/log is getting full (currently 57 % used). Either your server has an extremely long uptime, or your syslog could potentially be being spammed with error messages. A reboot of your server will at least temporarily solve this problem, but ideally you should seek assistance in the forums and post your diagnostics.

 

Can anyone help me figure out why my log is getting so large? Uptime is only 22 days.

 

Logs attached. Thanks! 

 

unraid-diagnostics-20200331-0719.zip
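
(For anyone hitting the same warning: a quick, generic way to see how full /var/log is and what is taking the space, using standard Linux tools; this is only a sketch, not something pulled from the diagnostics above.)

df -h /var/log                                    # how full the log filesystem is
du -sh /var/log/* 2>/dev/null | sort -h | tail    # which files/directories take the most space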


Looking at my syslog, it looks like it's something to do with my Veeam server. I have a share on Unraid which is the repository for my backups. It's been running for a while without issues, though. I'll start doing some more digging.

 

Thanks!

 

Mar 31 05:47:10 UNRAID kernel: eth0: renamed from veth47aa2ae
Mar 31 05:47:10 UNRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethbd35dfc: link becomes ready
Mar 31 05:47:10 UNRAID kernel: docker0: port 6(vethbd35dfc) entered blocking state
Mar 31 05:47:10 UNRAID kernel: docker0: port 6(vethbd35dfc) entered forwarding state
Mar 31 05:47:12 UNRAID avahi-daemon[7344]: Joining mDNS multicast group on interface vethbd35dfc.IPv6 with address fe80::3cfb:ecff:feca:4d7f.
Mar 31 05:47:12 UNRAID avahi-daemon[7344]: New relevant interface vethbd35dfc.IPv6 for mDNS.
Mar 31 05:47:12 UNRAID avahi-daemon[7344]: Registering new address record for fe80::3cfb:ecff:feca:4d7f on vethbd35dfc.*.
Mar 31 05:47:13 UNRAID kernel: eth0: renamed from veth3f8cedb
Mar 31 05:47:13 UNRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth5d43e12: link becomes ready
Mar 31 05:47:13 UNRAID kernel: docker0: port 16(veth5d43e12) entered blocking state
Mar 31 05:47:13 UNRAID kernel: docker0: port 16(veth5d43e12) entered forwarding state
Mar 31 05:47:15 UNRAID avahi-daemon[7344]: Joining mDNS multicast group on interface veth5d43e12.IPv6 with address fe80::e4ba:1eff:fe41:1e64.
Mar 31 05:47:15 UNRAID avahi-daemon[7344]: New relevant interface veth5d43e12.IPv6 for mDNS.
Mar 31 05:47:15 UNRAID avahi-daemon[7344]: Registering new address record for fe80::e4ba:1eff:fe41:1e64 on veth5d43e12.*.
Mar 31 05:57:07 UNRAID sshd[26837]: Accepted password for root from 10.10.20.20 port 62142 ssh2
Mar 31 05:57:08 UNRAID sshd[27170]: Accepted password for root from 10.10.20.20 port 62148 ssh2
Mar 31 05:57:10 UNRAID veeamselftar[27610]: acquired lock /tmp/VeeamAgent0b04802b-0777-4142-ba7f-1ba0677b0b2f.lock
Mar 31 05:57:10 UNRAID veeamselftar[27610]: Extracting contents to /tmp/VeeamAgent0b04802b-0777-4142-ba7f-1ba0677b0b2f.data
Mar 31 05:57:10 UNRAID veeamselftar[27610]: releasing lock /tmp/VeeamAgent0b04802b-0777-4142-ba7f-1ba0677b0b2f.lock
Mar 31 05:57:15 UNRAID sshd[26837]: Received disconnect from 10.10.20.20 port 62142:11: Connection terminated by the client.
Mar 31 05:57:15 UNRAID sshd[26837]: Disconnected from user root 10.10.20.20 port 62142
Mar 31 05:57:15 UNRAID veeamselftar[27610]: acquired lock /tmp/VeeamAgent0b04802b-0777-4142-ba7f-1ba0677b0b2f.lock
Mar 31 05:57:15 UNRAID veeamselftar[27610]: Erasing contents at /tmp/VeeamAgent0b04802b-0777-4142-ba7f-1ba0677b0b2f.data
Mar 31 05:57:15 UNRAID veeamselftar[27610]: releasing lock /tmp/VeeamAgent0b04802b-0777-4142-ba7f-1ba0677b0b2f.lock
Mar 31 05:57:15 UNRAID veeamselftar[27610]: Failed to wait for child process to finish: Interrupted system call
Mar 31 05:57:15 UNRAID sshd[27170]: Received disconnect from 10.10.20.20 port 62148:11: Connection terminated by the client.
Mar 31 05:57:15 UNRAID sshd[27170]: Disconnected from user root 10.10.20.20 port 62148
Mar 31 06:47:04 UNRAID kernel: veth47aa2ae: renamed from eth0
Mar 31 06:47:04 UNRAID kernel: docker0: port 6(vethbd35dfc) entered disabled state
Mar 31 06:47:04 UNRAID avahi-daemon[7344]: Interface vethbd35dfc.IPv6 no longer relevant for mDNS.
Mar 31 06:47:04 UNRAID avahi-daemon[7344]: Leaving mDNS multicast group on interface vethbd35dfc.IPv6 with address fe80::3cfb:ecff:feca:4d7f.
Mar 31 06:47:04 UNRAID kernel: docker0: port 6(vethbd35dfc) entered disabled state
Mar 31 06:47:04 UNRAID kernel: device vethbd35dfc left promiscuous mode
Mar 31 06:47:04 UNRAID kernel: docker0: port 6(vethbd35dfc) entered disabled state
Mar 31 06:47:04 UNRAID avahi-daemon[7344]: Withdrawing address record for fe80::3cfb:ecff:feca:4d7f on vethbd35dfc.
Mar 31 06:47:05 UNRAID kernel: docker0: port 6(veth2bb3155) entered blocking state
Mar 31 06:47:05 UNRAID kernel: docker0: port 6(veth2bb3155) entered disabled state
Mar 31 06:47:05 UNRAID kernel: device veth2bb3155 entered promiscuous mode
Mar 31 06:47:05 UNRAID kernel: IPv6: ADDRCONF(NETDEV_UP): veth2bb3155: link is not ready
Mar 31 06:47:06 UNRAID kernel: docker0: port 16(veth5d43e12) entered disabled state
Mar 31 06:47:06 UNRAID kernel: veth3f8cedb: renamed from eth0
Mar 31 06:47:06 UNRAID avahi-daemon[7344]: Interface veth5d43e12.IPv6 no longer relevant for mDNS.
Mar 31 06:47:06 UNRAID avahi-daemon[7344]: Leaving mDNS multicast group on interface veth5d43e12.IPv6 with address fe80::e4ba:1eff:fe41:1e64.
Mar 31 06:47:06 UNRAID kernel: docker0: port 16(veth5d43e12) entered disabled state
Mar 31 06:47:06 UNRAID kernel: device veth5d43e12 left promiscuous mode
Mar 31 06:47:06 UNRAID kernel: docker0: port 16(veth5d43e12) entered disabled state
Mar 31 06:47:06 UNRAID avahi-daemon[7344]: Withdrawing address record for fe80::e4ba:1eff:fe41:1e64 on veth5d43e12.
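
(A quick, generic way to see which process is responsible for most of those lines; nothing Unraid-specific, just standard text tools run against the live syslog.)

# Count syslog lines per process tag (field 5), busiest first
awk '{print $5}' /var/log/syslog | cut -d'[' -f1 | sort | uniq -c | sort -rn | head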

 

23 minutes ago, Fiala06 said:

Uptime is only 22 days. 

4 minutes ago, Fiala06 said:

Looking at my syslog, it looks like it's something to do with my Veeam server. I have a share on Unraid which is the repository for my backups. It's been running for a while without issues, though.

 

Maybe for your use case you will just have to reboot more often to clear the logs.
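
(If a reboot is inconvenient, a hedged alternative, not suggested in the replies here, is to empty the live syslog in place; syslog writes in append mode, so truncating the file is generally safe.)

truncate -s 0 /var/log/syslog   # empties the active syslog without restarting anything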

  • 1 year later...
On 3/31/2020 at 5:00 PM, Fiala06 said:

I can deal with rebooting once a month if needed. Thanks for the suggestion. 

Hi,

were you able to resolve the report with Veeam? I have the same issue, but I don't need this logged at all. Can I somehow disable it?

 

Sep 9 08:18:45 Acu-Tower veeamselftar[28444]: releasing lock /tmp/VeeamAgent36bf25da-04d2-4761-810e-462cb93a69ab.lock
Sep 9 08:18:47 Acu-Tower veeamselftar[28156]: acquired lock /tmp/VeeamAgent35164612-26bd-4007-9ec9-f05edeabc2d9.lock
Sep 9 08:18:47 Acu-Tower veeamselftar[28156]: Erasing contents at /tmp/VeeamAgent35164612-26bd-4007-9ec9-f05edeabc2d9.data

  • 1 month later...
On 9/9/2021 at 3:19 PM, trurl said:

Another thing you can do is delete logs that have already rotated.

 

In /var/log, there will be the current syslog, and also older syslog.1, syslog.2, ...

 

Those older ones can be safely deleted.
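
(On the command line, that looks roughly like this; the syslog.1, syslog.2 naming comes from the quote above.)

ls -lh /var/log/syslog*        # the live syslog plus any rotated copies
rm -f /var/log/syslog.[0-9]*   # removes only the rotated copies, leaves the live syslog alone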

I found that the logs from the Veeam backups are written to /var/log/veeambackup.
Is there a way to disable these logs? After the backups run, the log fills up within two days.
And making that many writes to a flash drive every day is not good.
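
(To confirm that directory is what is filling /var/log, a generic check; the path is taken from the post above.)

du -sh /var/log/veeambackup           # total size of the Veeam agent logs
ls -lhS /var/log/veeambackup | head   # largest log files first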

On 10/21/2021 at 5:26 PM, ChatNoir said:

The whole system runs in RAM, logs included.

Unless you use the syslog server and decide to duplicate your log to the flash drive, but that should only be used for debugging purposes.

 

If you require a long term syslog server, you should host that on another machine on the network.
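
(For context, "duplicating the log" or shipping it to another machine boils down to a syslog forwarding rule. Unraid configures this from its GUI, so the raw rsyslog line below is only an illustration; the file path and target address are placeholders, not taken from this thread.)

# e.g. /etc/rsyslog.d/remote.conf -- placeholder path and address, shown only as an illustration
*.*  @@192.168.1.50:514    # @@ forwards over TCP; a single @ would use UDP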

Hi,
I don't need the Veeam backup logged at all. I get backup reports by email, so I can see when a backup does not run. But I haven't figured out how to turn this log off.

Setting up a syslog server doesn't help here: the Veeam agent logs outside the Unraid syslog and writes to its own folder, /var/log/veeambackup, so all I can see on the dashboard is that the log keeps growing.

So I need to either hook Veeam into the Unraid syslog or disable the Veeam log in some way (a possible workaround is sketched below).

Edited by Acu
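
(A possible stop-gap until the agent's logging can be disabled; a sketch only. The path comes from the posts above, and running it on a schedule, e.g. via cron or a scheduled user script, is an assumption.)

# Remove Veeam agent logs older than one day; intended to be run daily
find /var/log/veeambackup -type f -mtime +1 -delete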
