Uptime 10 days - /var/log full


DieFalse

38 minutes ago, fmp4m said:

Received a random notification that /var/log is 94% full and haven't had time to look into it. Unraid 6.5.3-rc1

Diagnostics posted in case it's important to the RC.

 

/var/log is getting full (currently 94% used)

varlogfull.zip

 

syslog isn't very large. You must have something else writing there.

 

From command line, post results of

ls -lah /var/log

 

9 hours ago, trurl said:

 

syslog isn't very large. You must have something else writing there.

 

From command line, post results of


ls -lah /var/log

 

 

I think it's a false positive from FCP...

 

root@NAS:~# ls -lah /var/log
total 124K
drwxr-xr-x 13 root   root  580 May 31 09:01 ./
drwxr-xr-x 15 root   root  340 May 18  2016 ../
drwxr-xr-x  2 root   root  140 May 31 08:13 atop/
-rw-------  1 root   root    0 May 18 11:58 btmp
-rw-r--r--  1 root   root    0 Mar  9 17:53 cron
-rw-r--r--  1 root   root    0 Mar  9 17:53 debug
-rw-rw-rw-  1 root   root 2.7K May 31 18:01 diskinfo.log
-rw-rw-rw-  1 root   root  89K May 20 23:35 dmesg
-rw-rw-rw-  1 root   root    0 May 20 23:36 docker.log
-rw-r--r--  1 root   root    0 Nov 21  2017 faillog
-rw-rw-rw-  1 root   root  617 May 21 08:23 fluxbox.log
-rw-r--r--  1 root   root    0 Apr  7  2000 lastlog
drwxr-xr-x  5 root   root  160 May 20 23:38 libvirt/
-rw-r--r--  1 root   root 9.0K May 31 04:40 maillog
-rw-r--r--  1 root   root    0 Mar  9 17:53 messages
drwxr-xr-x  2 root   root   40 May 15  2001 nfsd/
drwxr-x---  2 nobody root   60 May 20 23:36 nginx/
drwxr-xr-x  2 root   root 8.1K May 24 14:18 packages/
drwxr-xr-x  2 root   root  540 May 25 17:45 plugins/
-rw-rw-rw-  1 root   root    0 May 20 23:36 preclear.disk.log
drwxr-xr-x  2 root   root   60 May 20 23:36 removed_packages/
drwxr-xr-x  2 root   root   60 May 20 23:36 removed_scripts/
drwxr-xr-x  3 root   root  160 May 24 14:14 samba/
-rw-r--r--  1 root   root   33 Feb 11 22:02 scan
drwxr-xr-x  2 root   root 1.2K May 20 23:36 scripts/
-rw-r--r--  1 root   root    0 Mar  9 17:53 secure
drwxr-xr-x  3 root   root   80 Aug 21  2012 setup/
-rw-r--r--  1 root   root    0 Mar  9 17:53 spooler
-rw-rw-r--  1 root   utmp 7.5K May 20 23:36 wtmp

 

28 minutes ago, fmp4m said:

I think it's a false positive from FCP...

 

No it's not. From your diagnostics zip, system/df.txt:

 


Filesystem      Size  Used Avail Use% Mounted on

tmpfs           128M  122M  6.2M  96% /var/log

 

Try this:

 

du -h /var/log
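A variant that sorts each entry by size can make the biggest consumer obvious (assumes GNU sort, whose -h option understands human-readable size suffixes):

```shell
# Summarize each entry under /var/log, sorted by size with the largest last.
du -sh /var/log/* 2>/dev/null | sort -h
```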

 

13 hours ago, trurl said:

 

No it's not. From your diagnostics zip, system/df.txt:

 

 

Try this:

 


du -h /var/log

 

 

 

root@NAS:~# du -h /var/log
1.5M    /var/log/atop
0    /var/log/setup/tmp
4.0K    /var/log/setup
508K    /var/log/scripts
0    /var/log/samba/cores/winbindd
0    /var/log/samba/cores/smbd
0    /var/log/samba/cores/nmbd
0    /var/log/samba/cores
0    /var/log/samba
4.0K    /var/log/removed_scripts
52K    /var/log/removed_packages
0    /var/log/plugins
2.4M    /var/log/packages
4.0K    /var/log/nginx
0    /var/log/nfsd
0    /var/log/libvirt/uml
0    /var/log/libvirt/lxc
0    /var/log/libvirt/qemu
0    /var/log/libvirt
4.4M    /var/log

 


There is one case where this will not show the problem .....

 

If a program has a log file open and continues to write to it, but another program (or user) has deleted the file, then the directory entry is removed (so the file no longer shows up in ls or du output) but the file itself continues to exist (and possibly grow) until the program closes it or is terminated.
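That situation can be spotted from the shell. A sketch using /proc (which is always available; `lsof +L1` does the same job where lsof is installed):

```shell
# Walk every process's open file descriptors; readlink shows the target
# path, which the kernel suffixes with " (deleted)" once the file is
# unlinked but still held open.
for fd in /proc/[0-9]*/fd/*; do
  case "$(readlink "$fd" 2>/dev/null)" in
    *' (deleted)') ls -lh "$fd" 2>/dev/null ;;
  esac
done
```

Any hit whose target path starts with /var/log is the hidden space consumer; cleanly restarting the owning process releases the space.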

 

9 hours ago, trurl said:

That looks OK now. Are you still getting the warning? What do you get from this?


df -h

 


Error still present at 98% now.

 

root@NAS:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  2.0G   14G  13% /
tmpfs            32M  1.6M   31M   5% /run
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G   20M   16G   1% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M   11M  118M   8% /var/log
/dev/sda1        15G  926M   14G   7% /boot
/dev/loop0      7.5M  7.5M     0 100% /lib/modules
/dev/loop1      4.5M  4.5M     0 100% /lib/firmware
/dev/md1        3.7T  2.8T  910G  76% /mnt/disk1
/dev/md2        3.7T  2.8T  898G  76% /mnt/disk2
/dev/md3        3.7T  2.8T  925G  76% /mnt/disk3
/dev/md4        3.7T  2.8T  931G  76% /mnt/disk4
/dev/md5        3.7T  1.9T  1.8T  52% /mnt/disk5
/dev/md6        3.7T  1.9T  1.8T  51% /mnt/disk6
/dev/md7        3.7T  1.9T  1.9T  51% /mnt/disk7
/dev/md8        1.9T  1.9G  1.9T   1% /mnt/disk8
/dev/md9        1.9T  1.9G  1.9T   1% /mnt/disk9
/dev/sdq1       1.9T   74G  1.8T   4% /mnt/cache
shfs             30T   17T   13T  57% /mnt/user0
shfs             31T   17T   15T  54% /mnt/user
/dev/sde1       120G   43G   77G  36% /mnt/disks/128gbssd-livetru
/dev/sdd1       239G   86G  153G  36% /mnt/disks/DieFalse
/dev/sdh1       466G  508M  466G   1% /mnt/disks/VMs
/dev/sdc1       477G   80M  477G   1% /mnt/disks/512SSD-BTM
/dev/loop2      100G   11G   87G  11% /var/lib/docker
/dev/loop3      1.0G   17M  905M   2% /etc/libvirt
shm              64M     0   64M   0% /var/lib/docker/containers/06f410f75932a0c47ec0150d3afce238f013a7649cf2c6526c6fc5793a4af922/shm
shm              64M     0   64M   0% /var/lib/docker/containers/8d644de6ab63a99334cae60b3d4f2b94349b4d2a22212b9a96c962f9729b62f8/shm
shm              64M     0   64M   0% /var/lib/docker/containers/4457fc94daf92d91a260fbe2d497ee3cf6b082072ad4b212597a2d8114673f3b/shm
shm              64M     0   64M   0% /var/lib/docker/containers/ccd1a89e92ed58ab389694e2f4eb7cb8a5eacba69cebc566316a9bf10efe0308/shm
shm              64M  8.0K   64M   1% /var/lib/docker/containers/e538288330b8395beba2d8c45de42b695d3624800ac8f2ee79fd0e6f6da4893e/shm
shm              64M     0   64M   0% /var/lib/docker/containers/95f35c99fee2791858c0d5e73ed6b16b8141d42a8c309c3c125a93eed2ed3600/shm
shm              64M     0   64M   0% /var/lib/docker/containers/626f99711bd2833f6758c871b4cac48596fd0d4a07839d5cd74543107931d813/shm
shm              64M     0   64M   0% /var/lib/docker/containers/4c8ff2f4babf6085841bc8068a27306a5e8fe775c6ec96ec6e01368e19b35278/shm
shm              64M     0   64M   0% /var/lib/docker/containers/f4445575984f15b3b939fbdc2a4d8fbf69eaf55707c7a349f77ce7fe79f72835/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/93baf072eaa2726e9cbde07d8b59e51fdb6057fa7b9ad74bc9d456cbfafa5899/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/abff09f67cd317d35588743b4c6f4c896d71d47487528dd0e96f59c4c53fc3b9/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b81145a74d939e26b60cfb43e563a1001bcc122a13a8d3ba7e7fcf087af48205/shm
shm              64M     0   64M   0% /var/lib/docker/containers/7be1481fd411936836c5fbce890e910ee3ec33b3bce41fb0dd5435d8af17e5b1/shm
shm              64M     0   64M   0% /var/lib/docker/containers/491ded4de6f151871526071602c635be4a205d4f612226b840a0bbf5a74f6b65/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/8de1a09cf25a260a1ac8ecdd82c95355517ef5eb171b11783ccd6cec782c76d8/shm
/dev/sdb1       477G   80M  477G   1% /mnt/disks/512SSD-TOP
shm              64M     0   64M   0% /var/lib/docker/containers/4fd3dcae7537d5c88977b30085eac3f59611079809ac1f1035debc092c21a5bb/shm

 

 

 

6 hours ago, remotevisitor said:

There is one case where this will not show the problem .....

 

If a program has a log file open and continues to write to it, but another program (or user) has deleted the file, then the directory entry is removed (so the file no longer shows up in ls or du output) but the file itself continues to exist (and possibly grow) until the program closes it or is terminated.

 

 

This may very well be the issue. However, I would not know how to find that...

20 hours ago, trurl said:

This says only 8% now. Post another diagnostic; it should say the same.


I ran another test from FCP and the alert went away. When this happens again (if it does) I will rerun all of the commands and post.

It would be nice to know what caused it. heh.

  • 1 month later...

Uptime 13 days, /var/log full again.

 

root@NAS:~# ls -lah /var/log
total 1.1M
drwxr-xr-x 13 root   root  620 Jun 23 17:20 ./
drwxr-xr-x 15 root   root  340 May 18  2016 ../
-rw-r--r--  1 root   root 117K Jun 23 17:18 Xorg.0.log
drwxr-xr-x  2 root   root  200 Jul  7 04:40 atop/
-rw-------  1 root   root    0 Jun 12 12:23 btmp
-rw-r--r--  1 root   root    0 Mar  9 17:53 cron
-rw-r--r--  1 root   root    0 Mar  9 17:53 debug
-rw-rw-rw-  1 root   root 120K Jul  7 14:01 diskinfo.log
-rw-rw-rw-  1 root   root  89K Jun 23 17:17 dmesg
-rw-rw-rw-  1 root   root  813 Jun 28 23:01 docker.log
-rw-r--r--  1 root   root    0 Nov 21  2017 faillog
-rw-rw-rw-  1 root   root  617 Jun 23 17:20 fluxbox.log
-rw-r--r--  1 root   root    0 Apr  7  2000 lastlog
drwxr-xr-x  5 root   root  160 Jun 23 17:20 libvirt/
-rw-r--r--  1 root   root  16K Jul  7 12:10 maillog
-rw-r--r--  1 root   root    0 Mar  9 17:53 messages
drwxr-xr-x  2 root   root   40 May 15  2001 nfsd/
drwxr-x---  2 nobody root   60 Jun 23 17:18 nginx/
drwxr-xr-x  2 root   root 8.1K Jun 25 08:14 packages/
drwxr-xr-x  2 root   root  560 Jul  6 19:15 plugins/
-rw-rw-rw-  1 root   root    0 Jun 23 17:18 preclear.disk.log
drwxr-xr-x  2 root   root   60 Jun 23 17:18 removed_packages/
drwxr-xr-x  2 root   root   60 Jun 23 17:18 removed_scripts/
drwxr-xr-x  3 root   root  160 Jun 28 20:14 samba/
-rw-r--r--  1 root   root   33 Feb 11 22:02 scan
drwxr-xr-x  2 root   root 1.2K Jun 23 17:18 scripts/
-rw-r--r--  1 root   root    0 Mar  9 17:53 secure
drwxr-xr-x  3 root   root   80 Aug 21  2012 setup/
-rw-r--r--  1 root   root    0 Mar  9 17:53 spooler
-rw-r--r--  1 root   root 696K Jul  7 14:01 syslog
-rw-rw-r--  1 root   utmp 7.5K Jun 23 17:18 wtmp
root@NAS:~# du -h /var/log
125M	/var/log/atop
0	/var/log/setup/tmp
4.0K	/var/log/setup
508K	/var/log/scripts
0	/var/log/samba/cores/winbindd
0	/var/log/samba/cores/smbd
0	/var/log/samba/cores/nmbd
0	/var/log/samba/cores
0	/var/log/samba
4.0K	/var/log/removed_scripts
52K	/var/log/removed_packages
0	/var/log/plugins
2.4M	/var/log/packages
0	/var/log/nginx
0	/var/log/nfsd
0	/var/log/libvirt/uml
0	/var/log/libvirt/lxc
0	/var/log/libvirt/qemu
0	/var/log/libvirt
128M	/var/log

 

root@NAS:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          7.7G  2.1G  5.7G  27% /
tmpfs            32M  1.6M   31M   5% /run
devtmpfs        7.7G     0  7.7G   0% /dev
tmpfs           7.8G   19M  7.8G   1% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  128M     0 100% /var/log
/dev/sda1        15G  929M   14G   7% /boot
/dev/loop0      7.5M  7.5M     0 100% /lib/modules
/dev/loop1      4.5M  4.5M     0 100% /lib/firmware
/dev/md1        3.7T  2.8T  895G  76% /mnt/disk1
/dev/md2        3.7T  2.8T  925G  76% /mnt/disk2
/dev/md3        3.7T  2.8T  928G  76% /mnt/disk3
/dev/md4        3.7T  2.8T  938G  75% /mnt/disk4
/dev/md5        3.7T  2.8T  900G  76% /mnt/disk5
/dev/md6        3.7T  2.7T  972G  74% /mnt/disk6
/dev/md7        3.7T  1.9T  1.8T  51% /mnt/disk7
/dev/md8        1.9T  1.9G  1.9T   1% /mnt/disk8
/dev/md9        1.9T  1.9G  1.9T   1% /mnt/disk9
/dev/sdq1       1.9T  190G  1.7T  11% /mnt/cache
shfs             30T   19T   11T  63% /mnt/user0
shfs             31T   19T   13T  60% /mnt/user
/dev/sde1       120G   43G   77G  36% /mnt/disks/128gbssd-livetru
/dev/sdd1       239G   86G  153G  36% /mnt/disks/DieFalse
/dev/sdh1       466G  508M  466G   1% /mnt/disks/VMs
/dev/sdc1       477G   80M  477G   1% /mnt/disks/512SSD-BTM
/dev/sdb1       477G   80M  477G   1% /mnt/disks/512SSD-TOP
/dev/loop2      100G   13G   85G  13% /var/lib/docker
/dev/loop3      1.0G   17M  905M   2% /etc/libvirt
shm              64M     0   64M   0% /var/lib/docker/containers/06f410f75932a0c47ec0150d3afce238f013a7649cf2c6526c6fc5793a4af922/shm
shm              64M     0   64M   0% /var/lib/docker/containers/8d644de6ab63a99334cae60b3d4f2b94349b4d2a22212b9a96c962f9729b62f8/shm
shm              64M     0   64M   0% /var/lib/docker/containers/4457fc94daf92d91a260fbe2d497ee3cf6b082072ad4b212597a2d8114673f3b/shm
shm              64M  8.0K   64M   1% /var/lib/docker/containers/332834bc95db8106ed675f1d3991fb7279e7b6d37a7d96f35720a52115e033ce/shm
shm              64M     0   64M   0% /var/lib/docker/containers/626f99711bd2833f6758c871b4cac48596fd0d4a07839d5cd74543107931d813/shm
shm              64M     0   64M   0% /var/lib/docker/containers/f4445575984f15b3b939fbdc2a4d8fbf69eaf55707c7a349f77ce7fe79f72835/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/117bd6ab1169d177e994817d786f8adfaccbb1c18d2a281733e91c52d55d07b8/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/abff09f67cd317d35588743b4c6f4c896d71d47487528dd0e96f59c4c53fc3b9/shm
shm              64M     0   64M   0% /var/lib/docker/containers/1307b87904cd3425f65de4cc6c46317eea51075b601072cbc89db83486accb23/shm
shm              64M     0   64M   0% /var/lib/docker/containers/aa48d00cbc1a8da5020323cd75dccd17e2f59b5d0aa10c1ac7f3ed6307d2d218/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/629afb9655aba57c3509ea33b8235f0923eb346aa009c65ca1116d654a4b1695/shm
shm              64M     0   64M   0% /var/lib/docker/containers/438f9efed4995db0fb534ae00c7b07ed77521ab9089e8f1714ca034d56630b36/shm
shm              64M     0   64M   0% /var/lib/docker/containers/dec74d4096b58e93a4d2cf5ea053fea30027ef1ece4bcb33d9b78c54ead577c1/shm
shm              64M     0   64M   0% /var/lib/docker/containers/594c170593513e8e063026bbcf9c1e49e1f98969cac557e6f0a0f78490eb4de4/shm
shm              64M     0   64M   0% /var/lib/docker/containers/d5b713a741a05cc2ce796ff0334ea3f123ce91ed260e591b3478f52db48e96c8/shm
shm              64M     0   64M   0% /var/lib/docker/containers/35d88271ad010ffb58adfaaf8e821b7d4c1d9310b66066941d9ed70148794880/shm

Looks like atop?


A quick look indicates you are running something called atop, and it's using up all the space on the log partition.

59 minutes ago, fmp4m said:

root@NAS:~# du -h /var/log
125M /var/log/atop

 

It's not a standard part of unRAID, so you should do something about the logging.
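A sketch of reclaiming the space, assuming atop was installed as a plugin and writes its daily logs under /var/log/atop with the usual atop_YYYYMMDD naming (adjust the paths to your install):

```shell
# Stop atop cleanly (SIGTERM, not -9) so it closes its log file and
# turns kernel process accounting back off, then delete the rotated logs.
pkill -x atop 2>/dev/null || true
rm -f /var/log/atop/atop_*
df -h /var/log   # confirm the space was actually freed
```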


Accounting should normally not be a problem, unless you leave atop running for a long time, or kill it with the -9 flag.

 

 

When atop starts, it will tell the kernel to start writing accounting information to disk.
And when the last instance of atop ends, it will tell the kernel to stop saving accounting information.

 

See the process accounting chapter:

https://linux.die.net/man/1/atop
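To check whether atop is still running and how much it has written so far (pgrep and du are standard tools; the /var/log/atop path is taken from the du output earlier in the thread):

```shell
# Report any running atop instance and the size of its log directory.
pgrep -a atop || echo "no atop process running"
du -sh /var/log/atop 2>/dev/null || echo "no /var/log/atop directory"
```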


Archived

This topic is now archived and is closed to further replies.
