/var/log is getting full (currently 100 % used)



I noticed that my log was showing 100% full under the Memory section of the dashboard.  After googling around a bit, I saw that the Fix Common Problems plugin would check for this; I ran it and it confirmed the problem (/var/log is getting full (currently 100 % used)).
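For reference, this is how I checked the usage from a terminal (nothing exotic, just the standard paths):

# how full is the log partition, and what is taking the space?
df -h /var/log
du -sh /var/log/* | sort -h
# peek at the end of syslog to see what is being spammed
tail -n 50 /var/log/syslog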

 

I understand a reboot will clear this problem, but I am hoping that someone can help me determine the cause and/or a fix.  I have no VMs and just a few common Docker containers (Plex, Tautulli, Sonarr, Deluge, BOINC, and PiHole).  I have 128GB of RAM, so increasing the available size is no problem, but I'm not sure how that is done, and I don't want to just treat the symptom if there is a memory leak or other problem somewhere.
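From what I've read, /var/log on Unraid is a tmpfs mount, so resizing looks like a one-line remount. A sketch, with 384m as an arbitrary example size:

# enlarge the tmpfs backing /var/log (lasts until the next reboot)
mount -o remount,size=384m /var/log
# to make it persistent, the same line can apparently go in /boot/config/go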

 

Diagnostics attached

 

*edit - I rebooted the machine and it came up fine, but Plex refuses to start and PiHole shows as running but isn't reachable, sigh

*edit2 - I tried to delete PiHole and it got stuck in a 'dead' status, so I disabled Docker, deleted the image, and re-enabled it without PiHole

unraid1-diagnostics-20201116-0121.zip


Looks like your previous restart was December 31st. Maybe a full log after 10 1/2 months is not so bad?

 

Still, since Nov 14th your log has been spammed by:

Nov 14 18:25:06 Unraid1 kernel: loop: Write error at byte offset 719699968, length 4096.
Nov 14 18:25:06 Unraid1 kernel: print_req_error: I/O error, dev loop2, sector 1404352
Nov 14 18:25:06 Unraid1 kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 111, rd 0, flush 0, corrupt 0, gen 0

I also see instances of:

Nov 14 18:25:34 Unraid1 shfs: cache disk full
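If you want to confirm it yourself: loop2 is normally the docker image on Unraid, but you can check the mapping and the btrfs error counters directly (a sketch, assuming the stock /var/lib/docker mount point):

# which backing file is behind each loop device (docker.img should appear)
losetup -l
# cumulative btrfs error counters for the docker image filesystem
btrfs device stats /var/lib/docker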

 

This might explain your issues with your containers.

 

I guess the experts could diagnose this better, but it seems you have issues with your cache.

11 hours ago, ChatNoir said:

Looks like your previous restart was December 31st. Maybe a full log after 10 1/2 months is not so bad? [...]

I wish my uptime was that high.  I'm guessing it had been maybe a month since I had rebooted it for a UPS upgrade. 

 

My cache disk is a 1TB 860 Evo, and the temporary files are moved off it every night.  I had just queued up a few too many downloads that night, and it was full for a couple of hours until the Mover cleared it out again.  That does line up with the general timeline of things going bad, though.  Could it have been some kind of cascading issue?  Cache fills > log memory fills > things start corrupting?
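For anyone hitting the same thing, this is roughly how I've been keeping an eye on it since (the mover path is the stock Unraid one, as far as I know):

# how full the cache pool is right now
df -h /mnt/cache
# kick the mover by hand instead of waiting for the schedule
/usr/local/sbin/mover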

7 hours ago, JorgeB said:

You should re-create the docker image.

Thanks.  That is what I ended up doing: disabled, deleted, and re-enabled Docker, then reinstalled my containers.  I decided to skip PiHole as it seemed to be the cause, but I understand now that the filled cache could have been the root cause of some form of corruption.
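Roughly the sequence, in case it helps someone later (the docker.img path is the default; yours may differ, and Docker has to be stopped under Settings > Docker first):

# with the Docker service stopped, remove the corrupt image
rm /mnt/user/system/docker/docker.img
# re-enabling Docker in the GUI creates a fresh docker.img;
# containers can then be reinstalled from Apps > Previous Apps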

