Zoidoid Posted January 8, 2023 (edited)

I was trying to install a new Docker image and it was failing. Also suddenly seeing warnings around Unraid about a lack of free disk space:

Warning: file_put_contents(): Only -1 of 100 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php on line 715

root@Tower:/var/log# df -h -t tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            32M   32M     0 100% /run
tmpfs           7.8G     0  7.8G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           256M   45M  212M  18% /var/log

Could someone kindly explain what /run is and why it might be full? I couldn't easily find anyone else who was experiencing this issue. Any help would be greatly appreciated. Thank you!

EDIT: It seems like this is the culprit:

/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f3e7a3a030a1d536b1147f7922564df9866fecca5a120a60d4330c4c263ae1fd/log.json

The log file isn't terribly interesting:

{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:12-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:17-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:22-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:27-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:32-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:37-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:42-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:47-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:52-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:57-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:02-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:07-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:12-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:17-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:22-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:28-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:33-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:38-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:43-05:00"}
{"level":"error","msg":"exec failed: write /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f3e7a3a030a1d536b1147f7922564df9866fecca5a120a60d4330c4c263ae1fd/.269433a179ee056127a893dee64e8f5c23120e57541a3105e974663c37885a31.pid: no space left on device","time":"2023-01-06T21:14:43-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:48-05:00"}

Edited January 8, 2023 by Zoidoid: more info/clarity
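For anyone else tracing the same symptom: the long hex directory name under .../moby/ is the ID of the container that owns that log, so it can be mapped back to a container name. A rough sketch, assuming the same paths as in the post above (the ID shown is just the example from this thread; substitute your own):

# Show how much each container's task directory is using on the /run tmpfs
du -sh /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/*

# The directory name is the full container ID; ask Docker which container it belongs to
docker inspect --format '{{.Name}}' f3e7a3a030a1d536b1147f7922564df9866fecca5a120a60d4330c4c263ae1fd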
Jason Stevenson Posted March 9, 2023

I just had the exact same thing happen to my server. Did you ever figure out why that log got so big? Or has it recurred for you?
Squid Posted March 10, 2023

If it's the exact same thing, then it looks to be a docker container that's continually stopping but is set to restart unless shut down. Checking the uptime on each container might help.
Jason Stevenson Posted March 10, 2023

Thanks, that's probably it. I had tdarr/tdarr node running and shut it down mid-conversion. I think it was set to autostart, so maybe it was doing as you say and filled the log...
vstylez_ Posted April 26, 2023

Solution 1

If you have the User Scripts plugin installed, you can use this command, adapted for log files:

find /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ -maxdepth 999999 -noleaf -type f -name "log.json" -exec rm -v "{}" \;

This runs on my server every 24 hours and I have yet to see the issue since. The Docker daemon will recreate the log so it can keep recording the health status of your application - so this applies to any Docker application that shows a health status.

Solution 2

Another option is to remove the health check of the running Docker image by adding this to the container's Extra Parameters:

--no-healthcheck

Solution 3

A third option is to increase the size of your tmpfs /run folder with the command below, though at some point it will still fill up. This command sets it to 85MB from the default 32MB:

mount -t tmpfs tmpfs /run -o remount,size=85M

I hope a built-in prune mechanism, or placing the logs somewhere else with a size cap, gets implemented. Removing the health status of a Docker application is not a good solution, and those with limited RAM cannot keep increasing the /run allowance just to hold logs without restarting.
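As a rough illustration of Solution 1 only (the script header and the daily schedule are assumptions, not part of the post above), a User Scripts entry could look something like this, set to run once a day:

#!/bin/bash
# Remove the containerd health-check logs that pile up on the /run tmpfs.
# Per the note above, the Docker daemon recreates log.json as needed,
# so this is safe to run while containers are up.
find /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ \
    -type f -name "log.json" -exec rm -v "{}" \;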
Father_Redbeard Posted June 8, 2023

@vstylez_ thank you so much for this. I was pulling my hair out trying to find the solution!
snoopy86 Posted June 8, 2023

Exactly, thanks for that! This should already be fixed in Unraid core so we don't need to run special scripts.

Sent from my SM-G998B using Tapatalk
urmyboyblue Posted June 9, 2023 (edited)

Same!! Been pulling my hair out trying to figure this out too! Thanks @vstylez_!! Is this a bug or something gone awry?

EDIT: Found what was causing my issues on another thread. Added the "--no-healthcheck" option under my Plex container and the logs stopped.

Edited June 9, 2023 by urmyboyblue
tomasaron Posted June 14, 2023

@vstylez_ I used solution 1, works like a charm, thank you.
SkilledAlpaca Posted July 7, 2023

On 4/26/2023 at 2:37 PM, vstylez_ said: (solutions quoted above)

This solved my issue until a fix is implemented by Lime. Thank you very much!
Rob Prouse Posted February 29

This was happening to me. In my case it was the DDNS container; stopping it solved my issue. Now I just need to figure out why it is filling the folder.

The process I used to determine which container was using all the space:

1. Log into the terminal.
2. Determine if /run is full by running df.
3. Figure out which directory is using the space by running du /run. In my case it was /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d0d2f1fc260167b0504709e92ee78b602a3acc9837735feff0d751c5cf55283d
4. Take a look in the directory to see what is taking up the space so you can fix it later. In my case it was thousands of .pid files.
5. Note the hash - that is the container ID.
6. Run docker container ls.
7. The first 12 characters of the hash should line up with one of the running containers listed.
8. Kill the offending container.
9. Fixed, buy yourself a beer.
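A rough command-line version of those steps, using the container ID from this post as the example (yours will differ):

# Confirm /run is full, then find which task directory is using the space
df -h /run
du -sh /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/* | sort -h

# The directory name is the container ID; its first 12 characters match an ID in docker container ls
docker container ls | grep d0d2f1fc2601

# Stop the offending container
docker stop d0d2f1fc2601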
valiente Posted March 2

On 2/29/2024 at 4:54 PM, Rob Prouse said: (diagnosis steps quoted above)

Had the exact same issue twice now - mine was with the erikvl87/languagetool container. It filled up my /run folder and basically took down all the other containers with it, as well as making the server unresponsive with 100% CPU usage.

Increased /run slightly - will keep an eye on it. This container received the --no-healthcheck treatment.