tmpfs /run full



I was trying to install a new Docker image and it kept failing. I've also suddenly been seeing warnings around Unraid about a lack of free disk space:

 

Warning: file_put_contents(): Only -1 of 100 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php on line 715

 

root@Tower:/var/log# df -h -t tmpfs
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            32M   32M     0 100% /run
tmpfs           7.8G     0  7.8G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           256M   45M  212M  18% /var/log

 

Could someone kindly explain what /run is and why it might be full? I couldn't easily find anyone else who was experiencing this issue. Any help would be greatly appreciated. Thank you!

 

EDIT:

Seems like this is the culprit:

 

/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f3e7a3a030a1d536b1147f7922564df9866fecca5a120a60d4330c4c263ae1fd/log.json

 

The log file isn't terribly interesting:

 

{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:12-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:17-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:22-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:27-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:32-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:37-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:42-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:47-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:52-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:13:57-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:02-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:07-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:12-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:17-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:22-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:28-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:33-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:38-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:43-05:00"}
{"level":"error","msg":"exec failed: write /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f3e7a3a030a1d536b1147f7922564df9866fecca5a120a60d4330c4c263ae1fd/.269433a179ee056127a893dee64e8f5c23120e57541a3105e974663c37885a31.pid: no space left on device","time":"2023-01-06T21:14:43-05:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-01-06T21:14:48-05:00"}

 

  • 2 months later...
  • 1 month later...

Solution 1

If you have the User Scripts plugin installed, you can use this command, adapted for log files:

 

find /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ -type f -name "log.json" -exec rm -v "{}" \;

 

This runs on my server every 24 hours, and I have yet to hit the issue since. The Docker daemon will recreate each log file so that it can keep logging the health status of your application, i.e. any Docker application that shows a health status.
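
As a sketch, the whole thing as a User Scripts job might look like the following; the daily schedule is set in the plugin's UI rather than in the script, and the path is the one from this thread:

#!/bin/bash
# Hypothetical daily cleanup script for the User Scripts plugin.
# Deletes the containerd shim health-check logs that fill the tmpfs /run mount;
# the Docker daemon recreates each log.json on the next health check.
find /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ -type f -name "log.json" -exec rm -v "{}" \;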

 

Solution 2

The other option is to disable the health check of the running Docker image by adding the following to the container's Extra Parameters field:

 

--no-healthcheck
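
Extra Parameters are passed through to the docker run command that Unraid builds for the container, so this is equivalent to the sketch below; the container name and image are illustrative placeholders, not from this thread:

# --no-healthcheck disables any HEALTHCHECK baked into the image
docker run -d --name=my-container --no-healthcheck my-image:latest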

 

Solution 3

A third option is to increase the size of the tmpfs /run mount with the command below, though at some point it will still fill up. This command sets it to 85 MB from the default 32 MB:

 

mount -t tmpfs tmpfs /run -o remount,size=85M
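
Note that /run is rebuilt on every boot, so the remount does not persist. A sketch of one way to make it stick, assuming the stock Unraid startup script at /boot/config/go:

# Append the remount to Unraid's go file so it is applied at every boot
echo 'mount -o remount,size=85M /run' >> /boot/config/go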
 

I hope a built-in prune mechanism, or moving these logs somewhere with a size cap, gets implemented. Removing the health status of a Docker application is not a good solution, and those with limited RAM cannot keep increasing the /run allowance just to hold logs.

  • 1 month later...

Same!! Been pulling my hair out trying to figure this out too! Thanks @vstylez_!!

 

Is this a bug or something gone awry?

 

EDIT

Found this on another thread; it turned out to be what was causing my issues. I added the "--no-healthcheck" option under my Plex container and the logs stopped.

 

  • 4 weeks later...
On 4/26/2023 at 2:37 PM, vstylez_ said:

This solved my issue until a fix is implemented by Lime Tech. Thank you very much!

  • 7 months later...

This was happening to me. In my case, it was the DDNS container. Stopping it solved my issue. Now I just need to figure out why it is filling the folder.

The process I used to determine which container was using all the space (the commands are sketched after this list):

  1. Log into the terminal
  2. Determine whether /run is full by running df
  3. Figure out which directory is using the space by running du /run
  4. In my case it was /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d0d2f1fc260167b0504709e92ee78b602a3acc9837735feff0d751c5cf55283d
  5. Take a look in the directory to see what is taking up the space so you can fix it later. In my case it was thousands of .pid files
  6. Note the hash; that is the container ID
  7. Run docker container ls
  8. The first 12 characters of the hash should line up with one of the running containers listed
  9. Kill the offending container
  10. Fixed; buy yourself a beer
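
A minimal sketch of those steps as shell commands; the container ID is a placeholder for whatever hash you find:

# Steps 1-2: check whether /run is full
df -h /run

# Step 3: find which directory under /run is using the space
du -ah /run | sort -h | tail -20

# Step 5: inspect the offending task directory (ID is a placeholder)
ls /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/<container-id>/

# Steps 7-8: match the first 12 characters of the hash against the running containers
docker container ls

# Step 9: stop the offending container
docker stop <name-or-id>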
On 2/29/2024 at 4:54 PM, Rob Prouse said:


I've had the exact same issue twice now; mine was with the erikvl87/languagetool container.

It filled up my /run folder and basically took down all the other containers with it, as well as making the server unresponsive at 100% CPU usage.

  1. Increased /run slightly; will keep an eye on it
  2. This container received the --no-healthcheck treatment
