
Partition `/run` is full


toddeTV


Posted

Hi there,

 

I noticed that my partition `/run` is full:

# df -h
Filesystem                Size  Used Avail Use% Mounted on
rootfs                     16G  2.9G   13G  19% /
tmpfs                      32M   32M   40K 100% /run
/dev/sda1                  29G  1.1G   28G   4% /boot
overlay                    16G  2.9G   13G  19% /lib/firmware
overlay                    16G  2.9G   13G  19% /lib/modules
devtmpfs                  8.0M     0  8.0M   0% /dev
tmpfs                      16G     0   16G   0% /dev/shm
cgroup_root               8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs                     128M  8.2M  120M   7% /var/log
tmpfs                     1.0M     0  1.0M   0% /mnt/disks
tmpfs                     1.0M     0  1.0M   0% /mnt/remotes
tmpfs                     1.0M     0  1.0M   0% /mnt/rootshare
[... array disks, caches and unassigned devices]
/dev/loop2                1.0G  4.8M  904M   1% /etc/libvirt
tmpfs                     3.2G     0  3.2G   0% /run/user/0

 

After inspecting the partition with the `ncdu` command, I found the following file, 30.9 MiB in size, filling almost all of `/run`:

/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3ae06a993a7d6ea7a8899040990a24182fd279c0331b0776b17d9807a000fa98/log.json
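For reference, the same culprit can be located without `ncdu` using plain coreutils; a minimal equivalent sketch:

# List the ten largest entries under /run (roughly what ncdu reports)
du -ah /run 2>/dev/null | sort -rh | head -n 10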

 

This file is bloated with nearly identical lines like these:

{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Using OCI specification file path: /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3ae06a993a7d6ea7a8899040990a24182fd279c0331b0776b17d9807a000fa98/config.json","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Auto-detected mode as 'legacy'","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Using prestart hook path: /usr/bin/nvidia-container-runtime-hook","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Applied required modification to OCI specification","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Forwarding command to runtime","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:16:53+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:16:58+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:04+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:09+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:14+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:19+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:24+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:29+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:34+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:39+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:44+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:49+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:54+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:17:59+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:18:04+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:18:09+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:18:15+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:18:20+01:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-02-06T01:18:25+01:00"}

[... Repeating the info line every five seconds !!!]

 

The path contains a Docker container ID, and the config path inside the log file contains the same ID. On my system this container is `Plex-Media-Server` from the repository `plexinc/pms-docker`.
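Mapping the long hash in that path back to a container name can be done with `docker ps`; a small sketch, using the ID prefix from the path above:

# Match the containerd task directory name to a running container
docker ps --no-trunc --format '{{.ID}}  {{.Names}}' | grep 3ae06a993a7d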

Stopping and restarting Plex clears `/run`, but the log flooding starts all over again, so manually restarting Plex every few days is not a solution.
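As a stopgap between restarts, the file can likely be emptied in place, since the runtime shim appends to it; a sketch using the container ID from above:

# Reclaim the space without restarting the container
truncate -s 0 /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3ae06a993a7d6ea7a8899040990a24182fd279c0331b0776b17d9807a000fa98/log.json
df -h /run   # usage should drop immediately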

 

The log lines shown above that flood the `/run` partition are not the same ones shown when I click on `Logs` in the UNRAID GUI.

 

So I have the following questions:

 

  1. How can I decrease the log level or verbosity of the Plex Docker container so that it does not write the same level `info` line every five seconds? (See the first sketch after this list.)
    1. I found this post that helps limit the log output to 50 MB, but `/run` has only 32 MB of space, so I guess the logs described there go somewhere other than `/run`. Maybe that setting applies to the `Logs` shown in the UNRAID GUI.
       
  2. Independently of question 1, can I make `/run` bigger? 32 MB seems a little small; 50 MB or 100 MB would be nicer, I guess. (See the second sketch after this list.)
    1. I found this post that explains how to increase `/run` on Ubuntu, but UNRAID (Slackware-based) seems to handle that folder differently. Or am I wrong?
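For question 1, the repeating messages match the output of the NVIDIA container runtime, which reads `/etc/nvidia-container-runtime/config.toml`. That file has a `log-level` setting; a hedged sketch, assuming the usual uncommented `log-level = "info"` line is present (and note that on UNRAID, changes outside `/boot` may not survive a reboot):

# Silence the recurring info lines from the NVIDIA runtime shim
sed -i 's/^log-level = "info"/log-level = "error"/' /etc/nvidia-container-runtime/config.toml

For question 2, `/run` is a tmpfs, so its ceiling can be raised with a live remount; on UNRAID, persisting this across reboots would likely mean adding the same command to `/boot/config/go`:

# Grow /run to 128 MB without a reboot (reverts on restart)
mount -o remount,size=128M /run
df -h /run   # verify the new size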

 

Thanks for your help in advance.

  • 3 months later...
Posted

I'm running into an issue that seems to bring my unRAID dockers to their knees because one of my dockers keeps dumping this into the log.json:
 

{"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-05-31T14:08:12-07:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-31T14:08:12-07:00"}

 

Once it consumes all 32 MB (that's right, megabytes), /run is full and I start losing dockers; today's casualty is my Ghost CMS running a non-profit's blog. Docker issues are the hardest for me to track down: containers are easy to set up in unRAID, but they are complex behind the scenes, and unRAID gives little insight into that complexity.

 

Anyone have any pointers?
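In the meantime, a cron-driven watchdog is one way to keep /run from filling completely; a rough sketch, assuming GNU coreutils and that the containerd shim logs are the culprit:

#!/bin/bash
# Rough watchdog (run from cron every few minutes): when /run is over 80%
# full, truncate any oversized containerd shim logs like the one above.
USAGE=$(df --output=pcent /run | tail -n 1 | tr -dc '0-9')
if [ "$USAGE" -ge 80 ]; then
    find /run/docker/containerd -name log.json -size +1M -exec truncate -s 0 {} +
fi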

Posted
19 minutes ago, digitaljock said:

I'm running into an issue that seems to bring my unRAID dockers to their knees because one of my dockers keeps dumping this into the log.json:
 

{"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-05-31T14:08:12-07:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-05-31T14:08:12-07:00"}

 

Once it consumes all 32 MB (that's right, megabytes), /run is full and I start losing dockers; today's casualty is my Ghost CMS running a non-profit's blog. Docker issues are the hardest for me to track down: containers are easy to set up in unRAID, but they are complex behind the scenes, and unRAID gives little insight into that complexity.

 

Anyone have any pointers?

Turns out this is my PostgreSQL container. Why in the world does it care at all about `NVIDIAContainerCLIConfig`?
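A plausible explanation: if the Nvidia driver plugin (or daemon.json) sets `nvidia` as Docker's default runtime, every container passes through the NVIDIA runtime shim whether it uses the GPU or not. A quick check, sketched with plain Docker commands:

# Show which runtime each running container uses; "nvidia" here would
# explain NVIDIA log lines even for a GPU-less container like PostgreSQL
docker inspect --format '{{.Name}} -> {{.HostConfig.Runtime}}' $(docker ps -q)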

Posted

I have the same issue, but for me it is my Plex container, I believe, which indeed uses the GPU. This issue is new, though.

 

Are any of you running on rc6? And do you have the Nvidia driver plugin installed, by any chance?
