Marshalleq Posted June 25

Hi all, does anyone know why this is happening? It eventually ends up stopping my VMs and Docker containers. Despite what the log below alludes to, the driver is actually installed. Is there some new method I am not aware of?

For me this log is at /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/239d6e30c683fd56432d60fe02198e271c6d56ad3d7b562a74e1c9e51be78d68, which is why it keeps filling up my drive: it's a small 32MB tmpfs partition. Thanks!

```
cat log.json
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using OCI specification file path: /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/239d6e30c683fd56432d60fe02198e271c6d56ad3d7b562a74e1c9e51be78d68/config.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Auto-detected mode as 'legacy'","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Ignoring 32-bit libraries for libcuda.so: [/usr/lib/libcuda.so.535.54.03]","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using prestart hook path: /usr/bin/nvidia-container-runtime-hook","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /dev/dri/card0 as /dev/dri/card0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /dev/dri/renderD128 as /dev/dri/renderD128","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 as /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate glvnd/egl_vendor.d/10_nvidia.json: pattern glvnd/egl_vendor.d/10_nvidia.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /etc/vulkan/icd.d/nvidia_icd.json as /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /etc/vulkan/implicit_layer.d/nvidia_layers.json as /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/15_nvidia_gbm.json: pattern egl/egl_external_platform.d/15_nvidia_gbm.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/10_nvidia_wayland.json: pattern egl/egl_external_platform.d/10_nvidia_wayland.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate X11/xorg.conf.d/10-nvidia.conf: pattern X11/xorg.conf.d/10-nvidia.conf not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounts:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 at /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /etc/vulkan/icd.d/nvidia_icd.json at /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /etc/vulkan/implicit_layer.d/nvidia_layers.json at /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Devices:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /dev/dri/card0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /dev/dri/renderD128","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Hooks:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /usr/bin/nvidia-ctk [nvidia-ctk hook create-symlinks --link ../card0::/dev/dri/by-path/pci-0000:81:00.0-card --link ../renderD128::/dev/dri/by-path/pci-0000:81:00.0-render]","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Applied required modification to OCI specification","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Forwarding command to runtime","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:28+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:28+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:33+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:33+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:39+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:39+12:00"}
```
Mstein999 Posted June 28

I had this issue as well. After some sleuthing around I found a workaround, and it seems to have stopped this log file from filling up with the NVIDIA entries: while editing the Plex Docker container, add "--no-healthcheck" to the Extra Parameters field under the Advanced View options.

I am still unsure what causes the logs, but since adding that I haven't had any issues and the file stays roughly the same size instead of filling up the tmpfs partition.
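For reference, the Extra Parameters field is, as far as I understand, simply appended to the docker run command Unraid builds from the template, so the same workaround should carry over to a container started by hand. A minimal sketch (the image name and the --runtime flag are only examples; match them to your own setup):

```
# --no-healthcheck disables the HEALTHCHECK baked into the image, which appears to be
# what triggers the repeated nvidia-container-runtime invocations seen in the log above.
docker run -d --name plex --runtime=nvidia --no-healthcheck lscr.io/linuxserver/plex
```

On Unraid itself, editing the template as described above is the cleaner route, since template changes survive the container being re-created.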
Marshalleq Posted June 28 (Author)

Thank you! Will give it a try.
Sascha_B Posted June 28

The same behavior occurred for me after updating to version 6.12.1. I noticed it after receiving these messages in the syslog:

```
Jun 28 02:04:34 Zeus elogind-daemon[1503]: Failed to save session data /run/systemd/sessions/c6: No space left on device
Jun 28 02:04:34 Zeus elogind-daemon[1503]: Failed to save user data /run/systemd/users/0: No space left on device
```

Adding "--no-healthcheck" resolved the issue, thank you!
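For anyone who wants to confirm it is the same problem before applying the workaround, a quick check of the tmpfs and the per-container runtime logs looks roughly like this (a sketch only; the mount point and paths are taken from the first post, and the container hashes will differ on your system):

```
df -h /run    # the small tmpfs that was filling up in the posts above
# list each container's runtime log.json, largest last
du -sh /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/*/log.json | sort -h
```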