Warning: file_put_contents(): Only -1 of 100 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php on line 714


odie


On Tuesday I got this error, and all my Docker containers showed as "Not available". After turning the VM and Docker services off, I changed my DNS to Cloudflare in Network Settings. As of Thursday I have yet to see the error again, and the Plex container has been running since Tuesday.

Edited by Joey0live

Just had this issue this morning

 

Seemed to be the Plex container writing a ton of info logs

 

{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-10T08:17:17+01:00"}
{"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-06-10T08:17:22+01:00"}


It's writing this into log.json over and over until it runs out of space; in my case the file is here:
/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ee915362cd78cd496a851176c392358304b634dd99e244d1700b5c2b825e06dc/log.json

 

I have deleted the log.json file for now and it has been recreated, which buys some time.
I have to go out, but when I get back home I'll see if I can work out what's going on.
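For anyone else buying time the same way, here is a small sketch of that workaround (the function name and the 10 MB threshold are my own, not from the thread). Truncating the file in place, rather than deleting it, keeps the runtime's open file descriptor valid, so on a tmpfs the space is freed immediately instead of lingering until the writer closes the file:

```shell
#!/bin/sh
# Hypothetical cleanup sketch: truncate oversized log.json files
# under the containerd task directory instead of deleting them.
truncate_big_logs() {
    root="$1"    # e.g. /run/docker/containerd/daemon
    find "$root" -name log.json -size +10M 2>/dev/null |
    while read -r f; do
        echo "truncating $f"
        : > "$f"    # zero the file in place, keeping the inode/fd valid
    done
}

truncate_big_logs "${LOG_ROOT:-/run/docker/containerd/daemon}"
```

This is only a stopgap, of course; the runtime immediately starts refilling the file.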

Edited by PhilBarker
Link to comment

I am running 6.11.5 after some stability issues with BTRFS going read-only. I see 6.12.2 has come out, so I might try upgrading again to see if it's better than 6.12.1.

 

The issue seems to be that the new version of the Nvidia drivers on 6.11.5, together with Plex, is filling tmpfs with this.

I can't work out how to get rid of it; it just keeps looping until the log fills to 31M.
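A couple of quick checks (plain coreutils and findutils, nothing Unraid-specific) can confirm it's the tmpfs behind /run that is filling up, and point at the offending task directory:

```shell
#!/bin/sh
# how full is the tmpfs that backs /run?
df -h /run

# largest log.json files under the containerd task directory,
# biggest first (paths won't exist on a non-Docker host; errors ignored)
find /run/docker -name log.json -exec du -h {} + 2>/dev/null | sort -rh | head
```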

 

{"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-07-01T08:49:53+10:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-07-01T08:49:53+10:00"}
[...the same "Running with config" / "Using low-level runtime" pair repeats every ~5 seconds...]

 


The full problem seems to be the Plex container filling the logs, even after updating Unraid and the Nvidia driver.

 

{"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"Path\": \"/usr/bin/nvidia-container-runtime-hook\",\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Using OCI specification file path: /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/da7bb470257853b13128471ed642f52c08147d7d2125cb39dff7ccc8243663af/config.json","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Auto-detected mode as 'legacy'","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Ignoring 32-bit libraries for libcuda.so: [/usr/lib/libcuda.so.535.54.03]","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Using prestart hook path: /usr/bin/nvidia-container-runtime-hook","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /dev/dri/card0 as /dev/dri/card0","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /dev/dri/card1 as /dev/dri/card1","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /dev/dri/renderD128 as /dev/dri/renderD128","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /dev/dri/renderD129 as /dev/dri/renderD129","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 as /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate glvnd/egl_vendor.d/10_nvidia.json: pattern glvnd/egl_vendor.d/10_nvidia.json not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /etc/vulkan/icd.d/nvidia_icd.json as /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Selecting /etc/vulkan/implicit_layer.d/nvidia_layers.json as /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/15_nvidia_gbm.json: pattern egl/egl_external_platform.d/15_nvidia_gbm.json not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/10_nvidia_wayland.json: pattern egl/egl_external_platform.d/10_nvidia_wayland.json not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate X11/xorg.conf.d/10-nvidia.conf: pattern X11/xorg.conf.d/10-nvidia.conf not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Mounts:","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Mounting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 at /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Mounting /etc/vulkan/icd.d/nvidia_icd.json at /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Mounting /etc/vulkan/implicit_layer.d/nvidia_layers.json at /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Devices:","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Injecting /dev/dri/renderD129","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Injecting /dev/dri/card1","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Hooks:","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Injecting /usr/bin/nvidia-ctk [nvidia-ctk hook create-symlinks --link ../card1::/dev/dri/by-path/pci-0000:01:00.0-card --link ../renderD129::/dev/dri/by-path/pci-0000:01:00.0-render]","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Applied required modification to OCI specification","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Forwarding command to runtime","time":"2023-07-01T23:22:13+10:00"}
{"level":"info","msg":"Running with config:\n{\n  \"AcceptEnvvarUnprivileged\": true,\n  \"NVIDIAContainerCLIConfig\": {\n    \"Root\": \"\"\n  },\n  \"NVIDIACTKConfig\": {\n    \"Path\": \"nvidia-ctk\"\n  },\n  \"NVIDIAContainerRuntimeConfig\": {\n    \"DebugFilePath\": \"/dev/null\",\n    \"LogLevel\": \"info\",\n    \"Runtimes\": [\n      \"docker-runc\",\n      \"runc\"\n    ],\n    \"Mode\": \"auto\",\n    \"Modes\": {\n      \"CSV\": {\n        \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n      },\n      \"CDI\": {\n        \"SpecDirs\": null,\n        \"DefaultKind\": \"nvidia.com/gpu\",\n        \"AnnotationPrefixes\": [\n          \"cdi.k8s.io/\"\n        ]\n      }\n    }\n  },\n  \"NVIDIAContainerRuntimeHookConfig\": {\n    \"Path\": \"/usr/bin/nvidia-container-runtime-hook\",\n    \"SkipModeDetection\": false\n  }\n}","time":"2023-07-01T23:22:14+10:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-07-01T23:22:14+10:00"}
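One thing the config dump above shows is the runtime's LogLevel set to "info". If the sheer log volume is the immediate problem, it may be possible to quiet it through the NVIDIA Container Toolkit's config file. This is a sketch based on the toolkit's standard layout, not something confirmed in this thread, and on Unraid the Nvidia driver plugin may regenerate this file on boot or update:

```toml
# /etc/nvidia-container-runtime/config.toml (standard toolkit location)
[nvidia-container-runtime]
# default is "info"; "warning" would suppress the repeated
# "Running with config" / "Using low-level runtime" messages
log-level = "warning"
```

This wouldn't fix whatever is re-invoking the runtime in a loop, only the rate at which log.json grows.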


Hey folks, I've been having the same issue for 2-3+ months now. I was hoping that updating to 6.12 would help, but sadly that's not the case.

 

Every time it happens, the Plex container is "unhealthy", and restarting the Plex container fixes the issue.

 

As reported in previous messages, I am also using Nvidia hardware acceleration, and I am on the latest Nvidia driver.

 

I will update to 6.12.2 (currently on 6.12.1) and report back.

I doubt that the Docker version change introduced in 6.12 and reverted in 6.12.2 will help, though, if that's what we're after.

 

 

Edited by JetXS

It appears that the issue is quite widespread (multiple posts refer to it in different ways on these forums), and it sounds like it all follows the same pattern.

Should we put together a template of questions to ask?
 

Quote
  • What Unraid version are you running?
  • Are you using Docker?
  • Are you using Plex?
  • If yes, are you using the official Plex Docker container?
  • Are you running an Nvidia GPU?
  • If yes, what driver version is installed?
  • When the issue occurs, does the Plex container's health go into an "Unhealthy" state?
  • When the issue occurs, does Unraid Connect fail to show a status?
  • When the issue occurs, can an update check on Plugins be executed?
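Most of those answers can be pulled from a terminal in one go. A hedged sketch — the Unraid version file path, the nvidia-smi query flags, and the Docker health filter are assumed from stock Unraid and standard tooling, and each probe falls back to "n/a" if the tool isn't present:

```shell
#!/bin/sh
# gather the template answers; every probe degrades to "n/a"
UNRAID_VER=$(cat /etc/unraid-version 2>/dev/null || echo "n/a")
DRIVER_VER=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null || echo "n/a")
UNHEALTHY=$(docker ps --filter health=unhealthy --format '{{.Names}}' 2>/dev/null || echo "n/a")

echo "Unraid version : $UNRAID_VER"
echo "Nvidia driver  : $DRIVER_VER"
echo "Unhealthy containers: ${UNHEALTHY:-none}"
```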

 

On my end...

 

Edited by JetXS
